Overview
Test activities involve three key components: the environment (where the tests are conducted), the protocol (defining what tests are performed and how), and evaluation metrics (used to assess the test results). Through this service, we assist customers in designing a computational environment tailored to running the digital tests required for their software components. Specifically, this service:
- Defines the data (and metadata) requirements needed to support the test
- Identifies the necessary software environment (e.g., operating system, libraries)
- Selects an appropriate simulator and defines simulated environments, if required
- Chooses remote access tools, if applicable
As a result, we deliver a comprehensive design of the testing environment, including specifications such as memory requirements, GPU/TPU acceleration, and necessary libraries. This environment can be deployed on either the agrifoodTEF digital infrastructure or the customer's computing infrastructure (i.e., on premises).
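As an illustration only (not part of the actual deliverable), the sketch below shows how such a specification could be checked automatically inside the deployed environment. The memory threshold, pinned library versions, and the use of psutil and PyTorch are hypothetical assumptions chosen for the example.

```python
# Illustrative sketch: verifies that a deployed test environment matches a
# hypothetical specification (16 GB RAM, CUDA-capable GPU, pinned libraries).
# Thresholds and library names are placeholders, not actual service outputs.
import importlib.metadata

import psutil  # assumed available for hardware introspection
import torch   # assumed part of the customer's software stack

MIN_RAM_GB = 16                      # hypothetical memory requirement
REQUIRED_LIBS = {"numpy": "1.26.0"}  # hypothetical pinned library versions


def check_environment() -> list[str]:
    """Return human-readable problems; an empty list means the environment
    matches the (illustrative) specification."""
    problems = []
    ram_gb = psutil.virtual_memory().total / 1024**3
    if ram_gb < MIN_RAM_GB:
        problems.append(f"only {ram_gb:.1f} GB RAM, {MIN_RAM_GB} GB required")
    if not torch.cuda.is_available():
        problems.append("no CUDA-capable GPU detected")
    for lib, wanted in REQUIRED_LIBS.items():
        try:
            installed = importlib.metadata.version(lib)
        except importlib.metadata.PackageNotFoundError:
            problems.append(f"{lib} not installed")
            continue
        if installed != wanted:
            problems.append(f"{lib} {installed} installed, {wanted} expected")
    return problems


if __name__ == "__main__":
    for problem in check_environment():
        print("FAIL:", problem)
```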
More about the service
This service provides users with a tailored definition of this testing environment, including hardware specifications (for physical hardware or virtual machines), a customized runtime environment (e.g., in the form of a Docker container or virtual machine), and testing tools (such as preconfigured simulators or curated datasets).
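When the runtime environment is delivered as a Docker container, the tests can be launched programmatically against it. The minimal sketch below assumes a hypothetical image name, dataset path, and test command; it is an illustration of the idea, not an actual deliverable.

```python
# Minimal sketch: run a test suite inside a preconfigured Docker container,
# mounting a curated dataset from the host. The image name, paths, and test
# command are hypothetical placeholders.
import docker  # docker-py SDK, assumed installed on the host

client = docker.from_env()

container = client.containers.run(
    image="agrifoodtef/navigation-test-env:latest",  # hypothetical image
    command="pytest /workspace/tests",               # hypothetical test entry point
    volumes={"/data/field_dataset": {"bind": "/workspace/data", "mode": "ro"}},
    detach=True,
)

container.wait()                 # block until the test run finishes
print(container.logs().decode())  # collected test output
```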
During the initial interview, the hardware and software requirements will be refined, and the customer will demonstrate an example execution of the software component(s). Additional interviews may be scheduled, if necessary, to further refine the requirements or clarify the functioning of the components being tested.
At the end of the service, the customer will receive a complete definition of the digital testing environment, including hardware specifications (for physical hardware or virtual machines) and/or a customized runtime environment (e.g., a Docker container or a tailored virtual machine).
For example, to account for the kinematics of a customer’s weeding robot, we use a virtual replica of a skid-steering mobile robot in the Gazebo simulator, customized to closely mirror the customer’s actual machine. Since the robot is equipped with a LiDAR sensor and a camera system, we add the corresponding virtual sensors to the Gazebo model and ensure that they interact correctly with the simulated environment.
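A setup of this kind is typically brought up through a launch file. The sketch below assumes a ROS 2 workflow with Gazebo Classic and the gazebo_ros package; the entity name and SDF model path are hypothetical placeholders for the customer-specific replica.

```python
# Sketch of a ROS 2 launch file that spawns the virtual skid-steering robot
# (LiDAR and camera defined in its SDF description) into a running Gazebo
# Classic instance via the gazebo_ros package.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description() -> LaunchDescription:
    spawn_robot = Node(
        package="gazebo_ros",
        executable="spawn_entity.py",
        arguments=[
            "-entity", "weeding_robot",                  # hypothetical entity name
            "-file", "/models/weeding_robot/model.sdf",  # hypothetical SDF path
            "-x", "0.0", "-y", "0.0", "-z", "0.1",
        ],
        output="screen",
    )
    return LaunchDescription([spawn_robot])
```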
By leveraging the flexibility of simulation, we can vary the testing environment, adjusting row lengths (e.g., 3, 5, or up to 10 meters) and narrowing the row spacing (e.g., from 3 meters down to 1 meter), to evaluate the navigation system’s performance in increasingly challenging scenarios.
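To make this concrete, the sketch below generates crop-row layouts for a simulated field from a row length and a row spacing, emitting SDF include elements that could be pasted into a Gazebo world file. The crop model URI and the plant spacing are hypothetical assumptions for the example.

```python
# Illustrative generator for simulated field layouts: given a row length and
# row spacing, emit SDF <include> elements placing a crop model along
# parallel rows. Model URI and plant spacing are hypothetical placeholders.

def crop_row_includes(num_rows: int, row_length_m: float, row_spacing_m: float,
                      plant_spacing_m: float = 0.25) -> str:
    """Return SDF snippets placing crop plants along num_rows parallel rows."""
    elements = []
    plants_per_row = int(row_length_m / plant_spacing_m) + 1
    for row in range(num_rows):
        y = row * row_spacing_m
        for i in range(plants_per_row):
            x = i * plant_spacing_m
            elements.append(
                f"<include><uri>model://crop_plant</uri>"        # hypothetical model
                f"<pose>{x:.2f} {y:.2f} 0 0 0 0</pose></include>"
            )
    return "\n".join(elements)


# Example: a narrow-row scenario with 5 m rows spaced 1 m apart.
if __name__ == "__main__":
    print(crop_row_includes(num_rows=4, row_length_m=5.0, row_spacing_m=1.0))
```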