
Overview
This service concerns the processing of data collected during digital testing of a system, i.e., experimentation done in a computational environment. Digital testing enables easy collection of experimental data; however, such data then need to be suitably processed to extract information about system performance. The service applies a predefined set of performance metrics (defined either by the customer or via service S00178) to the collected data. Data collection during test execution is expected to have been done by the customer (on request, this activity can be supported by AgrifoodTEF via service S00183). The service includes all the activities needed to apply the performance metrics, including the development of any software component needed for data processing or data preparation. The output of the service is a report providing the quantitative results of the application of the performance metrics to the experimental data. The report also includes an evaluation of these results aimed at highlighting possible issues and pointing out lines for improvement.
More about the service
The evaluation included in the report highlights issues in system performance and identifies the elements of the processing toolchain of the system under test that are most likely responsible for them. The customer can therefore use the report not only to understand whether the performance of the system measures up to expectations, but also to decide where to focus additional development effort to improve it.
The service begins with a preliminary exchange of information, followed by the transmission to AgrifoodTEF of the data to be processed. At the end of the service, the customer receives a report containing the results of applying the performance metrics to the data, together with an analysis of those results and indications of possible ways to improve system performance.
A set of previously designed metrics (e.g., via service S00178) is used to track specific aspects of the performance of the customer’s solution. The chosen metrics consider, in particular, the misplacement of the (simulated) robot from the row centreline, the number of waypoints correctly visited, and the number of collisions with plants. Metrics are computed for different runs and trajectories across the reference data, then plotted and tabulated in a detailed report, which also includes qualitative videos showing the navigation trajectory followed by the system in simulation.
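As a rough illustration of how such metrics could be computed from logged trajectory data, the sketch below implements three simplified analogues: mean misplacement from an (assumed) straight row centreline, waypoints visited within a tolerance, and collisions with circular plant footprints. The function names, tolerances, and geometric simplifications are all assumptions for illustration; the actual metric definitions come from the customer or from service S00178.

```python
import math

# Hypothetical metric implementations; thresholds and geometry are
# illustrative assumptions, not the definitions used by the service.

def cross_track_errors(trajectory, row_y=0.0):
    """Misplacement of the robot from the row centreline.

    The row centreline is assumed to be the horizontal line y = row_y;
    a real implementation would use the surveyed row geometry.
    """
    return [abs(y - row_y) for _, y in trajectory]

def waypoints_visited(trajectory, waypoints, tol=0.5):
    """Count waypoints that some trajectory point passes within `tol` metres of."""
    visited = 0
    for wx, wy in waypoints:
        if any(math.hypot(x - wx, y - wy) <= tol for x, y in trajectory):
            visited += 1
    return visited

def collision_count(trajectory, plants, plant_radius=0.2):
    """Count trajectory points falling inside any plant's circular footprint."""
    return sum(
        1
        for x, y in trajectory
        if any(math.hypot(x - px, y - py) < plant_radius for px, py in plants)
    )

# Example run: a short simulated trajectory along a row at y = 0.
trajectory = [(0.0, 0.05), (1.0, -0.10), (2.0, 0.20), (3.0, 0.00)]
waypoints = [(1.0, 0.0), (3.0, 0.0), (5.0, 0.0)]
plants = [(2.0, 0.25)]

errors = cross_track_errors(trajectory)
print(f"mean misplacement: {sum(errors) / len(errors):.3f} m")
print(f"waypoints visited: {waypoints_visited(trajectory, waypoints)} / {len(waypoints)}")
print(f"collisions: {collision_count(trajectory, plants)}")
```

In practice each such metric would be evaluated per run and per trajectory, and the resulting values aggregated into the tables and plots of the report.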
The report also highlights the most significant deviations of the system from expected performance and links them to particular situations or events, as documented in the (simulated) sensor data collected during test execution and in the datasets used to compute the performance metrics.