Simulation-based testing offers clear advantages. The simulation environment can provide ground-truth object labels such as bounding boxes (as a replacement for the perception module), allowing the prediction and planning modules to be tested in isolation. Nevertheless, the critical challenge lies in designing the test case generation and management module. With a limited test budget, the test case generator should outsmart the AD system under test by creating scenarios that lead to undesired behavior (e.g., a collision). At the same time, to demonstrate sufficient coverage of the operational design domain (ODD), test case generation should be coverage-driven while ensuring diversity.
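A simple way to make test generation coverage-driven is to discretize each ODD parameter into equivalence classes and enumerate their combinations. The sketch below illustrates this with a full factorial over hypothetical parameters (the parameter names and class values are illustrative assumptions, not part of any standard):

```python
from itertools import product

# Hypothetical ODD parameters, each discretized into equivalence classes.
param_bins = {
    "front_vehicle_speed_kmh": [25, 35, 50],
    "ego_speed_kmh": [30, 50, 70],
    "weather": ["clear", "rain", "fog"],
}

def coverage_driven_test_cases(bins):
    """Enumerate one test case per combination of equivalence classes.

    A full factorial guarantees that every combination of discretized
    conditions is exercised at least once.
    """
    names = list(bins)
    for values in product(*(bins[n] for n in names)):
        yield dict(zip(names, values))

cases = list(coverage_driven_test_cases(param_bins))
# 3 * 3 * 3 = 27 test cases, covering every combination of classes once
```

In practice, full factorials explode combinatorially, which is why coverage criteria such as pairwise (t-way) combination are used instead; the principle of enumerating discretized conditions stays the same.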
Within industry, there is an ongoing effort to standardize scenario description languages: working groups such as ASAM have extended the OpenScenario description language in its second edition to allow characterizing more abstract scenarios. Intuitively, a "concrete" scenario is one where all parameters are fixed (e.g., the initial speed of the front vehicle equals 30 km/h), while an "abstract" scenario (in the simplest case) is one where parameters range over intervals (e.g., the initial speed of the front vehicle lies within 25-50 km/h).
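The relationship between the two notions can be sketched as follows: an abstract scenario fixes the parameter ranges, and a concrete scenario is derived by sampling a value inside each range (the parameter names here are illustrative assumptions):

```python
import random

# Hypothetical abstract scenario: each parameter is a (low, high) range
# instead of a fixed value.
abstract_scenario = {
    "front_vehicle_speed_kmh": (25.0, 50.0),
    "time_headway_s": (0.5, 3.0),
}

def concretize(abstract, rng=random):
    """Derive a concrete scenario by fixing every ranged parameter."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in abstract.items()}

concrete = concretize(abstract_scenario)
# e.g. {'front_vehicle_speed_kmh': 31.7, 'time_headway_s': 1.9}
```

One abstract scenario thus induces an (uncountably) large family of concrete scenarios, which is precisely what makes it a useful unit for test management.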
What is the benefit of random testing?
The introduction of abstract scenarios naturally allows the use of random testing. Random testing is a black-box software testing technique where programs are tested by generating random, independent inputs. The output of the system under test is observed, and when an abnormal situation occurs (e.g., a collision, getting stuck in traffic), a report is sent to the developer. This is already a step beyond scenario replay, i.e., simply recording scenarios and using the recordings as stimuli to the system.
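The loop described above can be sketched in a few lines. The simulator and the failure condition below are placeholders (any resemblance to a real AD stack is an assumption made purely for illustration):

```python
import random

def run_simulation(scenario):
    """Placeholder for the simulator + AD stack; returns an outcome label."""
    # Illustrative failure condition: flag a collision when the gap is
    # implausibly tight at high speed.
    if scenario["time_headway_s"] < 0.6 and scenario["front_vehicle_speed_kmh"] > 45:
        return "collision"
    return "ok"

def random_testing(abstract, budget=1000, seed=0):
    """Sample independent concrete scenarios and report abnormal outcomes."""
    rng = random.Random(seed)
    reports = []
    for _ in range(budget):
        scenario = {n: rng.uniform(lo, hi) for n, (lo, hi) in abstract.items()}
        outcome = run_simulation(scenario)
        if outcome != "ok":
            reports.append((scenario, outcome))  # forwarded to the developer
    return reports

reports = random_testing({
    "front_vehicle_speed_kmh": (25.0, 50.0),
    "time_headway_s": (0.5, 3.0),
})
```

Each sampled scenario is independent of the previous ones; nothing learned from one run influences the next, which is exactly the limitation discussed below.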
Nevertheless, random testing is an undirected testing method, meaning that it is not geared towards bug finding. The art of bug finding is very similar to how a human solves a puzzle: based on some hints, we try the parts that are more likely to lead to success. Reflected in the test case generation process, if some configurations already cause the autonomous vehicle to demonstrate close-to-undesired behavior, it may be worth investigating similar configurations (e.g., by slightly adjusting the parameters) to find where undesired behavior manifests. Standard random testing lacks this capability.
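The "adjust similar configurations" idea corresponds to guided (optimization-based) search. A minimal sketch, assuming a placeholder fitness function where the simulator reports the ego vehicle's minimum distance to the front vehicle (0 meaning collision), is a local search that mutates the best scenario found so far:

```python
import random

def min_distance(scenario):
    """Placeholder fitness from the simulator: smallest distance between the
    ego and the front vehicle during the run (0 means collision)."""
    # Purely illustrative model: shorter headway and faster lead vehicle
    # shrink the gap.
    gap = (scenario["time_headway_s"] * 10.0
           - (scenario["front_vehicle_speed_kmh"] - 25.0) * 0.2)
    return max(0.0, gap)

def guided_search(start, bounds, budget=200, step=0.1, seed=0):
    """Local search: mutate the best-so-far scenario, keep improvements."""
    rng = random.Random(seed)
    best, best_fit = dict(start), min_distance(start)
    for _ in range(budget):
        cand = {
            n: min(max(best[n] + rng.uniform(-step, step) * (hi - lo), lo), hi)
            for n, (lo, hi) in bounds.items()
        }
        fit = min_distance(cand)
        if fit < best_fit:      # closer to undesired behavior: keep it
            best, best_fit = cand, fit
        if best_fit == 0.0:     # collision found
            break
    return best, best_fit

best, best_fit = guided_search(
    start={"front_vehicle_speed_kmh": 30.0, "time_headway_s": 2.0},
    bounds={"front_vehicle_speed_kmh": (25.0, 50.0), "time_headway_s": (0.5, 3.0)},
)
```

Unlike random testing, each new candidate here depends on the most promising scenario seen so far; tools such as ComOpT combine this kind of optimization with combinatorial coverage.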
Current challenges researched in academia and at Fraunhofer IKS
Researchers in academia, in industrial research, and at Fraunhofer IKS have been investigating testing methods that go beyond simple scenario replay and random testing, as well as other enabling technologies. Here are some of the challenges ahead.
- Behavior specification of other road users: One cannot expect other road users to always behave adversarially. How can one create a reasonable model of other road users, such that the challenging scenarios generated by the testing tool remain realistic?
- Combination of multiple test methods: In the current state of the art, selecting a test strategy amounts to choosing a bag of techniques. How can one establish the rationale behind the chosen testing strategy, and how can one combine multiple test methods (e.g., scenario replay plus optimization-based methods) for maximum benefit?
- Simulation fidelity: How can one quantify simulation fidelity such that simulated miles can contribute to real-world validation efforts?
We at Fraunhofer IKS are actively providing solutions to these application-driven challenges, aiming to benefit our partners.
[1] ASAM OpenScenario 2.0, https://www.asam.net/standards...
[2] Li, C., Cheng, C.-H., Sun, T., Chen, Y., & Yan, R. (2022). ComOpT: Combination and Optimization for Testing Autonomous Driving Systems. In 2022 International Conference on Robotics and Automation (ICRA), pp. 7738-7744. IEEE.