This paper presents the design of an accurate rain model for the commercially available Anyverse automotive simulation environment. The model incorporates the physical properties of rain, and a process to validate the model against real rain is proposed. Because path tracing through a particle-based model is computationally expensive, a second, more computationally efficient model is also proposed. In the second model, rain is represented by a combination of a particle-based model and an attenuation field. The attenuation field is fine-tuned against the particle-only model to minimize the difference between the two models.
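As an illustration of the fine-tuning idea, a minimal sketch is given below: it fits a single-coefficient Beer-Lambert attenuation field to transmission values produced by a particle-only reference. The parameterization, the mock transmission values, and the function names are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Hypothetical measurements: mean transmitted luminance (relative to a clear
# scene) produced by the particle-only rain model at several target distances.
# In practice these would come from path-traced renders; here they are mock data.
distances_m = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
particle_transmission = np.array([0.97, 0.93, 0.86, 0.74, 0.55])

def attenuation_model(distance_m, sigma):
    """Beer-Lambert style attenuation field: T(d) = exp(-sigma * d)."""
    return np.exp(-sigma * distance_m)

# Fit the single attenuation coefficient by linear least squares in log space,
# minimizing the difference to the particle-only reference.
sigma_fit = -np.polyfit(distances_m, np.log(particle_transmission), 1)[0]

residual = particle_transmission - attenuation_model(distances_m, sigma_fit)
print(f"fitted sigma = {sigma_fit:.4f} 1/m, max |residual| = {np.abs(residual).max():.3f}")
```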
Autonomous driving plays a crucial role in preventing accidents, and modern vehicles are equipped with multimodal sensor systems and AI-driven perception and sensor fusion. These features are, however, not stable over a vehicle's lifetime due to various forms of degradation. This introduces an inherent, yet unaddressed risk: once vehicles are in the field, their individual exposure to environmental effects leads to unpredictable behavior. The goal of this paper is to raise awareness of automotive sensor degradation. Various effects exist which, in combination, may have a severe impact on AI-based processing and ultimately on the customer domain. Failure mode and effects analysis (FMEA)-type approaches are used to structure a complete coverage of relevant automotive degradation effects. Sensors include cameras, RADARs, LiDARs, and other modalities, both exterior and in-cabin. Sensor robustness on its own is a well-known topic addressed by DV/PV. However, this is not sufficient, and various degradations are examined that go significantly beyond currently tested environmental stress scenarios. In addition, the combination of sensor degradation and its impact on AI processing is identified as a validation gap. An outlook on future analysis and on ways to detect relevant sensor degradations is also presented.
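To illustrate how an FMEA-type structuring of degradation effects could be organized, the sketch below scores hypothetical failure modes with the classic severity/occurrence/detection scheme. The listed effects, sensors, and scores are purely illustrative assumptions, not results from the paper.

```python
from dataclasses import dataclass

@dataclass
class DegradationMode:
    """One sensor degradation effect scored FMEA-style (1 = best, 10 = worst)."""
    sensor: str
    effect: str
    severity: int    # impact on downstream AI perception if it occurs
    occurrence: int  # likelihood over the vehicle lifetime
    detection: int   # difficulty of detecting the degradation in the field

    @property
    def rpn(self) -> int:
        # Classic FMEA risk priority number: severity x occurrence x detection.
        return self.severity * self.occurrence * self.detection

# Illustrative entries only; real scores would come from an FMEA workshop.
modes = [
    DegradationMode("camera", "lens haze / coating degradation", 7, 5, 6),
    DegradationMode("LiDAR",  "cover-glass scratches",           6, 4, 5),
    DegradationMode("RADAR",  "radome dirt / paint buildup",     5, 6, 4),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.sensor:6s} {m.effect:35s} RPN={m.rpn}")
```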
Simulation is an established tool to develop and validate camera systems. The goal of autonomous driving is pushing simulation into a more important and fundamental role for safety, validation, and coverage of billions of miles. Realistic camera models are moving more and more into focus, as simulations need to be more than photo-realistic: they need to be physically realistic, representing the actual camera system on board the self-driving vehicle in all relevant physical aspects, and this is true not only for cameras but also for radar and lidar. But as camera simulations become more and more realistic, how is this realism tested? Actual, physical camera samples are tested in laboratories following standards such as ISO 12233, EMVA 1288, or the developing P2020, with test charts such as dead leaves, slanted edge, or OECF charts. In this article we propose to validate the realism of camera simulations by simulating the physical test bench setup and then comparing the synthetic simulation result with physical results from the real-world test bench using the established normative metrics and KPIs. While this procedure is used sporadically in industrial settings, we are not aware of a rigorous presentation of these ideas in the context of realistic camera models for autonomous driving. After describing the process, we give concrete examples for several different measurement setups using MTF and SFR, and show how these can be used to characterize the quality of different camera models.
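To illustrate the comparison step, the sketch below computes a simplified slanted-edge SFR (differentiate the edge spread function, window the result, FFT, normalize to DC) for a mock "real" and a mock "simulated" edge profile and compares their MTF50 values. The edge profiles and sampling are assumptions for illustration, not data from the described test bench.

```python
import numpy as np

def sfr_from_edge_profile(esf: np.ndarray):
    """Simplified slanted-edge SFR: differentiate the edge spread function,
    window the line spread function, FFT, and normalize to DC."""
    lsf = np.diff(esf)
    lsf = lsf * np.hanning(lsf.size)      # reduce truncation artifacts
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size)     # cycles per sample
    return freqs, mtf / mtf[0]

def mtf50(freqs: np.ndarray, mtf: np.ndarray) -> float:
    """Frequency at which the MTF first drops below 0.5."""
    below = np.where(mtf < 0.5)[0]
    return float(freqs[below[0]]) if below.size else float(freqs[-1])

# Mock edge profiles; in practice these would be extracted from the real
# test-bench capture and from the simulated render of the same chart.
x = np.linspace(-5.0, 5.0, 200)
esf_real = 0.5 * (1.0 + np.tanh(x / 0.8))   # sharper edge ("real" camera)
esf_sim  = 0.5 * (1.0 + np.tanh(x / 1.0))   # softer edge ("simulated" camera)

f_r, mtf_r = sfr_from_edge_profile(esf_real)
f_s, mtf_s = sfr_from_edge_profile(esf_sim)
print(f"MTF50 real: {mtf50(f_r, mtf_r):.4f} cy/sample")
print(f"MTF50 sim : {mtf50(f_s, mtf_s):.4f} cy/sample")
```

Comparing such normative KPIs between the physical and the simulated test bench is the kind of check the proposed validation procedure relies on.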
Camera-based advanced driver-assistance systems (ADAS) require the mapping from image coordinates into world coordinates to be known. The process of computing that mapping is geometric calibration. This paper provides a series of tests that may be used to assess the quality of a geometric calibration and to compare model forms:
1. Image Coordinate System Test: Validation that different teams are using the same image coordinates.
2. Reprojection Test: Validation of a camera's calibration by forward projecting targets through the model onto the image plane.
3. Projection Test: Validation of a camera's calibration by inverse projecting points through the model out into the world.
4. Triangulation Test: Validation of a multi-camera system's ability to locate a point in 3D.
The potential configurations for these tests are driven by automotive use cases. These tests enable comparison and tuning of different calibration models for an as-built camera.
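As a sketch of what the reprojection test could look like for a simple pinhole model, the example below forward-projects surveyed 3D targets through an assumed calibration and reports the pixel error against detected image positions. The intrinsics, target coordinates, and detections are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative pinhole intrinsics (focal lengths and principal point in pixels)
# and an identity pose: camera at the world origin, aligned with world axes.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

def project(points_w: np.ndarray) -> np.ndarray:
    """Forward-project Nx3 world points to Nx2 pixel coordinates."""
    p_cam = (R @ points_w.T).T + t
    p_img = (K @ p_cam.T).T
    return p_img[:, :2] / p_img[:, 2:3]

# Surveyed 3D target positions and their detected image positions
# (mock values; in practice these come from the calibration targets).
targets_w  = np.array([[ 0.5,  0.2, 10.0],
                       [-1.0,  0.0, 20.0],
                       [ 2.0, -0.5, 30.0]])
detections = np.array([[690.1, 380.2],
                       [589.8, 359.9],
                       [706.5, 343.4]])

# Reprojection test: per-target and RMS pixel error between the model's
# forward projection and the detected points.
errors = np.linalg.norm(project(targets_w) - detections, axis=1)
print(f"per-target error [px]: {np.round(errors, 2)}, RMS = {np.sqrt(np.mean(errors**2)):.2f}")
```

The projection and triangulation tests follow the same pattern in the opposite direction: rays are cast from image points out into the world (for one or several cameras) and compared against known 3D positions.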