Simulation plays a key role in the development of Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) stacks. A growing number of simulation solutions address development, test, and validation of these systems at unprecedented scale and with a large variety of features. Transparency with respect to the fitness of features for a given task is often hard to come by, and separating marketing claims from product performance facts is a challenge. New players, on both the user and vendor side, will lead to further diversification. Evolving standards, regulatory requirements, verification and validation practices, and similar factors will add to the list of criteria that may be relevant for identifying the best-fit solution for a given task. There is a need to evaluate and measure a solution's compliance with these criteria on the basis of objective test scenarios, so that different simulation solutions can be compared quantitatively. The goal shall be a standardized catalog of tests which simulation solutions must undergo before they can be considered fit (or certified) for a certain use case. Here, we propose a novel evaluation framework and detailed testing procedure as a first step towards quantifying simulation quality. We illustrate the use of this method with results from an initial implementation, highlighting the top-level properties Determinism, Real-time Capability, and Standards Compliance. We hope to raise awareness that simulation quality is not a nice-to-have feature but a central concern for the whole spectrum of stakeholders, and that it needs to be quantified for the development of safe autonomous driving.
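To make the Determinism and Real-time Capability properties concrete, the following is a minimal sketch of how such checks could be scripted against a simulator under test. The `run_scenario` adapter is hypothetical and stands in for whatever API the evaluated simulation solution exposes; it is assumed to return a list of (timestamp, ego_pose) samples. This is an illustration of the testing idea, not the framework's actual implementation.

```python
# Sketch of two property checks from the proposed evaluation framework:
# determinism (identical results for identical seed and scenario) and
# real-time capability (simulated time advances at least as fast as wall time).
import math
import time


def run_scenario(scenario_path: str, seed: int):
    # Hypothetical adapter around the simulator under test.
    # Expected to return [(t_0, pose_0), (t_1, pose_1), ...].
    raise NotImplementedError


def determinism_check(scenario_path: str, seed: int, tol: float = 1e-9) -> bool:
    """Run the same scenario twice with a fixed seed and compare trajectories."""
    traj_a = run_scenario(scenario_path, seed)
    traj_b = run_scenario(scenario_path, seed)
    if len(traj_a) != len(traj_b):
        return False
    for (t_a, pose_a), (t_b, pose_b) in zip(traj_a, traj_b):
        if abs(t_a - t_b) > tol:
            return False
        if not all(math.isclose(a, b, abs_tol=tol) for a, b in zip(pose_a, pose_b)):
            return False
    return True


def real_time_factor(scenario_path: str, seed: int) -> float:
    """Ratio of simulated duration to wall-clock duration (>= 1.0 means real-time capable)."""
    start = time.perf_counter()
    traj = run_scenario(scenario_path, seed)
    wall_time = time.perf_counter() - start
    sim_duration = traj[-1][0] - traj[0][0]
    return sim_duration / wall_time
```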
Automated driving functions, such as highway driving and parking assist, are increasingly being deployed in high-end cars with the goal of realizing self-driving vehicles using deep learning (DL) techniques such as convolutional neural networks (CNNs) and Transformers. Camera-based perception, driver monitoring, driving policy, and radar and lidar perception are a few examples of functions built with DL algorithms in such systems. Traditionally, custom software provided by silicon vendors is used to deploy these DL algorithms on devices. This custom software is highly optimized for its (limited) set of supported features, but it is not flexible enough to evaluate a variety of deep learning model architectures quickly. In this paper, we propose using open-source deep learning inference frameworks to quickly deploy any model architecture without a performance or latency penalty. We have implemented the proposed solution with three open-source inference frameworks (TensorFlow Lite, TVM/Neo-AI-DLR, and ONNX Runtime) on Linux running on ARM.
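As a flavor of what deploying a model with one of these frameworks looks like, the following is a minimal sketch using ONNX Runtime on a CPU execution provider, as would be available on an ARM Linux target. The model file name, input dtype, and the use of a random dummy input for a latency probe are assumptions for illustration; a real deployment would feed preprocessed camera frames and may use a vendor-specific execution provider.

```python
# Minimal ONNX Runtime inference sketch for an ARM Linux target (illustrative).
import time

import numpy as np
import onnxruntime as ort

# Load the model; CPUExecutionProvider is the default on a plain ARM Linux build.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the model's first input and build a dummy tensor (dynamic dims set to 1).
input_meta = session.get_inputs()[0]
input_shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy_input = np.random.rand(*input_shape).astype(np.float32)

# Run once and report output shapes and latency.
start = time.perf_counter()
outputs = session.run(None, {input_meta.name: dummy_input})
latency_ms = (time.perf_counter() - start) * 1000.0
print(f"Output shapes: {[o.shape for o in outputs]}, latency: {latency_ms:.2f} ms")
```

TensorFlow Lite and TVM/Neo-AI-DLR expose comparably small runtime APIs, which is what makes swapping model architectures straightforward compared with vendor-specific deployment toolchains.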
Over the years, autonomous driving has evolved by leaps and bounds, and much of that progress is owed to the use of deep learning in computer vision. Even in modern autonomous driving, multi-lane detection and projection remains a challenge that needs further work. Earlier approaches relied on conventional thresholding techniques combined with graphical models, or on RANSAC and polynomial fitting. More recently, direct regression using deep learning models has also been explored. In this paper, we propose a hybrid approach that uses a deep learning model for initial pixel-level lane detection and conditional random fields (CRFs) for modeling the lanes. The method provides a 15% improvement in lane detection and projection over conventional models.
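The two-stage idea can be sketched as follows: a CNN produces per-pixel lane probabilities, and a CRF then enforces spatial and appearance consistency before the lanes are extracted. The sketch below uses the pydensecrf package as one possible dense-CRF implementation; the kernel parameters, array names, and the specific CRF formulation are illustrative and not necessarily those used in the paper.

```python
# Minimal sketch: refine CNN lane probabilities with a dense CRF (pydensecrf).
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax


def refine_lane_mask(probs: np.ndarray, image: np.ndarray, iters: int = 5) -> np.ndarray:
    """probs: (n_labels, H, W) softmax scores from the CNN (lane vs. background);
    image: (H, W, 3) uint8 camera frame; returns an (H, W) label map."""
    n_labels, height, width = probs.shape
    crf = dcrf.DenseCRF2D(width, height, n_labels)
    crf.setUnaryEnergy(unary_from_softmax(probs))    # -log(p) unary potentials
    crf.addPairwiseGaussian(sxy=3, compat=3)         # smoothness kernel
    crf.addPairwiseBilateral(                        # appearance (color) kernel
        sxy=60, srgb=10, rgbim=np.ascontiguousarray(image), compat=5
    )
    q = crf.inference(iters)                         # mean-field inference
    return np.argmax(q, axis=0).reshape(height, width)
```

The refined label map can then be grouped into individual lane instances and projected into the ego frame, which is where the reported 15% improvement over conventional pipelines is measured.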