With cars driving autonomously on roads, functional safety assumes critical importance in avoiding hazardous situations for humans in the car and on the road. ISO 26262 defines Automotive Safety Integrity Levels (ASIL), ranging from QM (lowest) to ASIL-D (highest), based on the severity and probability of a defect causing harm to human life. This paper explores functional safety requirements and solutions for software systems in autonomous cars across four broad aspects. The first aspect covers the use of redundancy at various levels to ensure that the failure of one system does not affect the overall operation of the car; it explores redundancy via multiple sensors and diverse processing of data to arrive at functionally safe results. Based on these redundancy requirements, the second aspect proposes a hardware (SoC) and software architecture that can help meet them, covering the definition of a software framework, task scheduling, and tool usage to ensure systematic faults are prevented at the development stage. Autonomous driving systems will be complex, and expecting all software modules to comply with the highest functional safety level may not be feasible. The third aspect therefore explores freedom from interference (FFI) via hardware and software mechanisms, such as firewalls and the MMU, that allow safe and non-safe sub-systems to co-exist and operate according to their specifications. The final aspect covers the use of software and hardware diagnostics to monitor, detect, and correct random faults found at run-time in hardware modules, exploring diagnostic features such as ECC, CRC, and BIST to help detect and avoid runtime failures.
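To make the diagnostics aspect concrete, the sketch below shows how a CRC can detect runtime corruption of data in transit, one of the mechanisms the abstract names (alongside ECC and BIST). This is an illustrative CRC-8 implementation (polynomial 0x07), not the specific diagnostic configuration of the paper; the `sensor-frame` payload is a hypothetical example.

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Compute a CRC-8 checksum (polynomial x^8 + x^2 + x + 1)."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

# Sender appends the CRC; the receiver recomputes it over the payload.
msg = b"sensor-frame"          # hypothetical payload
check = crc8(msg)

# Any single-bit flip changes the checksum, so the fault is detected
# at run-time and the frame can be discarded or retransmitted.
corrupted = bytes([msg[0] ^ 0x01]) + msg[1:]
assert crc8(corrupted) != check
```

A CRC of this kind protects data integrity on buses and in memory transfers; ECC additionally allows single-bit errors to be corrected rather than only detected.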
Reinforcement learning is considered a strong AI paradigm that can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by Google DeepMind's successful demonstrations of learning Atari games and the game of Go, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance because it is difficult to pose autonomous driving as a supervised learning problem, due to strong interactions with the environment, including other vehicles, pedestrians, and roadworks. As this is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in TORCS, an open-source 3D car racing simulator. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interactions with other vehicles.
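The core idea the abstract relies on, an agent learning from interaction and mistakes rather than labeled examples, can be sketched with tabular Q-learning on a toy environment. This is a minimal illustration of the reinforcement learning paradigm only; the paper's framework uses deep networks with recurrence and attention, and the 1-D "track" environment and all parameter values here are hypothetical.

```python
import random

def train_q_learning(n_states=5, n_actions=2, episodes=500,
                     alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy 1-D track: action 0 moves left,
    action 1 moves right; reaching the rightmost state gives reward 1
    and ends the episode. No labels are given: the agent learns purely
    from trial, error, and the reward signal."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy exploration: occasionally try a random action
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a: q[s][a])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # temporal-difference update toward the bootstrapped target
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q_learning()
# Greedy policy per non-terminal state: the agent learns to drive right.
policy = [max(range(2), key=lambda a: q[s][a]) for s in range(4)]
```

Deep reinforcement learning replaces the Q-table with a neural network so the same trial-and-error principle scales to high-dimensional observations such as camera frames in TORCS.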