FisheyeDistanceNet [1] proposed a self-supervised monocular depth estimation method for fisheye cameras with a large field of view (> 180°). To achieve scale-invariant depth estimation, FisheyeDistanceNet supervises depth map predictions over multiple scales during training. To overcome this multi-scale training bottleneck, we incorporate self-attention layers and a robust loss function [2] into FisheyeDistanceNet. The general adaptive robust loss function helps obtain sharp depth maps without the need to train over multiple scales, and its hyperparameters are learned during training, aiding optimization in terms of convergence speed and accuracy. We also ablate the importance of Instance Normalization over Batch Normalization in the network architecture. Finally, we generalize the network to be invariant to camera viewpoints by training on multiple perspectives from the front, rear, and side cameras. The proposed algorithmic improvements, FisheyeDistanceNet++, result in a 30% relative improvement in RMSE while reducing training time by 25% on the WoodScape dataset. We also obtain state-of-the-art results on the KITTI dataset compared to other self-supervised monocular methods.
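To make the loss concrete, the sketch below implements the general adaptive robust loss of [2] in PyTorch, with the shape parameter alpha and the scale c treated as learnable scalars. The class and parameter names are our own illustration rather than the FisheyeDistanceNet++ code, and the special-case limits (alpha = 0, 2) handled by the reference implementation are omitted.

```python
import torch
import torch.nn as nn

class AdaptiveRobustLoss(nn.Module):
    """Minimal sketch of the general adaptive robust loss of [2]:

        rho(x, alpha, c) = (|alpha - 2| / alpha) * (((x / c)^2 / |alpha - 2| + 1)^(alpha / 2) - 1)

    alpha (shape) and c (scale) are learnable scalars so the loss can adapt
    during training; the exact special cases alpha = 0, 2 are not handled here.
    """

    def __init__(self, alpha_init=1.0, scale_init=1.0):
        super().__init__()
        # Learnable shape parameter and log-scale (scale kept positive via exp).
        self.alpha = nn.Parameter(torch.tensor(float(alpha_init)))
        self.log_scale = nn.Parameter(torch.tensor(float(scale_init)).log())

    def forward(self, residual):
        c = self.log_scale.exp()                       # scale > 0
        alpha = self.alpha
        b = (alpha - 2.0).abs() + 1e-8                 # avoid division by zero
        d = alpha + 1e-8
        loss = (b / d) * (((residual / c) ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)
        return loss.mean()

# Usage sketch: penalize the photometric residual between warped and target images.
# criterion = AdaptiveRobustLoss()
# loss = criterion(warped_image - target_image)
```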
A road edge is defined as the borderline where the road surface changes to a non-road surface. Most existing solutions for road edge detection use only a single front camera to capture the input image; hence, the system's performance and robustness suffer. Our efficient CNN, trained on a very diverse dataset, yields more than 98% accuracy in semantic segmentation of the road surface, which is then used to obtain road edge segments in the individual camera images. Afterward, the raw road edges from the multiple cameras are transformed into world coordinates, and RANSAC curve fitting is used to obtain the final road edges on both sides of the vehicle for driving assistance. Road edge extraction is also computationally efficient, as it reuses the same generic road segmentation output that is computed alongside the other semantic segmentation classes for driving assistance and autonomous driving. The RoadEdgeNet algorithm is designed for automated driving in series production, and we discuss the various challenges and limitations of the current algorithm.
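As an illustration of the curve-fitting step, the sketch below shows a generic RANSAC polynomial fit applied to road-edge points that have already been transformed into world coordinates. The function name, thresholds, and polynomial degree are assumptions made for the example, not the production RoadEdgeNet implementation.

```python
import numpy as np

def ransac_polyfit(points_xy, degree=2, n_iters=200, inlier_thresh=0.15,
                   min_samples=None, rng=None):
    """Illustrative RANSAC polynomial fit for road-edge points in world
    coordinates.

    points_xy : (N, 2) array of (x, y) edge points, x = longitudinal distance,
                y = lateral offset from the vehicle.
    Returns the polynomial coefficients with the largest inlier support.
    """
    rng = np.random.default_rng(rng)
    x, y = points_xy[:, 0], points_xy[:, 1]
    min_samples = min_samples or (degree + 1)
    best_coeffs, best_inliers = None, 0

    for _ in range(n_iters):
        idx = rng.choice(len(x), size=min_samples, replace=False)
        coeffs = np.polyfit(x[idx], y[idx], degree)            # candidate curve
        residuals = np.abs(np.polyval(coeffs, x) - y)          # lateral error per point
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best_coeffs = np.polyfit(x[inliers], y[inliers], degree)  # refit on inliers

    return best_coeffs

# Example: merge edge points from several cameras, then fit each side separately.
# left_edge_coeffs = ransac_polyfit(np.vstack([edges_front_left, edges_rear_left]))
```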
Free space is an essential component of any autonomous driving system. It describes the region around the vehicle, typically the road surface, that is free of obstacles. In practice, however, free space should not solely describe the area where a vehicle can plan a trajectory. For instance, on a single-lane road with two-way traffic, the opposite lane should not be included as an area where the vehicle can plan a driving path, although it will be detected as free space. In this paper, we introduce a new conceptual representation called DriveSpace, which incorporates semantic understanding and the context of the scene. We formulate it as a combination of dense 3D reconstruction and semantic segmentation, and we use a graphical model approach to fuse them and learn the drivable area. As the drivable region is highly dependent on the situation and the dynamics of other objects, it remains somewhat subjective. We analyze various DriveSpace scenarios and propose a general method to detect all of them. As it is a new concept, there are no datasets available for development and testing; however, we are creating one to show quantitative results for the proposed method.
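To illustrate the fusion idea at its simplest, the sketch below projects reconstructed 3D points and their semantic labels into a bird's-eye grid and keeps cells that are both flat and labeled as road. It only demonstrates the geometric/semantic fusion step, not the graphical-model learning or the contextual reasoning (e.g., excluding the opposite lane) described above, and all names and thresholds are assumptions.

```python
import numpy as np

def fuse_drivespace(points_xyz, semantic_labels, road_label=0,
                    grid_size=(200, 200), cell_m=0.2, max_obstacle_height=0.3):
    """Toy fusion of dense 3D reconstruction with semantic segmentation into a
    bird's-eye grid (illustrative only, not the paper's formulation).

    points_xyz      : (N, 3) reconstructed points in vehicle coordinates (z up).
    semantic_labels : (N,) per-point class from the segmentation network.
    Returns a 2D grid where 1 marks cells that are both road and obstacle-free.
    """
    h, w = grid_size
    grid = np.zeros(grid_size, dtype=np.uint8)
    # Grid centered on the vehicle: x forward, y left.
    gx = (points_xyz[:, 0] / cell_m + h // 2).astype(int)
    gy = (points_xyz[:, 1] / cell_m + w // 2).astype(int)
    valid = (gx >= 0) & (gx < h) & (gy >= 0) & (gy < w)

    road = valid & (semantic_labels == road_label) & (points_xyz[:, 2] < max_obstacle_height)
    obstacle = valid & (points_xyz[:, 2] >= max_obstacle_height)

    grid[gx[road], gy[road]] = 1          # geometrically flat and semantically road
    grid[gx[obstacle], gy[obstacle]] = 0  # obstacles override free space
    return grid
```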
Autonomous driving is an active area of research in the automotive market. The development of automated functions such as highway driving and autonomous parking requires a robust platform for the development and safety qualification of the system. In this context, virtual simulation platforms are key enablers for the development of algorithms, software, and hardware components. In this paper, we discuss the virtual simulation platforms available in the market, including open-source car simulators, offerings from commercial automotive vendors, and gaming platforms. We discuss the key factors that make a virtual platform suitable for automated driving function development. Based on the analysis of the various simulation platforms, we conclude the paper with a proposal for a two-stage approach to automated driving function development.
Highly Automated Driving is an active research and development area in the automotive market for next-generation series production. The development of automated functions such as highway driving or parking is partitioned across edge and central ECUs as part of the vehicle E/E network. The paper introduces typical vehicle E/E topologies, with emphasis on the ADAS/AD domain and its multiple intra- and inter-domain connectivity options. Within the ADAS/AD domain, various E/E system architecture topologies with different ECU partitionings are being explored to optimize various parameters. The paper explains these options by analyzing two example system topologies: Topology-I enables an incremental approach on top of legacy ECUs, while Topology-II enables a cost-optimized solution. The paper also explains the functional partitioning of automated driving functionality (e.g., highway driving, parking) within the ADAS/AD domain. This involves splitting an automated function in a given topology across multiple ECUs in terms of perception (camera, radar, and lidar), localization, fusion, driving policy, motion planning, and control. The paper lists parameters to consider for the given topologies, e.g., number of ECUs, intra-domain connection bandwidth, functional safety, incremental development with legacy ECUs, cost, and ease of software development. The paper ends by summarizing these function-partitioning choices and parameter trade-offs, enabling users to analyze custom E/E architectures.
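As a purely illustrative way to organize the trade-off analysis, the sketch below records the listed evaluation parameters for the two example topologies in a small data structure; the field names and values are hypothetical placeholders, not figures from the paper.

```python
from dataclasses import dataclass

@dataclass
class EETopology:
    """Hypothetical record of the evaluation parameters discussed above;
    the example values are illustrative placeholders, not measurements."""
    name: str
    num_ecus: int
    intra_domain_bandwidth_gbps: float
    functional_safety_target: str
    reuses_legacy_ecus: bool        # incremental development on legacy ECUs
    relative_cost: float            # normalized system cost
    sw_development_ease: int        # 1 (hard) .. 5 (easy)

topology_i = EETopology("Topology-I (incremental, legacy ECUs)",
                        num_ecus=5, intra_domain_bandwidth_gbps=1.0,
                        functional_safety_target="mixed ASIL per ECU",
                        reuses_legacy_ecus=True, relative_cost=1.0,
                        sw_development_ease=4)
topology_ii = EETopology("Topology-II (cost-optimized, central ECU)",
                         num_ecus=2, intra_domain_bandwidth_gbps=10.0,
                         functional_safety_target="ASIL-D central ECU",
                         reuses_legacy_ecus=False, relative_cost=0.8,
                         sw_development_ease=3)

for topology in (topology_i, topology_ii):
    print(topology)
```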