This research focuses on the benefits of computer-vision enhancement through an image pre-processing optimization algorithm in which numerous variations of prevalent image-modification tools are applied, independently and in combination, to specific sets of images. The class with the highest returned precision score is then assigned to the feature, often improving both the number of features captured and the precision values. Transformations such as embossing, sharpening, and contrast adjustment can reveal feature edge lines that the neural network could not previously capture, allowing potential increases in overall system accuracy beyond what typical manual image pre-processing achieves. Just as a neural network weighs numerous feature characteristics when determining accuracy, the enhanced network determines the highest classification confidence among the unaltered original images and their permutations produced by the various pre-processing and enhancement techniques.
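A minimal sketch of this permutation search is shown below. It assumes a hypothetical classifier `classify(image)` returning a `(label, confidence)` pair as a stand-in for whatever network the system actually uses, and a small, illustrative set of enhancement variants; the actual algorithm may use many more combinations.

```python
# Sketch of the pre-processing permutation search: generate enhancement variants,
# score each with the classifier, and keep the most confident prediction.
from PIL import Image, ImageFilter, ImageEnhance

def candidate_variants(image):
    """Yield the original image plus a few common enhancement permutations."""
    yield "original", image
    yield "emboss", image.filter(ImageFilter.EMBOSS)
    yield "sharpen", image.filter(ImageFilter.SHARPEN)
    yield "contrast", ImageEnhance.Contrast(image).enhance(1.5)
    # Combined permutation: sharpen first, then boost contrast.
    yield "sharpen+contrast", ImageEnhance.Contrast(
        image.filter(ImageFilter.SHARPEN)).enhance(1.5)

def best_prediction(image, classify):
    """Run the classifier on every variant and keep the most confident result."""
    best = None
    for name, variant in candidate_variants(image):
        label, confidence = classify(variant)  # hypothetical model call
        if best is None or confidence > best[2]:
            best = (name, label, confidence)
    return best  # (variant_name, predicted_label, confidence)
```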
Since it is essential for Computer Vision systems to perform reliably in safety-critical applications such as autonomous vehicles, their robustness to naturally occurring image perturbations needs to be evaluated. More specifically, the performance of Computer Vision systems needs to be linked to image quality, which has received little research attention so far. Aberrations of a camera system are always spatially variable over the Field of View, which may influence the performance of Computer Vision systems depending on the degree of local aberration. The goal is therefore to evaluate the performance of Computer Vision systems under the effects of defocus while taking the spatial domain into account. Large-scale Autonomous Driving datasets are degraded by a parameterized optical model to simulate driving scenes under physically realistic defocus. Using standard evaluation metrics, the Spatial Recall Index (SRI), and the new Spatial Precision Index (SPI), the performance of Computer Vision systems on these degraded datasets is compared with the optical performance of the applied optical model. A correlation could be observed between the spatially varying optical performance and the spatial performance of Instance Segmentation systems.
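The paper's parameterized optical model is not reproduced here; the sketch below only illustrates the general idea of spatially varying defocus by increasing Gaussian blur with distance from the image centre across the Field of View. The function name, blur levels, and radial profile are illustrative assumptions, not the authors' model.

```python
# Approximate spatially varying defocus: blend several uniformly blurred copies
# of the image according to the normalised distance from the optical axis.
import numpy as np
from scipy.ndimage import gaussian_filter

def radial_defocus(image, max_sigma=3.0, levels=4):
    """image: float array (H, W) or (H, W, C); returns a radially defocused copy."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Normalised distance from the centre: 0 on the optical axis, 1 in the corners.
    r = np.hypot((yy - cy) / cy, (xx - cx) / cx) / np.sqrt(2.0)
    # Pre-compute a stack of uniformly blurred copies with increasing sigma.
    sigmas = np.linspace(0.0, max_sigma, levels)
    blurred = [gaussian_filter(image, sigma=(s, s) + (0,) * (image.ndim - 2))
               for s in sigmas]
    # Pick the blur level for each pixel from its radial position.
    idx = np.clip((r * (levels - 1)).astype(int), 0, levels - 1)
    out = np.empty_like(image)
    for i in range(levels):
        mask = idx == i
        out[mask] = blurred[i][mask]
    return out
```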
In-Cabin Monitoring Systems (ICMS) are functional safety systems designed to monitor the driver and/or passengers inside an automobile. The Driver Monitoring System (DMS) and the Occupant Monitoring System (OMS) are two variants of ICMS. A DMS focuses solely on the driver to monitor events such as fatigue, inattention, and vital health signs. An OMS monitors all occupants, including unattended children. Besides safety, ICMS can also augment security and comfort through driver identification and personalization. In-cabin monitoring brings a unique set of challenges from an imaging perspective in terms of higher analytics needs, smaller form factor, low-light vision, and color accuracy. This paper discusses these challenges and provides an efficient implementation on Texas Instruments' TDA2Px automotive processor. The paper also details a novel implementation of the RGB+IR sensor format processing commonly used in these systems, enabling a premium ICMS using the TDA2Px system-on-chip (SoC).
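The following is only a hedged illustration of the general RGB+IR idea, not the TDA2Px implementation. A simplified 2x2 R-G-B-IR colour filter array is assumed; production RGB-IR sensors often use 4x4 patterns, and the IR-subtraction coefficients shown are placeholders for values that would come from sensor calibration.

```python
# Split a simplified RGB-IR mosaic into colour and IR planes, then subtract an
# illustrative fraction of the IR leakage from each colour channel.
import numpy as np

def split_rgbir(raw):
    """raw: (H, W) mosaic laid out as [[R, G], [B, IR]] per 2x2 cell (assumed)."""
    r  = raw[0::2, 0::2].astype(np.float32)
    g  = raw[0::2, 1::2].astype(np.float32)
    b  = raw[1::2, 0::2].astype(np.float32)
    ir = raw[1::2, 1::2].astype(np.float32)
    return r, g, b, ir

def correct_ir_contamination(r, g, b, ir, k=(0.3, 0.2, 0.4)):
    """Remove the IR signal that leaks into R/G/B; k is illustrative, not calibrated."""
    r_c = np.clip(r - k[0] * ir, 0, None)
    g_c = np.clip(g - k[1] * ir, 0, None)
    b_c = np.clip(b - k[2] * ir, 0, None)
    # Colour planes feed the RGB analytics path; the IR plane supports low-light vision.
    return r_c, g_c, b_c, ir
```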