Deep Neural Networks (DNNs) are critical for real-time imaging applications, including autonomous vehicles. DNNs are often trained and validated with images that originate from a limited number of cameras, each of which has its own hardware and image signal processing (ISP) characteristics. However, in most real-time embedded systems, the input images come from a variety of cameras with different ISP pipelines and often include perturbations due to varying scene conditions. Data augmentation methods are commonly exploited to enhance the robustness of such systems. Alternatively, methods such as out-of-distribution detection are employed to flag input images that are unfamiliar to the trained networks. Despite these efforts, DNNs remain systems whose operational boundaries cannot be easily defined. One reason is that, while training and benchmark image datasets include samples with a variety of perturbations, there is a lack of research on metrics of input image quality suited to DNNs and on a universal method for relating quality to DNN performance using meaningful quality metrics. This paper addresses this lack of DNN-specific quality metrification and introduces a framework that systematically modifies image quality attributes to relate input image quality to DNN performance.
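To make the kind of quality sweep such a framework implies concrete, the following Python sketch degrades one image quality attribute at a time at increasing severity and records classification accuracy per level. The chosen attributes, severity levels, and the `model.predict` interface are illustrative assumptions, not the paper's actual protocol.

```python
# Sketch of a quality-attribute sweep: degrade one attribute at a time at
# increasing severity, then record DNN accuracy per level.
# `model` is a hypothetical classifier, not from the paper.
import numpy as np

def add_gaussian_noise(img, sigma):
    """Additive Gaussian noise; img is float32 in [0, 1]."""
    noisy = img + np.random.normal(0.0, sigma, img.shape).astype(np.float32)
    return np.clip(noisy, 0.0, 1.0)

def scale_brightness(img, gain):
    """Global brightness scaling, clipped back to [0, 1]."""
    return np.clip(img * gain, 0.0, 1.0)

# Example attributes and severity levels (assumptions for illustration).
ATTRIBUTES = {
    "noise":      (add_gaussian_noise, [0.01, 0.05, 0.10, 0.20]),
    "brightness": (scale_brightness,   [1.00, 0.75, 0.50, 0.25]),
}

def quality_sweep(model, images, labels):
    """Return {attribute: [(severity, accuracy), ...]} over the sweep."""
    results = {}
    for name, (degrade_fn, levels) in ATTRIBUTES.items():
        curve = []
        for level in levels:
            degraded = np.stack([degrade_fn(im, level) for im in images])
            preds = model.predict(degraded)  # hypothetical classifier API
            curve.append((level, float(np.mean(preds == labels))))
        results[name] = curve
    return results
```

Plotting each resulting curve (accuracy versus severity, per attribute) is one way to expose the operational boundaries the abstract refers to.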
Subjective quality assessment remains the most reliable way to evaluate image quality, but it is tedious and costly. Objective quality evaluation therefore offers a trade-off by providing a computational approach to predicting image quality. Even though a large literature exists for 2D image and video quality evaluation, the quality of 360-degree images is still under-explored, and one can question the effectiveness of 2D quality metrics on this new type of content. To this end, we propose to study the possible improvement of well-known 2D quality metrics using important features related to 360-degree content, i.e., equator bias and visual saliency. The performance evaluation is conducted on two databases containing various distortion types. The obtained results show a slight improvement in performance, highlighting some problems inherently related to both the database content and the subjective evaluation approach used to obtain the observers' quality scores.
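To illustrate the equator-bias idea, here is a minimal Python sketch in the spirit of WS-PSNR-style weighting: a per-pixel 2D error map over the equirectangular image is weighted by the cosine of latitude, optionally blended with a precomputed saliency map. The linear blend and the `alpha` parameter are assumptions for illustration, not the paper's exact formulation.

```python
# Equator-bias weighting of a 2D metric (PSNR here), optionally blended
# with a saliency map. Images are (H, W) or (H, W, C) arrays.
import numpy as np

def equator_weights(h, w):
    """cos(latitude) weight per row of an equirectangular image."""
    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2  # [-pi/2, pi/2]
    return np.tile(np.cos(lat)[:, None], (1, w))

def weighted_psnr(ref, dist, saliency=None, alpha=0.5, max_val=255.0):
    """PSNR with equator-bias weights; `saliency` is a (H, W) map in [0, 1]."""
    weights = equator_weights(*ref.shape[:2])
    if saliency is not None:
        weights = (1 - alpha) * weights + alpha * saliency  # simple blend
    err = (ref.astype(np.float64) - dist.astype(np.float64)) ** 2
    if err.ndim == 3:
        err = err.mean(axis=2)  # average the error over color channels
    wmse = (weights * err).sum() / weights.sum()
    return 10.0 * np.log10(max_val ** 2 / wmse)
```

The same weight map can be applied to any per-pixel quality map (e.g., an SSIM map) before pooling, which is how a 2D metric can be adapted to 360-degree content without changing the metric itself.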
Color provides important information and features for face recognition. Different color spaces possess different characteristics and are suitable for different applications. In this paper, we investigate how different color space components influence the performance of face recognition on degraded images. Toward this goal, nine color space components are selected for evaluation. In addition, four types of image-based degradations are applied to face image samples in order to assess their impact on face recognition performance. The experimental results show that all selected color components have a similar influence on the performance of the face recognition system, depending on the acquisition devices and the experimental setups.
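For illustration only, the sketch below extracts single color-space components with OpenCV and applies one of several plausible image-based degradations to a component. The specific components and degradation types shown are assumptions, since the abstract does not enumerate them.

```python
# Extract single color-space components and degrade them (illustrative).
import cv2
import numpy as np

def color_components(bgr):
    """Return a dict of single-channel components from several color spaces."""
    comps = {}
    comps["B"], comps["G"], comps["R"] = cv2.split(bgr)
    comps["Y"], comps["Cr"], comps["Cb"] = cv2.split(
        cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb))
    comps["H"], comps["S"], comps["V"] = cv2.split(
        cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV))
    return comps

def degrade(gray, kind, level):
    """Apply one image-based degradation to a single uint8 component."""
    if kind == "blur":     # Gaussian blur, kernel size derived from sigma
        return cv2.GaussianBlur(gray, (0, 0), sigmaX=level)
    if kind == "noise":    # additive Gaussian noise
        noisy = gray.astype(np.float32) + np.random.normal(0, level, gray.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)
    if kind == "jpeg":     # compression artifacts at quality `level`
        _, buf = cv2.imencode(".jpg", gray, [cv2.IMWRITE_JPEG_QUALITY, int(level)])
        return cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)
    raise ValueError(f"unknown degradation: {kind}")
```

Feeding each degraded component to the same recognition pipeline and comparing accuracies is one straightforward way to run the kind of component-versus-degradation comparison the abstract describes.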