The No-reference Autoencoder VidEo (NAVE) metric is a video quality assessment model based on an autoencoder machine learning technique. The model uses an autoencoder to produce a set of features with lower dimensionality and higher descriptive capacity. NAVE has been shown to produce accurate quality predictions when tested on two video databases. However, as is common with models that rely on nested non-linear structures, it is not clear to what extent the content and the actual distortions affect the model's predictions. In this paper, we analyze the NAVE model and test its capacity to distinguish quality monotonically for three isolated visual distortions: blocking artifacts, Gaussian blur, and white noise. To this end, we created a dataset of short video sequences containing these distortions at ten clearly pronounced distortion levels. We then performed a subjective experiment to gather quality scores for the degraded video sequences and tested the pre-trained NAVE model on these samples. Finally, we analyzed NAVE's quality predictions for the set of distortions at different degradation levels with the goal of discovering the boundaries within which the model can perform reliably.
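The abstract above describes applying three isolated distortions at graded levels. A minimal sketch of how such graded degradations could be generated for a grayscale frame is shown below; the level-to-strength scaling, the sigma values, and the block-averaging proxy for blocking artifacts are illustrative assumptions, not the actual generation pipeline used for the dataset:

```python
import numpy as np

def add_white_noise(frame, level):
    # additive Gaussian white noise; sigma grows with level (assumed 1..10 scale)
    sigma = 2.5 * level
    noisy = frame + np.random.normal(0.0, sigma, frame.shape)
    return np.clip(noisy, 0, 255)

def gaussian_blur(frame, level):
    # separable Gaussian blur applied row-wise, then column-wise
    sigma = 0.5 * level
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    smooth_rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, frame)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, smooth_rows)

def block_artifacts(frame, level, block=8):
    # crude proxy for blocking artifacts: blend each 8x8 block toward
    # its mean, with the blend strength growing with the level
    out = frame.astype(float).copy()
    alpha = level / 10.0
    h, w = out.shape
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            tile = out[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = (1 - alpha) * tile + alpha * tile.mean()
    return out
```

With any of these functions, a higher level produces a frame that deviates more from the original, which is the monotonic degradation property the paper probes NAVE with.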
In sheet metal production, the quality of a cut edge determines the quality of the cut itself. Quality criteria such as the roughness, the edge slope, and the burr height are of decisive importance for further processing and quality assessment. To determine these criteria analytically, the depth information of the edge must be obtained at great expense. Current methods for obtaining this depth information are very time-consuming and require laboratory environments, and are therefore not suitable for a fast evaluation of the quality criteria. Preliminary work has shown that robust and accurate statements about the roughness of a cut edge can be made from images taken with an industrial camera, a standard lens, and diffuse incident light, provided that the model used for this purpose has been trained on appropriate images. In this work, the focus is on the illumination scenarios and their influence on the prediction quality of the models. Images of cut edges are taken under different, defined illumination scenarios, and it is investigated whether a comprehensive evaluation of the cut edges against the criteria defined in standards is possible under the given illumination conditions. The resulting model predictions are compared with each other in order to assess the importance of the illumination scenario. To investigate the possibility of a mobile, low-cost evaluation of cut edges, inexpensive hardware components for illumination and a smartphone for image acquisition are used.
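Of the criteria named above, the roughness is the most directly formula-driven: the arithmetic mean roughness Ra (as defined in ISO 4287) is the mean absolute deviation of the height profile from its mean line, which is why the depth information of the edge is needed in the first place. A minimal sketch, with a hypothetical depth profile:

```python
import numpy as np

def ra(profile):
    # arithmetic mean roughness Ra (ISO 4287): mean absolute
    # deviation of the height profile from its mean line
    profile = np.asarray(profile, dtype=float)
    return float(np.abs(profile - profile.mean()).mean())

# hypothetical depth profile of a cut edge, in micrometres
profile = [2.0, -1.0, 3.0, -2.0, 1.0, -3.0]
print(ra(profile))  # 2.0
```

The image-based models in the paper predict this quantity without measuring the depth profile at all, which is what makes the fast, camera-only evaluation possible.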
The SC29/WG1 (JPEG) Committee within ISO/IEC is currently working on developing standards for the storage, compression, and transmission of 3D point cloud information. To support the creation of these standards, the committee has created a database of 3D point clouds representing various quality levels and use cases, and has examined a range of 2D and 3D objective quality measures. The examined quality measures are correlated with subjective judgments for a number of compression levels. In this paper, we describe the database created, the tests performed, and key observations on the problems of 3D point cloud quality assessment.
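The correlation between objective measures and subjective judgments mentioned above is conventionally reported as Pearson (PLCC) and Spearman rank-order (SROCC) coefficients. A minimal numpy-only sketch follows; the objective scores and mean opinion scores below are made-up illustrative values, not data from the JPEG database:

```python
import numpy as np

def pearson(x, y):
    # Pearson linear correlation coefficient (PLCC)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float((x * y).mean())

def spearman(x, y):
    # Spearman rank-order correlation (SROCC): Pearson on the ranks
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(x), rank(y))

# hypothetical objective scores (e.g. a geometry PSNR, in dB) and
# mean opinion scores (MOS) for six compression levels
objective = np.array([45.1, 41.3, 38.0, 33.2, 29.5, 24.8])
mos = np.array([4.6, 4.2, 3.8, 3.1, 2.2, 1.5])

print(f"PLCC={pearson(objective, mos):.3f}  SROCC={spearman(objective, mos):.3f}")
```

A high SROCC indicates that the objective measure orders the compression levels the same way observers do, even when the relationship is not linear.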