Recent work on predicting overall HDR and WCG display quality has shown that machine learning approaches based on physical measurements perform on par with more advanced perceptually transformed measurements. While combining machine learning with the perceptual transforms did improve over using each technique separately, the improvement was minor. However, that work did not explore how well these models perform when applied to display capabilities outside of the training data set. This new work examines what happens when the machine learning approaches are used to predict quality outside of the training set, in terms of both extrapolation and interpolation. In doing so, we consider two models: one based on physical display characteristics, and a perceptual model that transforms the physical parameters using human visual system models. We found that the perceptual transforms particularly help with extrapolation; without their tempering effect, the machine learning-based models can produce wildly unrealistic quality predictions.
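The sketch below illustrates one way such an extrapolation/interpolation evaluation could be set up: train on the interior of a display-parameter range and test on the extremes (extrapolation), or the reverse (interpolation). It assumes a pandas DataFrame of display configurations with a physical-parameter column such as "peak_luminance" and a subjective "quality" score; the column names, quantile cut-offs, and model choice are illustrative assumptions, not details taken from the work described above.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def extrapolation_split(df, param="peak_luminance", lower_q=0.2, upper_q=0.8):
    """Train on the interior of the parameter range, test on the extremes."""
    lo, hi = df[param].quantile([lower_q, upper_q])
    train = df[(df[param] >= lo) & (df[param] <= hi)]
    test = df[(df[param] < lo) | (df[param] > hi)]
    return train, test

def interpolation_split(df, param="peak_luminance", lower_q=0.4, upper_q=0.6):
    """Train on the extremes of the parameter range, test on a held-out interior band."""
    lo, hi = df[param].quantile([lower_q, upper_q])
    test = df[(df[param] >= lo) & (df[param] <= hi)]
    train = df[(df[param] < lo) | (df[param] > hi)]
    return train, test

def evaluate(train, test, features, target="quality"):
    """Fit a generic regressor on the training displays and report RMSE on the held-out ones."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(train[features], train[target])
    pred = model.predict(test[features])
    return np.sqrt(np.mean((pred - test[target]) ** 2))
```

Comparing the RMSE from the two splits gives a simple indication of how much a model degrades when asked to extrapolate beyond the display capabilities it was trained on.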
In our previous work [1,2], we presented a block-based technique to analyze printed page uniformity both visually and metrically. In this paper, we introduce a new set of tools for feature ranking and selection. The features learned from these models are then employed in a Support Vector Machine (SVM) framework to classify the pages into one of two categories: acceptable or unacceptable quality. We use three methods for feature ranking: F-score, linear-SVM weight, and forward search. The first two are filter methods, while the last is a wrapper approach. We use the result of the wrapper method, together with information from the filter methods, as confidence scores in our feature selection framework.
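The sketch below shows one way the two filter criteria could be computed and combined, assuming a NumPy feature matrix X (pages x features) and binary labels y (1 = acceptable, 0 = unacceptable). The rank-sum combination and the top-k cut-off are illustrative assumptions, not the confidence-score scheme of the paper itself.

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC

def f_score(X, y):
    """Fisher-style F-score per feature: between-class spread over within-class spread."""
    pos, neg = X[y == 1], X[y == 0]
    mean_all = X.mean(axis=0)
    num = (pos.mean(axis=0) - mean_all) ** 2 + (neg.mean(axis=0) - mean_all) ** 2
    den = pos.var(axis=0, ddof=1) + neg.var(axis=0, ddof=1)
    return num / (den + 1e-12)

def svm_weights(X, y):
    """Score features by the magnitude of the weights of a linear SVM."""
    clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
    return np.abs(clf.coef_).ravel()

def ranks(scores):
    """Convert scores to ranks (0 = best) so the two filter criteria can be combined."""
    order = np.argsort(-scores)
    r = np.empty_like(order)
    r[order] = np.arange(scores.size)
    return r

def select_and_classify(X, y, k=10):
    """Keep the k features with the best combined filter rank, then train the final SVM."""
    combined = ranks(f_score(X, y)) + ranks(svm_weights(X, y))
    keep = np.argsort(combined)[:k]
    clf = SVC(kernel="rbf").fit(X[:, keep], y)
    return keep, clf
```

A wrapper step such as forward search would add features one at a time based on cross-validated accuracy of the classifier itself, which is more expensive than the filter scores shown here.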