Pages A08-1 - A08-11, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 8
Abstract

We live in a visual world. The perceived quality of images is of crucial importance in industrial, medical, and entertainment application environments. Developments in camera sensors, image processing, 3D imaging, display technology, and digital printing are enabling new or enhanced possibilities for creating and conveying visual content that informs or entertains. Wireless networks and mobile devices expand the ways to share imagery and autonomous vehicles bring image processing into new aspects of society. The power of imaging rests directly on the visual quality of the images and the performance of the systems that produce them. As the images are generally intended to be viewed by humans, a deep understanding of human visual perception is key to the effective assessment of image quality.

Digital Library: EI
Published Online: January  2023
Pages 261-1 - 261-6, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 8
Abstract

Video streaming is becoming increasingly popular, and on platforms like YouTube users do not watch video passively: they seek, pause, and read the comments. The popularity of video services has been made possible by the development of compression and quality-prediction algorithms. However, those algorithms are developed on the basis of classic experiments, which are not ecologically valid and therefore do not mimic real user interaction. Further development of quality and compression algorithms depends on results from ecologically valid experiments, which we therefore aim to propose. Nevertheless, proposing a new experimental protocol is difficult, especially when there is no limitation on content selection and control of the video; this freedom makes data analysis more challenging. In this paper, we present an ecologically valid experimental protocol in which subjects assessed quality while freely using YouTube. To achieve this goal, we developed a Chrome extension that collects objective data and allows network manipulation. Our in-depth data analysis shows a correlation between MOS and objectively measured results such as resolution, which demonstrates that the ecologically valid test works. Moreover, we found significant differences between subjects, allowing a more detailed understanding of how quality influences interaction with the service.
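The reported MOS-versus-resolution relationship can be checked with a rank correlation, since resolution levels are ordinal. A minimal self-contained illustration with invented numbers (the paper's actual data and analysis pipeline are not reproduced here):

```python
# Hypothetical illustration: Spearman rank correlation between per-session
# MOS and the streamed resolution logged by a browser extension.
# All data values below are invented for demonstration.

def rank(values):
    """Assign average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Invented example where MOS rises monotonically with delivered resolution:
mos        = [2.1, 3.0, 3.4, 4.2, 4.6]
resolution = [360, 480, 720, 1080, 2160]
print(round(spearman(mos, resolution), 3))  # perfectly monotone -> 1.0
```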

Digital Library: EI
Published Online: January  2023
Pages 262-1 - 262-5, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 8
Abstract

While slow motion has become a standard feature in mainstream cell phones, a fast approach to assessing slow-motion video quality that does not rely on specific training datasets is not available. Conventionally, researchers evaluate their algorithms with the peak signal-to-noise ratio (PSNR) or the structural similarity index measure (SSIM) between ground-truth and reconstructed frames. But both are global evaluation indices and are more sensitive to noise or distortion introduced by the interpolation. For video interpolation, especially of fast-moving objects, motion blur and ghosting matter more to the audience's subjective judgment. How to properly evaluate the Video Frame Interpolation (VFI) task thus remains a problem that is not well addressed.
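The "global" nature of PSNR can be shown directly: localized ghosting and spread-out noise with equal mean squared error receive identical scores, even though they differ perceptually. A self-contained sketch with invented frames:

```python
import math

def psnr(ref, test, peak=255.0):
    """PSNR over flattened pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

ref = [100] * 8
# Ghosting concentrated on two pixels vs. mild noise spread over all eight;
# both have MSE = 900, so PSNR cannot tell them apart.
ghost = [100, 100, 100, 160, 160, 100, 100, 100]
noise = [ref[i] + (30 if i % 2 else -30) for i in range(8)]

print(round(psnr(ref, ghost), 2) == round(psnr(ref, noise), 2))  # True
```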

Digital Library: EI
Published Online: January  2023
Pages 263-1 - 263-8, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 8
Abstract

The paper describes the design of a subjective experiment for testing the video quality of High Dynamic Range, Wide Color Gamut (HDR-WCG) content at 4K resolution. Due to Covid, testing could not take place in a lab, so an at-home test procedure was developed. To approach calibrated conditions without full control of the viewing environment, we limited subjects to those who owned a specific TV model, which we had previously calibrated in our labs. Moreover, we ran the experiment in the Dolby Vision mode (where the TV's various enhancements are turned off by default). A browser-based approach was used that took control of the TV and ensured the content was viewed at the TV's native resolution (i.e., dot-on-dot mode). In addition, we know that video imagery is not ergodic, and there is wide variability in the types of low-level features (sharpness, noise, motion, color volume, etc.) that affect both TV and visual-system performance. A large number of test clips (30) was therefore used, and the content was specifically chosen to stress key features. The obtained data is qualitatively similar to that of an in-lab study and is subsequently used to evaluate several existing objective quality metrics.

Digital Library: EI
Published Online: January  2023
Pages 301-1 - 301-6, This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Volume 35
Issue 8
Abstract

Most cameras use a single-sensor arrangement with a Color Filter Array (CFA). The color interpolation performed during image demosaicing is typically the source of visual artifacts in the captured image. While the severity of the artifacts depends on the demosaicing method used, the artifacts themselves are mainly zipper artifacts (block artifacts across edges) and false-color distortions. In this study, to evaluate the performance of demosaicing methods, a subjective pair-comparison experiment with 15 observers was conducted on six different methods (namely Nearest Neighbours, Bilinear interpolation, Laplacian, Adaptive Laplacian, Smooth hue transition, and Gradient-Based image interpolation) and nine different scenes. The subjective scores and scene images were then collected as a dataset and used to evaluate a set of no-reference image quality metrics. Assessment of these metrics' performance in terms of correlation with the subjective scores shows that many of the evaluated no-reference metrics cannot predict perceived image quality.
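For context, the simplest of the compared methods can be sketched: bilinear interpolation of the green channel on a tiny invented RGGB Bayer mosaic (illustration only, not the study's implementation):

```python
# Bilinear demosaicing of the green channel: each missing green sample is
# the average of its green 4-neighbors. The mosaic values are invented.
BAYER = [
    [10, 80, 20, 90],
    [70, 30, 60, 40],
    [15, 85, 25, 95],
    [75, 35, 65, 45],
]

def is_green(y, x):
    # In an RGGB pattern, G sits wherever row + column is odd.
    return (y + x) % 2 == 1

def demosaic_green(bayer):
    h, w = len(bayer), len(bayer[0])
    out = [row[:] for row in bayer]
    for y in range(h):
        for x in range(w):
            if not is_green(y, x):
                nbrs = [bayer[y + dy][x + dx]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                out[y][x] = sum(nbrs) / len(nbrs)
    return out

g = demosaic_green(BAYER)
print(g[1][1])  # average of the four green neighbors 80, 85, 70, 60
```

Averaging across sharp edges is exactly what produces the zipper artifacts the study describes; the more elaborate methods (Laplacian, gradient-based) interpolate along edges instead.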

Digital Library: EI
Published Online: January  2023
Pages 302-1 - 302-7, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 8
Abstract

With the development of image-based applications, assessing the quality of images has become increasingly important. Although our perception of image quality changes as we age, most existing image quality assessment (IQA) metrics make simplifying assumptions about the age of observers, thus limiting their use for age-specific applications. In this work, we propose a personalized IQA metric to assess the perceived image quality of observers from different age groups. Firstly, we apply an age simulation algorithm to compute how an observer with a particular age would perceive a given image. More specifically, we process the input image according to an age-specific contrast sensitivity function (CSF), which predicts the reduction of contrast visibility associated with the aging eye. We combine age simulation with existing IQA metrics to calculate the age-specific perceived image quality score. To validate the effectiveness of our combined model, we conducted a psychophysical experiment in a controlled laboratory environment with young (18-31 y.o.), middle-aged (32-52 y.o.), and older (53+ y.o.) adults, measuring their image quality preferences for 84 test images. Our analysis shows that the predictions by our age-specific IQA metric are well correlated with the collected subjective IQA results from our psychophysical experiment.
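The age-simulation step can be sketched as reweighting an image's frequency content by a CSF ratio. The toy band-pass CSF and 1-D signal below are assumptions for illustration only, not the authors' model:

```python
import math

def csf(f, peak_shift):
    """Toy band-pass CSF; peak_shift > 0 crudely mimics aging
    (reduced sensitivity at higher spatial frequencies)."""
    return f * math.exp(-(0.5 + peak_shift) * f)

def simulate_age(signal, peak_shift):
    """Attenuate each cosine harmonic by the older/young CSF ratio."""
    n = len(signal)
    mean = sum(signal) / n
    out = [mean] * n
    for k in range(1, n // 2 + 1):
        c = [math.cos(2 * math.pi * k * i / n) for i in range(n)]
        amp = 2 * sum((s - mean) * ci for s, ci in zip(signal, c)) / n
        gain = csf(k, peak_shift) / csf(k, 0.0)  # <= 1 for peak_shift > 0
        for i in range(n):
            out[i] += gain * amp * c[i]
    return out

# Invented test pattern: a single mid-frequency grating on a gray level.
signal = [128 + 60 * math.cos(2 * math.pi * 3 * i / 16) for i in range(16)]
aged = simulate_age(signal, peak_shift=0.3)
# Contrast is reduced for the simulated older observer:
print(max(aged) - min(aged) < max(signal) - min(signal))  # True
```

An existing full-reference IQA metric would then score `aged` against the reference instead of the original image, yielding the age-specific quality prediction.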

Digital Library: EI
Published Online: January  2023
Pages 303-1 - 303-4, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 8
Abstract

With the development of various autofocusing (AF) technologies, sensor manufacturers are required to evaluate their performance accurately. The basic method of evaluating AF performance is to measure the time taken to obtain a refocused image and the sharpness of that image while repeatedly inducing the refocusing process. Traditionally, this process was conducted manually by repeatedly covering and uncovering an object or the sensor, which can lead to unreliable results due to human error and the light-blocking method. To address this problem, we propose a new device and solutions based on a transparent display. By modulating the opacity, pattern, and repetition cycle of the target on the transparent display, our method provides more reliable results than the existing one.
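One common sharpness score that such an evaluation could track frame by frame is the variance of the image Laplacian; a minimal 1-D sketch (the device's actual scoring method is not specified in the abstract, and the pixel rows below are invented):

```python
def focus_score(row):
    """Variance of the discrete second derivative (1-D Laplacian).
    Flat or smoothly varying rows score near zero; sharp edges score high."""
    lap = [row[i - 1] - 2 * row[i] + row[i + 1] for i in range(1, len(row) - 1)]
    m = sum(lap) / len(lap)
    return sum((v - m) ** 2 for v in lap) / len(lap)

blurred = [10, 12, 14, 16, 18, 20, 22, 24]      # soft ramp: low score
focused = [10, 10, 10, 10, 200, 200, 200, 200]  # sharp edge: high score
print(focus_score(blurred) < focus_score(focused))  # True
```

Timing the refocus then amounts to toggling the on-screen target and counting frames until this score stabilizes above a threshold.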

Digital Library: EI
Published Online: January  2023
Pages 305-1 - 305-7, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 8
Abstract

We review the design of the SSIM quality metric and offer an alternative model of SSIM computation, utilizing subband decomposition and identical distance measures in each subband. We show that this model performs very close to the original and offers many advantages from a methodological standpoint. It immediately brings several possible explanations of why SSIM is effective. It also suggests a simple strategy for band noise allocation optimizing SSIM scores. This strategy may aid the design of encoders or pre-processing filters for video coding. Finally, this model leads to more straightforward mathematical connections between SSIM, MSE, and SNR metrics, improving previously known results.
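A minimal sketch of the flavor of such a model, with invented signals: split each signal into a low band (local means) and a high band (residuals), then apply the same stabilized distance in both bands and combine. The paper's exact decomposition and distance measure are not reproduced here.

```python
def bands(x, w=2):
    """Crude two-band decomposition: block means (low) and residuals (high)."""
    low = [sum(x[i:i + w]) / w for i in range(0, len(x), w)]
    high = [x[i] - low[i // w] for i in range(len(x))]
    return low, high

def band_similarity(a, b, c=1e-3):
    """Stabilized similarity in [−1, 1]; equals 1 for identical bands."""
    num = 2 * sum(p * q for p, q in zip(a, b)) + c
    den = sum(p * p for p in a) + sum(q * q for q in b) + c
    return num / den

def subband_score(x, y):
    """Apply the SAME distance in each subband and multiply."""
    lx, hx = bands(x)
    ly, hy = bands(y)
    return band_similarity(lx, ly) * band_similarity(hx, hy)

x = [4.0, 6.0, 5.0, 7.0]
print(round(subband_score(x, x), 6))  # identical signals -> 1.0
```

The multiplicative form makes the band-noise-allocation question concrete: for a fixed total error budget, the product is maximized by balancing the per-band penalties rather than dumping all distortion into one band.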

Digital Library: EI
Published Online: January  2023
Pages 306-1 - 306-6, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 8
Abstract

In recent years, several Image Quality Metrics (IQMs) have been introduced that focus on comparing the feature maps extracted from different pre-trained deep learning models [1-3]. While such objective IQMs have shown a high correlation with subjective scores, little attention has been paid to how they could be used to better understand the Human Visual System (HVS) and how observers evaluate the quality of images. In this study, using different pre-trained Convolutional Neural Network (CNN) models, we identify the most relevant features in Image Quality Assessment (IQA). By visualizing these feature maps, we aim to better understand which features play a dominant role when evaluating the quality of images. Experimental results on four benchmark datasets show that the most important feature maps represent repeated textures such as stripes or checkers, and that feature maps linked to the colors blue and orange also play a crucial role. Additionally, when calculating the quality of an image by comparing feature maps, higher accuracy can be reached when only the most relevant feature maps are used instead of all the feature maps extracted from a CNN model. [1] Amirshahi, Seyed Ali, Marius Pedersen, and Stella X. Yu. "Image quality assessment by comparing CNN features between images." Journal of Imaging Science and Technology 60.6 (2016): 60410-1. [2] Amirshahi, Seyed Ali, Marius Pedersen, and Azeddine Beghdadi. "Reviving traditional image quality metrics using CNNs." Color and Imaging Conference. Vol. 2018. No. 1. Society for Imaging Science and Technology, 2018. [3] Gao, Fei, et al. "DeepSim: Deep similarity for image quality assessment." Neurocomputing 257 (2017): 104-114.
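The core operation — comparing per-filter feature maps between a reference and a distorted image — can be illustrated with hand-made filters standing in for learned CNN features (everything below is a toy assumption, not the paper's models or data):

```python
def convolve1d(x, k):
    """1-D convolution with edge clamping; stands in for a CNN filter."""
    r = len(k) // 2
    return [sum(k[j] * x[min(max(i + j - r, 0), len(x) - 1)]
                for j in range(len(k)))
            for i in range(len(x))]

def map_similarity(a, b, eps=1e-9):
    """Cosine similarity between two feature maps."""
    num = sum(p * q for p, q in zip(a, b))
    den = (sum(p * p for p in a) ** 0.5) * (sum(q * q for q in b) ** 0.5) + eps
    return num / den

# Two hand-made "filters" (an edge detector and a blur) as stand-ins.
filters = {"edge": [-1.0, 0.0, 1.0], "blur": [0.25, 0.5, 0.25]}
ref  = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0]
dist = [0.0, 0.2, 0.8, 1.0, 0.1, 0.0, 0.9, 1.0]  # mildly distorted copy

scores = {name: map_similarity(convolve1d(ref, k), convolve1d(dist, k))
          for name, k in filters.items()}
# Averaging similarities over only the most relevant maps gives the score.
print(all(0.0 < s <= 1.0 for s in scores.values()))  # True
```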

Digital Library: EI
Published Online: January  2023
Pages 307-1 - 307-7, © 2023, Society for Imaging Science and Technology
Volume 35
Issue 8
Abstract

Deep Neural Networks (DNNs) are critical for real-time imaging applications, including autonomous vehicles. DNNs are often trained and validated with images that originate from a limited number of cameras, each of which has its own hardware and image signal processing (ISP) characteristics. However, in most real-time embedded systems, the input images come from a variety of cameras with different ISP pipelines and often include perturbations due to a variety of scene conditions. Data augmentation methods are commonly exploited to enhance the robustness of such systems. Alternatively, methods are employed to detect input images that are unfamiliar to the trained networks, including out-of-distribution detection. Despite these efforts, DNNs remain systems whose operational boundaries cannot be easily defined. One reason is that, while training and benchmark image datasets include samples with a variety of perturbations, there is a lack of research on metrics of input image quality suitable for DNNs and on a universal method relating quality to DNN performance using meaningful quality metrics. This paper addresses this lack of metrification specific to DNN systems and introduces a framework that systematically modifies image quality attributes and relates input image quality to DNN performance.
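The framework's basic loop — sweep one image quality attribute at a time and record model performance against it — can be sketched with a toy classifier and invented data (none of the names or numbers below come from the paper):

```python
import random

def degrade(img, noise_sigma, rng):
    """Systematically modify one quality attribute: additive Gaussian noise."""
    return [min(255.0, max(0.0, p + rng.gauss(0, noise_sigma))) for p in img]

def toy_model(img):
    """Stand-in 'DNN': predicts class 1 if mean luminance exceeds 100."""
    return 1 if sum(img) / len(img) > 100 else 0

rng = random.Random(0)
dataset = [([20] * 64, 0), ([200] * 64, 1)] * 25  # 50 labeled toy 'images'

# Quality-vs-performance curve: accuracy at each noise level.
curve = []
for sigma in (0, 20, 80, 160):
    correct = sum(toy_model(degrade(img, sigma, rng)) == label
                  for img, label in dataset)
    curve.append((sigma, correct / len(dataset)))
print(curve[0][1])  # no perturbation: perfect accuracy on this toy set
```

The real framework would replace `toy_model` with the DNN under test, `degrade` with calibrated ISP-style perturbations (noise, blur, tone mapping), and sigma with a perceptually meaningful quality metric.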

Digital Library: EI
Published Online: January  2023
