Pages 18-1 - 18-5,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 9

There are many test charts and software packages for determining the intrinsic geometric calibration of a camera, including its distortion. But all of these setups share a few problems: they are limited to finite object distances, and calibration at greater distances requires large test charts combined with powerful, uniform illumination. On production lines, the usual workaround is a relay lens, which itself introduces geometric distortions, and therefore inaccuracies, that must be compensated for. A solution that overcomes these problems and limitations was originally developed for space applications and has already become a common method for the calibration of satellite cameras. We have now turned the lab setup on an optical bench into a commercially available product that can be used to calibrate a wide variety of cameras for different applications. The solution is based on a diffractive optical element (DOE) illuminated by a plane wave generated from an expanded laser-diode beam. Beyond what conventional methods provide, the proposed method also yields the extrinsic orientation of the camera and therefore allows cameras to be aligned with each other.
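For intuition, here is a minimal sketch of how such a DOE-based setup constrains the camera model: each diffraction order is a collimated beam, i.e. a point at infinity with a known direction, so its image position depends only on the intrinsics and the camera's rotation (not its position), which is what removes the object-distance limitation. The names, the one-term distortion model, and the optimizer choice are illustrative, not from the paper.

```python
# Hypothetical sketch: fit intrinsics, radial distortion, and extrinsic
# rotation to measured DOE spot centroids. Collimated beams are points at
# infinity, so translation drops out of the model entirely.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, directions):
    fx, fy, cx, cy, k1, rx, ry, rz = params
    R = Rotation.from_rotvec([rx, ry, rz]).as_matrix()
    d = directions @ R.T                          # beam directions in the camera frame
    x, y = d[:, 0] / d[:, 2], d[:, 1] / d[:, 2]   # pinhole projection of points at infinity
    r2 = x ** 2 + y ** 2
    x, y = x * (1 + k1 * r2), y * (1 + k1 * r2)   # one-term radial distortion
    return np.stack([fx * x + cx, fy * y + cy], axis=1)

def calibrate(directions, spots, image_size):
    """directions: (N, 3) known diffraction-order unit vectors; spots: (N, 2) centroids."""
    w, h = image_size
    x0 = [w, w, w / 2, h / 2, 0.0, 0.0, 0.0, 0.0]  # rough initial guess
    res = least_squares(lambda p: (project(p, directions) - spots).ravel(), x0)
    return res.x  # intrinsics, distortion, and extrinsic rotation
```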

Digital Library: EI
Published Online: January  2020
Pages 39-1 - 39-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 9

A virtual reality (VR) driving simulation platform has been built to address multiple research interests. The platform is a VR 3D engine (Unity©) that provides an immersive driving experience viewed in an HTC Vive© head-mounted display (HMD). To test the platform, we designed a virtual driving scenario based on a real tunnel used by Törnros for on-road tests [1]. Data from the platform, including driving speed and lateral lane position, were compared with the published on-road tests. The correspondence between the driving simulation and the on-road tests is assessed to demonstrate the platform's suitability as a research tool. In addition, the drivers' eye-movement data, such as the 3D gaze point of regard (POR), will be collected during the test with a Tobii© eye-tracker integrated in the HMD. This data set will be analyzed offline and examined for correlations with driving behavior in a future study.
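A sketch of the kind of comparison described, assuming per-tunnel-section means of driving speed (or lateral lane position) are available for both the simulator and the published on-road data; the function and variable names are hypothetical, not the authors' analysis code.

```python
# Illustrative paired comparison of simulator measures against on-road
# reference values; inputs are per-section means in matching order.
import numpy as np
from scipy import stats

def compare_conditions(sim_values, road_values):
    sim, road = np.asarray(sim_values, float), np.asarray(road_values, float)
    r, _ = stats.pearsonr(sim, road)       # agreement across tunnel sections
    t, p = stats.ttest_rel(sim, road)      # systematic offset between settings
    return {"pearson_r": float(r),
            "mean_diff": float(np.mean(sim - road)),
            "ttest_p": float(p)}
```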

Digital Library: EI
Published Online: January  2020
Pages 66-1 - 66-9,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 9

In this work, we present a psychophysical study in which we analyzed the perceptual quality of images enhanced with several types of enhancement algorithms, including color, sharpness, histogram, and contrast enhancements. To estimate and compare the quality of the enhanced images, we performed a psychophysical experiment with 35 source images obtained from publicly available databases; specifically, we used images from the Challenge Database, the CSIQ database, and the TID2013 database. To generate the test sequences, we applied 12 different image enhancement algorithms, producing a dataset with a total of 455 images. We used the Double Stimulus Continuous Quality Scale (DSCQS) experimental methodology with a between-subjects approach, in which each subject scored a subset of the database to avoid fatigue. Given the large number of test images, we designed a crowdsourcing interface to run the psychophysical experiment online; such an interface has the advantage of collecting data from many participants. We also ran the experiment in a controlled laboratory environment and compared its results with the crowdsourcing results. Since very few quality-enhancement databases are available in the literature, this work represents a contribution to the area of image quality.
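For reference, DSCQS ratings are typically reduced to difference scores between the reference and test presentations. A minimal sketch, assuming ratings on the usual 0-100 continuous scale and NaN for images a subject did not rate (the between-subjects subsets); the exact reduction used by the authors may differ.

```python
# DSCQS reduction sketch: per-image difference mean opinion scores (DMOS)
# with a normal-approximation 95% confidence interval.
import numpy as np

def dscqs_scores(ref_ratings, test_ratings):
    """ref_ratings, test_ratings: (n_subjects, n_images) arrays, 0-100 scale;
    NaN marks images a subject did not see."""
    diff = np.asarray(ref_ratings, float) - np.asarray(test_ratings, float)
    n = np.sum(~np.isnan(diff), axis=0)            # raters available per image
    dmos = np.nanmean(diff, axis=0)                # average over those raters
    ci95 = 1.96 * np.nanstd(diff, axis=0) / np.sqrt(n)
    return dmos, ci95
```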

Digital Library: EI
Published Online: January  2020
Pages 67-1 - 67-9,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 9

From complete darkness to direct sunlight, real-world displays operate in diverse viewing conditions, often resulting in a non-optimal viewing experience. Most existing Image Quality Assessment (IQA) methods, however, assume ideal environments and displays, and thus cannot be used when viewing conditions differ from the standard. In this paper, we investigate the influence of ambient illumination level and display luminance on human perception of image quality. We conduct a psychophysical study to collect a novel dataset of over 10,000 image quality preference judgments performed in illumination conditions ranging from 0 lux to 20,000 lux. We also propose a perceptual IQA framework that allows most existing image quality metrics (IQMs) to accurately predict image quality for a wide range of illumination conditions and display parameters. Our analysis demonstrates a strong correlation between human judgments and the predictions of our framework combined with multiple prominent IQMs, across a wide range of luminance values.
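One common way to condition an existing IQM on the viewing environment, shown here only as a hedged sketch (the paper's exact framework may differ): push pixel values through a display model that adds ambient reflections, then perceptually encode the resulting luminance before applying the metric. All constants are illustrative.

```python
# Sketch: display + ambient model feeding a standard full-reference IQM.
import numpy as np

def display_luminance(img, L_peak=500.0, L_black=0.5, gamma=2.2,
                      E_amb_lux=1000.0, reflectivity=0.01):
    """Map normalized display values in [0, 1] to absolute luminance (cd/m^2),
    including light from the environment reflected off the panel."""
    L_refl = reflectivity * E_amb_lux / np.pi
    return (L_peak - L_black) * np.power(img, gamma) + L_black + L_refl

def encode(L):
    """Simple perceptual encoding (log luminance as a stand-in for PU/PQ curves)."""
    return np.log10(np.clip(L, 1e-3, None))

# usage with any existing metric `iqm(ref, test)`:
#   iqm(encode(display_luminance(ref, E_amb_lux=20000)),
#       encode(display_luminance(test, E_amb_lux=20000)))
```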

Digital Library: EI
Published Online: January  2020
Pages 166-1 - 166-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 9

Video capture is becoming ever more widespread. The technical advances of consumer devices have led to improved video quality and to a variety of new use cases driven by social media and artificial intelligence applications. Device manufacturers and users alike need to be able to compare different cameras: smartphones, automotive components, surveillance equipment, DSLRs, drones, action cameras, and so on. While quality standards and measurement protocols exist for still images, measurement protocols for video quality are still needed; these must include parts that cannot be trivially adapted from photo protocols, particularly concerning temporal aspects. This article presents a comprehensive hardware and software measurement protocol for the objective evaluation of the whole video acquisition and encoding pipeline, along with its experimental validation.
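As a sketch of the temporal measurements such a protocol requires, consider reducing any per-frame measure (for example, mean luma during an exposure-convergence test) to a settling time and steady-state stability. The tolerance and function names are illustrative, not from the protocol.

```python
# Summarize a per-frame time series into convergence and stability figures.
import numpy as np

def temporal_stats(per_frame_values, fps, tolerance=0.05):
    v = np.asarray(per_frame_values, float)
    target = np.median(v[len(v) // 2:])               # steady-state estimate
    settled = np.abs(v - target) <= tolerance * target
    first = int(np.argmax(settled)) if settled.any() else len(v)
    return {"settling_time_s": first / fps,
            "steady_mean": float(target),
            "steady_std": float(np.std(v[first:])) if first < len(v) else None}
```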

Digital Library: EI
Published Online: January  2020
Pages 167-1 - 167-6,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 9

The development of audio-visual quality models faces a number of challenges, including the integration of the audio and video sensory channels and the modeling of their interaction. Objective quality metrics commonly estimate the quality of a single component (audio or video) of the content. Machine learning techniques, such as autoencoders, offer a very promising alternative for developing objective assessment models. This paper studies the performance of a group of autoencoder-based objective quality metrics on a diverse set of audio-visual content. For this test, we use a large dataset of audio-visual content (the UnB-AV database), which contains degradations in both the audio and video components and has accompanying subjective scores collected in three separate subjective experiments. We compare our autoencoder-based methods, which take both audio and video components into account (multi-modal), against several objective single-modal audio and video quality metrics. The main goal of this work is to quantify the gain or loss in performance of these single-modal metrics when tested on audio-visual sequences.
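A minimal sketch of the autoencoder idea, under the common assumption that the model is trained on features of pristine content so that reconstruction error signals degradation; the architecture and dimensions are illustrative, not the paper's.

```python
# Autoencoder over (audio or video) feature vectors; reconstruction error
# on unseen content serves as a no-reference quality indicator.
import torch
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    def __init__(self, dim=128, bottleneck=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, bottleneck))
        self.dec = nn.Sequential(nn.Linear(bottleneck, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        return self.dec(self.enc(x))

def quality_feature(model, feats):
    """Per-item reconstruction MSE; larger error suggests content unlike the
    pristine training set, i.e. likely degraded."""
    with torch.no_grad():
        return ((model(feats) - feats) ** 2).mean(dim=1)
```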

Digital Library: EI
Published Online: January  2020
Pages 168-1 - 168-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 9

Video Quality Assessment (VQA) is an essential topic in several industries, ranging from video streaming to camera manufacturing. In this paper, we present a novel method for no-reference VQA. The framework is fast and does not require the extraction of hand-crafted features. We extract convolutional features from a 3D convolutional neural network (C3D) and feed them to a trained Support Vector Regressor to obtain a VQA score. We apply transformations to different color spaces to generate more discriminative deep features, and we extract features from several layers, with and without overlap, to find the configuration that most improves the VQA score. We tested the proposed approach on the LIVE-Qualcomm dataset, evaluating the perceptual quality prediction model extensively. It obtains a final Pearson correlation of 0.7749 ± 0.0884 with Mean Opinion Scores, demonstrating good video quality prediction and outperforming other leading state-of-the-art VQA models.
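The pipeline shape (3D-CNN features into an SVR) can be sketched as follows. The paper uses C3D; torchvision's r3d_18 stands in here because pretrained C3D weights are not bundled with torchvision, so this is an approximation of the approach, not the authors' code.

```python
# Sketch: spatiotemporal CNN features -> Support Vector Regressor -> score.
# Clips are tensors of shape (N, 3, T, H, W), preprocessed per the model's
# expected normalization.
import torch
from torchvision.models.video import r3d_18
from sklearn.svm import SVR

backbone = r3d_18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()        # expose the 512-d penultimate features
backbone.eval()

def clip_features(clips):
    with torch.no_grad():
        return backbone(clips).numpy()   # one 512-d vector per clip

# train: regressor = SVR(kernel="rbf").fit(clip_features(train_clips), train_mos)
# test:  score = regressor.predict(clip_features(test_clips)).mean()  # pool clips per video
```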

Digital Library: EI
Published Online: January  2020
Pages 169-1 - 169-7,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 9

Video object tracking (VOT) aims to determine the location of a target over a sequence of frames. The existing body of work has studied various image factors that affect VOT performance; for instance, occlusion, clutter, object shape, unstable speed, and zooming, all of which influence video quality, do affect tracking performance. Nonetheless, there is no clear distinction between scene-dependent challenges such as occlusion and clutter and the challenges imposed by traditional notions of “quality impairments” arising from capture, compression, processing, and transmission. In this study, we are concerned with the latter interpretation of quality as it affects tracking performance, and we propose the design and implementation of quality-aware feature selection for VOT. First, we divide each video frame into patches of equal size and extract HOG and natural scene statistics (NSS) features from these patches. Then, we degrade the videos synthetically with different levels of post-capture distortions, namely MPEG-4 compression, AWGN, salt-and-pepper noise, and blur. Finally, we identify the set of HOG and NSS features that generates the largest area under the curve in the success plots, yielding an improvement in tracker performance on videos affected by post-capture distortions.
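A hedged sketch of the per-patch feature extraction: HOG descriptors plus simple NSS statistics computed from MSCN coefficients (BRISQUE-style). The patch size, HOG parameters, and choice of NSS moments are illustrative, not the paper's configuration.

```python
# Per-patch HOG + NSS features from a grayscale frame.
import numpy as np
from skimage.feature import hog
from scipy.ndimage import gaussian_filter

def mscn(patch, sigma=7 / 6):
    """Mean-subtracted contrast-normalized coefficients."""
    mu = gaussian_filter(patch, sigma)
    var = gaussian_filter(patch * patch, sigma) - mu * mu
    return (patch - mu) / (np.sqrt(np.clip(var, 0, None)) + 1.0)

def patch_features(gray_frame, patch=64):
    h, w = gray_frame.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = gray_frame[y:y + patch, x:x + patch].astype(float)
            f_hog = hog(p, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
            m = mscn(p)
            f_nss = [m.mean(), m.std(), ((m - m.mean()) ** 3).mean()]  # low-order shape stats
            feats.append(np.concatenate([f_hog, f_nss]))
    return np.array(feats)
```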

Digital Library: EI
Published Online: January  2020
Pages 170-1 - 170-10,  © Society for Imaging Science and Technology 2020
Volume 32
Issue 9

This paper investigates camera phone image quality, namely the effect of sensor megapixel (MP) resolution on the perceived quality of images displayed at full size on high-quality desktop displays. For this purpose, we use images from simulated cameras with different sensor MP resolutions. We employ methods recommended in the IEEE 1858 Camera Phone Image Quality (CPIQ) standard, as well as other established psychophysical paradigms, to obtain subjective image quality ratings for systems with varying MP resolution from large numbers of observers. These ratings are subsequently used to validate image quality metrics (IQMs) relating to sharpness and resolution, including those from the CPIQ standard. Further, we define acceptable quality levels, as MP resolution changes, for mobile phone images in Subjective Quality Scale (SQS) units. Finally, we map SQS levels to categories obtained from star-rating experiments (commonly used to rate consumer experience). Our findings relate the MP resolution of the camera sensor to that of the display device. The chosen metrics predict quality accurately, but only the CPIQ metrics return results calibrated in JNDs of quality. We close by discussing the appropriateness of star-rating experiments for measuring subjective image quality and validating metrics.
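One simple way to simulate cameras of different MP counts from a high-resolution master, offered only as a sketch (the paper's simulation pipeline likely models more of the camera than resampling): resample to the target sensor grid, then render at the display resolution. The filter choices here are illustrative.

```python
# Simulate a target-MP capture viewed full size on a display.
from PIL import Image

def simulate_megapixels(src: Image.Image, target_mp: float, display_size):
    w, h = src.size
    scale = (target_mp * 1e6 / (w * h)) ** 0.5
    sensor = src.resize((max(1, round(w * scale)), max(1, round(h * scale))),
                        Image.LANCZOS)                 # capture at the simulated sensor grid
    return sensor.resize(display_size, Image.BICUBIC)  # view at full display size

# e.g. simulate_megapixels(Image.open("scene.png"), target_mp=8, display_size=(3840, 2160))
```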

Digital Library: EI
Published Online: January  2020
