Keywords

ADF SCANNER, AF, AUTOFOCUS SPEED, AUTO CONTROL, ACUTANCE, ATTRIBUTES SELECTION, ARTIFACTS, AUTOFOCUS IRREGULARITY, ALEXNET MODEL, AUTOFOCUS, AUTO EXPOSURE
BEST
CONVERGENCE, CONVOLUTIONAL NEURAL NETWORKS, COLOR PROPERTIES, CROSS TALK, CPIQ, CAMERA QUALITY, CAMERA, CONTRAST RATIO, CCD, CUDA, CAMERA BENCHMARKING
DYNAMIC RANGE COMPRESSION, DUST DETECTION, DEGRADATION TYPE, DIMENSION REDUCTION, DEHAZING, DEGHOSTING, DXOMARK MOBILE, DEAD LEAVES
EXPOSURE TIME
FEATURE SELECTION, FEATURE RANKING, FORWARD SEARCH, FOVEATED JUST-NOTICEABLE-DIFFERENCE (FJND)
GPU COMPUTING, GAMMA-RAYS, GAMMA-RAYS CAMERA, GPGPU, GRAININESS
HUMAN VISUAL SYSTEM, HDR, HETEROGENEOUS PROGRAMMING, HDR DOWN CONVERSION, HIGH DYNAMIC RANGE, HUMAN COLOR PERCEPTION, HOLOGRAPHIC DISPLAY, HYBRID LOG-GAMMA
IMAGE QUALITY MEASURE, IMAGE REGISTRATION, IMAGE QUALITY, IMAGE FUSION, IMAGE PROCESSING, ISO15781, IMAGE QUALITY ASSESSMENT, IMAGE QUALITY EVALUATION, IMAGE RECONSTRUCTION, ISO 12233, ISO20490, ISO12233
JUST NOTICEABLE DIFFERENCES, JUST-NOTICEABLE-DIFFERENCE, JND
KNOWLEDGE-BASED TAXONOMIC SCHEME, KURTOSIS
LINEAR DECODER, LABORATORY-SETUP, LIVE BROADCAST, LUMINANCE RATIO, LOCAL BINARY PATTERN (LBP) OPERATORS, LATENCY, LEGACY DISPLAYS
MCP IMAGE INTENSIFIER, MODULATION TRANSFER FUNCTION (MTF), MOBILE IMAGING, MULTISPECTRAL DATA SET, MASKING EFFECT, MICROARCHITECTURAL ANALYSIS, IMAGE DETECTOR, MULTI-IMAGING, MOTTLE, MTF, MACHINE LEARNING TECHNIQUE
NO-REFERENCE, NO-REFERENCE IMAGE QUALITY ASSESSMENT, NR-IQA, NVIDIA VISUAL PROFILER
OBSERVER CALIBRATION, OBJECTIVE METRICS, OBJECTIVE IMAGE QUALITY METRIC, OPTICAL DENSITY FADING, OBJECTIVE IMAGE QUALITY ASSESSMENT METRICS
PRINTING, POINT SPREAD FUNCTION (PSF), PSYCHOPHYSICAL EXPERIMENT, PERCEPTUAL IMAGE QUALITY, PSYCHOPHYSICS, POWER SPECTRA, P1858, PICTORIAL IMAGE TARGET, PERCEPTUAL EXPERIMENTATION, PRINT IMAGE QUALITY
QUALITY AWARE FILTERS
RESOLUTION, RGB-NIR, REFERENCE IMAGE
SALIENCY-BASED CRITERIA, SHARPNESS, SLANTED EDGE, SCANNER, SMARTPHONE, SUPPORT VECTOR MACHINE, SPATIAL FREQUENCY RESPONSE, SUBJECTIVE TESTING, SMARTPHONE COLOR QUALITY, SYSTEM PERFORMANCE, SCINTILLATOR, SUBJECTIVE STUDY, SFR, SHOOTING TIME LAG
TEXT FADING, TONE MAPPING, TEXTURE DESCRIPTORS, TEXTURE LOSS, TEXTURE BLUR
UHD SIGNAL MEASUREMENT TOOL, UNIFORMITY, UHD SIGNALS, UNSUPERVISED LEARNING, ULTRAHIGH-DEFINITION TELEVISION
VIEWING DISTANCE, VIRTUAL REALITY (VR), VIEWER PREFERENCE, VISUAL INFORMATION ANALYSIS, VISUAL PERCEPTION FIELD, VIQET, VIDEO RESOLUTION
WRAPPER APPROACH, WAVE ABERRATION
ZERNIKE
8K
 
Pages 1 - 6,  © Society for Imaging Science and Technology 2017
Digital Library: EI
Published Online: January  2017
Pages 7 - 14,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 12

This article proposes a new no-reference image quality assessment method that blindly predicts the quality of an image. The method is based on a machine learning technique that uses texture descriptors: texture features are computed by decomposing images into texture information with multiscale local binary pattern (MLBP) operators, which are generated by varying the parameters of the basic local binary pattern operator. The histograms of these MLBP channels are the features used to train the prediction algorithm. The results show that, compared with other state-of-the-art no-reference methods, the proposed method is competitive in terms of prediction precision and computational complexity. © 2016 Society for Imaging Science and Technology.
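The feature-extraction stage can be illustrated with a short sketch. The code below is a minimal interpretation assuming a scikit-image backend; the (P, R) parameter grid and the uniform-pattern variant are illustrative choices, not necessarily the paper's exact configuration.

```python
# Hypothetical sketch of multiscale LBP (MLBP) feature extraction.
# The (P, R) grid below is an assumption, not the paper's configuration.
import numpy as np
from skimage.feature import local_binary_pattern

def mlbp_features(gray, param_grid=((8, 1), (16, 2), (24, 3))):
    """Concatenate normalized LBP histograms computed at several scales."""
    feats = []
    for p, r in param_grid:
        codes = local_binary_pattern(gray, P=p, R=r, method="uniform")
        # "uniform" LBP produces P + 2 distinct code values
        hist, _ = np.histogram(codes, bins=np.arange(p + 3), density=True)
        feats.append(hist)
    return np.concatenate(feats)
```

The concatenated histogram vector would then be paired with subjective scores to train the quality-prediction regressor.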

Digital Library: EI
Published Online: January  2017
Pages 15 - 20,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 12

No-reference image quality metrics are of fundamental interest because they can be embedded in practical applications. The main goal of this paper is to define a new selection process for the attributes used in no-reference learning-based image quality algorithms. To perform this selection, the attributes of seven well-known no-reference image quality algorithms are analyzed and compared with respect to the degradations present in the image. To assess the performance of these algorithms, the Spearman Rank Ordered Correlation Coefficient (SROCC) is computed between the predicted values and the MOS of three public databases. In addition, a hypothesis test is conducted to evaluate the statistical significance of each tested algorithm's performance.
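The evaluation described above reduces to a rank correlation between predictions and subjective scores. A minimal sketch with SciPy, using placeholder arrays rather than data from the paper:

```python
# SROCC between predicted quality values and MOS; the arrays are
# placeholders for illustration, not results from the paper.
import numpy as np
from scipy.stats import spearmanr

predicted = np.array([62.1, 45.3, 78.9, 30.2, 55.0])  # algorithm outputs
mos = np.array([60.0, 48.5, 80.1, 28.7, 52.3])        # subjective scores
srocc, p_value = spearmanr(predicted, mos)
print(f"SROCC = {srocc:.3f} (p = {p_value:.3g})")
```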

Digital Library: EI
Published Online: January  2017
Pages 21 - 25,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 12

A relatively recent thrust in IQA research has focused on estimating the quality of a distorted image without access to the original (reference) image. Algorithms for this so-called no-reference IQA (NR IQA) have made great strides over the last several years, with some NR algorithms rivaling full-reference algorithms in terms of prediction accuracy. However, a large gap remains in terms of runtime performance: NR algorithms are significantly slower than FR algorithms, owing largely to their reliance on natural-scene statistics and other ensemble-based computations. To address this issue, this paper presents a GPGPU implementation, using NVIDIA's CUDA platform, of the popular Blind Image Integrity Notator using DCT Statistics (BLIINDS-II) algorithm [8], a state-of-the-art NR-IQA algorithm. We copy the image to the GPU and perform the DCT and the statistical modeling there; these operations are executed in parallel across the 5x5 pixel windows. We evaluated the implementation using the NVIDIA Visual Profiler and compared it against a previously optimized CPU C++ implementation. By employing suitable code optimizations, we reduced the runtime for each 512x512 image from approximately 270 ms to approximately 9 ms, including the time for all data transfers across the PCIe bus. We discuss our implementation of BLIINDS-II designed specifically for the GPU, the insights gained from the runtime analyses, and how the GPGPU techniques developed here can be adapted for use in other NR IQA algorithms.
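The per-window computation that the CUDA implementation parallelizes can be sketched on the CPU as follows. This is a conceptual reference, not the authors' kernel; BLIINDS-II's actual DCT-statistics features are richer than the two shown here.

```python
# CPU reference for the windowed DCT + statistics pass. Each window is
# independent, which is what makes a one-thread-group-per-window GPU
# mapping natural. The feature choices below are illustrative only.
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """Separable 2-D DCT-II with orthonormal scaling."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def window_features(gray, size=5):
    h, w = gray.shape
    feats = []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            coeffs = dct2(gray[i:i + size, j:j + size].astype(float))
            ac = coeffs.ravel()[1:]  # discard the DC coefficient
            feats.append((ac.var(), np.abs(ac).mean()))
    return np.array(feats)
```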

Digital Library: EI
Published Online: January  2017
Pages 26 - 29,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 12

Image quality assessment (IQA) has been an important issue in image processing. While subjective quality assessment is well suited to evaluating image processing algorithms, obtaining subjective scores is expensive and time-consuming, so objective quality assessment algorithms are widely used as a substitute. Objective quality assessment is divided into three types based on the availability of a reference image: full-reference, reduced-reference, and no-reference IQA. No-reference IQA is more difficult than full-reference IQA because no reference image is available. In this paper, we propose a novel no-reference IQA algorithm that measures the contrast of an image. The proposed algorithm is based on the just-noticeable difference, which models the human visual system (HVS). Experimental results show that the proposed method performs better than conventional no-reference IQA algorithms.
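As an illustration of the underlying idea (not the authors' algorithm), a JND-based contrast score can count the pixels whose deviation from their local surround exceeds a luminance-adaptation visibility threshold; the piecewise threshold below follows a commonly used HVS luminance-adaptation model and is assumed here for demonstration.

```python
# Illustrative JND-style contrast measure; a stand-in sketch, not the
# paper's method. The threshold curve is a standard luminance-adaptation
# model for 8-bit images.
import numpy as np
from scipy.ndimage import uniform_filter

def luminance_jnd(bg):
    """Visibility threshold as a function of background luminance (0-255)."""
    return np.where(bg <= 127,
                    17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                    (3.0 / 128.0) * (bg - 127.0) + 3.0)

def jnd_contrast_score(gray):
    x = gray.astype(float)
    bg = uniform_filter(x, size=3)   # local background: 3x3 mean
    diff = np.abs(x - bg)
    # fraction of pixels whose local deviation is visible to the HVS
    return float(np.mean(diff > luminance_jnd(bg)))
```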

Digital Library: EI
Published Online: January  2017
Pages 30 - 35,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 12

In this paper, we train independent linear decoder models to estimate the perceived quality of images. More specifically, we calculate the responses of individual non-overlapping image patches to each of the decoders and scale these responses based on the sharpness characteristics of the filter set. We use multiple linear decoders to capture different abstraction levels of the image patches. Each model is trained on 100,000 image patches from the ImageNet database in an unsupervised fashion. Color space selection and ZCA whitening are performed over these patches to enhance the descriptiveness of the data. The proposed quality estimator is tested on the LIVE and TID2013 image quality assessment databases, and its performance is compared against eleven other state-of-the-art methods in terms of accuracy, consistency, linearity, and monotonic behavior. Based on the experimental results, the proposed method is generally among the top-performing quality estimators in all categories.
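The ZCA-whitening preprocessing mentioned above can be sketched as follows; here patches is an (N, D) matrix of flattened image patches, and the regularization epsilon is a typical choice rather than a value from the paper.

```python
# Minimal ZCA whitening of flattened patches; eps is an assumed value.
import numpy as np

def zca_whiten(patches, eps=1e-2):
    x = patches - patches.mean(axis=0)       # zero-center each dimension
    cov = np.cov(x, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # rotate to the PCA basis, rescale, rotate back (hence "zero-phase")
    w = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return x @ w
```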

Digital Library: EI
Published Online: January  2017
Pages 36 - 41,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 12

Due to the massive popularity of digital images and videos over the past several decades, the need for automated quality assessment (QA) is greater than ever, and the impetus in QA research has accordingly focused on improving prediction accuracy. However, for many application areas, such as consumer electronics, runtime performance and related computational considerations are just as important as accuracy. Most modern QA algorithms exhibit high computational complexity, but high complexity does not necessarily prevent low runtimes if hardware resources are used appropriately. GPUs, which offer a large amount of parallelism and a specialized memory hierarchy, should be well suited to QA algorithm deployment. In this paper, we analyze a massively parallel GPU implementation of the most apparent distortion (MAD) full-reference image QA algorithm, with optimizations guided by a microarchitectural analysis. A shared-memory-based implementation of the local statistics computation yielded a 25% speedup over the original implementation. We describe the optimizations that produce the best results and justify our recommendations with descriptions of their microarchitectural underpinnings. Although our study focuses on a single algorithm, the image-processing primitives it uses are fundamentally similar to those used in most modern QA algorithms.
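The local-statistics stage highlighted above computes per-window moments. A CPU sketch clarifies why shared memory helps (neighboring windows reuse the same pixels); MAD's full statistics also include skewness and kurtosis, omitted here, and the window size below is illustrative.

```python
# CPU sketch of sliding-window mean/std, the kind of local statistics a
# shared-memory GPU kernel accelerates. Conceptual, not the paper's code.
import numpy as np
from scipy.ndimage import uniform_filter

def local_mean_std(gray, size=16):
    x = gray.astype(float)
    mean = uniform_filter(x, size=size)
    sq_mean = uniform_filter(x * x, size=size)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return mean, std
# On the GPU, each thread block stages its tile plus a halo in shared
# memory, so threads reuse pixels instead of re-reading global memory.
```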

Digital Library: EI
Published Online: January  2017
Pages 42 - 51,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 12

Finding an objective image quality metric that matches subjective quality judgments has always been a challenging task. We propose a new full-reference image quality metric based on features extracted from Convolutional Neural Networks (CNNs). Using a pre-trained AlexNet model, we extract feature maps of the test and reference images at multiple layers and compare their feature similarity at each layer. These similarity scores are then pooled across layers to obtain an overall quality value. Experimental results on four state-of-the-art databases show that our metric is either on par with or outperforms ten other state-of-the-art metrics, demonstrating that multi-level CNN features are superior to the handcrafted features used in most image quality metrics at capturing aspects that matter for perceptual quality discrimination. © 2016 Society for Imaging Science and Technology.
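The metric's structure can be sketched with torchvision's pretrained AlexNet. The layer set, the cosine similarity, and the mean pooling below are illustrative assumptions, not the paper's exact choices; inputs are assumed to be ImageNet-normalized (N, 3, H, W) tensors.

```python
# Sketch: per-layer feature similarity between reference and test images,
# pooled across layers. Layer indices and pooling are assumptions.
import torch
import torchvision.models as models

alexnet = models.alexnet(pretrained=True).features.eval()
RELU_LAYERS = {1, 4, 7, 9, 11}  # ReLU outputs of the five conv stages

def feature_maps(x):
    """Collect flattened activations at the selected layers."""
    feats = []
    for i, layer in enumerate(alexnet):
        x = layer(x)
        if i in RELU_LAYERS:
            feats.append(x.flatten(1))
    return feats

def cnn_quality(ref, test):
    with torch.no_grad():
        sims = [torch.nn.functional.cosine_similarity(r, t).mean()
                for r, t in zip(feature_maps(ref), feature_maps(test))]
    return torch.stack(sims).mean()  # pooled across layers
```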

Digital Library: EI
Published Online: January  2017
Pages 52 - 58,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 12

This paper suggests a new quality measure for an image, pertaining to its contrast. Several contrast measures exist in the current research; however, because of the abundance of image processing software, the perceived (or measured) contrast of an image can be misleading, as the contrast may be significantly enhanced by applying grayscale transformations. The real challenge, which has not been addressed in the previous literature, is therefore to measure the contrast of an image while taking into account all possible grayscale transformations, yielding the best "potential" contrast. Hence, we suggest an alternative "Potential Contrast" measure based on sampled populations of foreground and background pixels (e.g., from scribbles or saliency-based criteria). An exact and efficient implementation of this measure is derived analytically. The new methodology is tested and shown to be invariant to invertible grayscale transformations.
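One way to see the invariance claim: any measure that depends only on how the foreground and background populations distribute over gray levels, summed level by level, is unchanged when an invertible transformation relabels those levels. A hedged sketch in that spirit, not the paper's exact analytic formula:

```python
# Illustrative "potential contrast" from sampled foreground/background
# pixels: the separation achievable when each gray level is mapped to its
# majority class. A stand-in, not the paper's measure. Inputs are integer
# arrays of 8-bit gray values.
import numpy as np

def potential_contrast(fg_pixels, bg_pixels, levels=256):
    fg = np.bincount(fg_pixels, minlength=levels) / len(fg_pixels)
    bg = np.bincount(bg_pixels, minlength=levels) / len(bg_pixels)
    # total-variation separation; invariant to invertible relabelings of
    # the gray levels, since those only permute the summands
    return float(0.5 * np.sum(np.abs(fg - bg)))
```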

Digital Library: EI
Published Online: January  2017
Pages 59 - 63,  © Society for Imaging Science and Technology 2017
Volume 29
Issue 12

The variability of human observers and differences in cone photoreceptor sensitivities are important to understand and quantify in Color Science research: differences in cone sensitivity may cause two observers to see different colors on the same display. Technicolor SA built a prototype instrument that classifies an observer with normal color vision into a small number of color vision categories; the instrument is used in color-critical applications for displaying colors to human observers. To facilitate Color Science research, an Observer Calibrator is being designed and built. This instrument is modeled on the one developed at Technicolor, with improvements that include higher luminance levels for the observers, a more robust MATLAB computer interface, two sets of individually controlled LED primaries, and the potential for interchangeable optical front ends to present the color stimuli. The new prototype is lightweight, inexpensive, stable, and easy to calibrate and use. Human observers can view the difference between two displayed colors, or match an existing color by adjusting one LED primary set. The new prototype will create opportunities for further color science research and will provide an improved experimental experience for participating observers.

Digital Library: EI
Published Online: January  2017
