As content creation, editing, approval, prototyping, manufacturing, and consumption become ever more distributed, the ability to display a broad variety of colors becomes increasingly important. Displays that use RGB filters or even RGB backlights cannot span all spectra, or even all the colors that occur in nature. To improve the accuracy of spectral and color reproduction, there have been attempts to include additional color primaries in displays. Existing solutions, however, impact cost, scalability, or spatial resolution, and are predominantly applicable to projection systems. We propose an approach that combines diffraction grating extractors (Fattal et al., 2013) with the HANS imaging pipeline (Morovič et al., 2011), initially developed for printing. This combination offers access to a very large color and spectral gamut with the same backlights as those in commercial use today.
In this article, the authors present a method for assessing image quality in stereoscopic images: QUALITAS. The proposed method is inspired by features of the human visual system, such as contrast sensitivity, response to visual disparity, and perception of distance. The individual qualities of the stereo pair are not simply averaged: QUALITAS applies contrast band-pass filtering in the wavelet domain to both views, and in this way perceptually weights the left and right images depending on viewing conditions. The authors have tested the method on the LIVE 3D stereoscopic image database and compared the results with a wide set of image quality metrics from current research.
Motivated by the visual appearance of spatially denoised video sequences, we study the visibility of dynamic (temporal) noise across different spatial frequency bands. We conduct a subjective test with 22 observers. The test includes two types of patterns: static (spatial) noise and dynamic (spatiotemporal) noise, with eight spatial frequency bands for each pattern type. We obtain two main results. First, the contrast sensitivity to spatially low-frequency noise is significantly higher when the noise varies temporally. Second, noise visibility also depends on the content of the image or video: as the noise is masked by the content, it becomes less perceivable. As higher frame rates might be used in the future, a second test compared 24 fps and 48 fps; the results show that noise visibility is very similar at both rates. The significant increase in the visibility of spatially low-frequency noise under temporal variation should be taken into account in the design of future video processing methods.
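Stimuli restricted to a single spatial frequency band, as used in the test above, can be produced by band-pass filtering white noise in the Fourier domain. The sketch below is illustrative only; the paper's exact band definitions and stimulus generation are not reproduced here.

```python
import numpy as np

def band_limited_noise(size, f_lo, f_hi, rng=None):
    """Generate a noise pattern containing only spatial
    frequencies in [f_lo, f_hi) cycles/image by masking the
    Fourier spectrum of white Gaussian noise."""
    rng = np.random.default_rng(rng)
    noise = rng.standard_normal((size, size))
    fx = np.fft.fftfreq(size) * size          # cycles per image
    fr = np.hypot(*np.meshgrid(fx, fx))       # radial frequency
    mask = (fr >= f_lo) & (fr < f_hi)
    # Zero all coefficients outside the band, transform back.
    return np.real(np.fft.ifft2(np.fft.fft2(noise) * mask))
```

A temporal (dynamic) stimulus is then simply a sequence of independent such patterns, one per frame.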
In this paper we investigate a method for correcting an encoding artifact due to the periodic nature of video coding. The paper builds on earlier work [1–3] by incorporating temporal drift correction feedback in the encoder to prevent the visible artifacts that occur when an I-frame reset takes place. The method has been implemented in the H.264 JM 18.0 reference encoder and has been shown to significantly improve the perceived quality of the video compared to encoding without temporal drift correction.
In this article, we propose a joint halftoning and data hiding technique for color images. To ensure high quality of the printed image, the iterative color direct binary search (CDBS) halftoning algorithm is used. The proposed approach uses the commonly available cyan, magenta, and yellow colorants to hide data in the chrominance channels, with orientation modulation used for data embedding during the iterative CDBS halftoning stage. The detector uses PCA-learned components to extract the embedded data from the scanned image. Experimental results show that the proposed CDBS-based data hiding method offers both higher data hiding capacity and higher robustness to the print-and-scan channel than the state-of-the-art grayscale counterpart. The relatively high correct detection rate makes this approach suitable for applications that require exact extraction of embedded data in prints.
Halftoning is one of the key stages of any printing image processing pipeline. With colorant-channel approaches, a key challenge for matrix-based halftoning is the co-optimization of the matrices used for the individual colorants, which becomes increasingly complex and over-constrained as the number of colorants increases. Both the choice of screen angles (in clustered-dot cases) or structures, and the control over how the individual matrices relate to each other, and thus result in over- versus side-by-side printing of the colorants, impose restrictions that are challenging to reconcile. The solution presented in this paper relies on the benefits of a HANS pipeline, where local Neugebauer Primary use is specified at each pixel and where halftoning can be performed using a single matrix, regardless of the number of colorants used. Among the solution's benefits, we present the provably complete plane-dependence of the resulting halftones and an application to security printing.
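The single-matrix idea can be sketched as follows. This is an illustrative sketch under our own assumptions, not the paper's implementation: we assume each pixel carries area coverages for K Neugebauer Primaries that sum to 1 (with blank media counted as one of the NPs), so that one tiled threshold matrix selects which NP to print at each pixel, regardless of K.

```python
import numpy as np

def matrix_halftone(coverages, matrix):
    """Single-matrix halftoning sketch for HANS-style pipelines.

    coverages: (H, W, K) per-pixel area coverages of K Neugebauer
               Primaries; assumed to sum to 1 along the last axis.
    matrix:    (h, w) dither matrix of thresholds in [0, 1),
               tiled over the image.
    Returns an (H, W) index map selecting one NP per pixel.
    """
    H, W, K = coverages.shape
    h, w = matrix.shape
    t = np.tile(matrix, (H // h + 1, W // w + 1))[:H, :W]
    # Stack the coverages into cumulative intervals; a pixel is
    # assigned NP k when its threshold falls inside k's interval.
    cum = np.cumsum(coverages, axis=2)
    return np.argmax(t[..., None] < cum, axis=2)
```

Because the threshold comparison happens once per pixel against the cumulative NP intervals, adding colorants changes only K, not the matrix design.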
Image-based relighting (IBRL) renders the appearance of a subject in a novel lighting environment as a linear combination of the images of its reflectance field: the appearance of the subject lit from each incident lighting direction. Traditionally, a tristimulus color camera records the reflectance field as the subject is sequentially illuminated by broad-spectrum white light sources from each direction. Using a multispectral LED sphere and either a tristimulus (RGB) or monochrome camera, we photograph a still life scene to acquire its multispectral reflectance field: its appearance for every lighting direction for multiple incident illumination spectra. For the tristimulus camera, we demonstrate improved color rendition for IBRL when using the multispectral reflectance field, producing a closer match to the scene's actual appearance in a real-world illumination environment. For the monochrome camera, we also show close visual matches. We additionally propose an efficient method for acquiring such multispectral reflectance fields, augmenting the traditional broad-spectrum lighting basis capture with only a few additional images, equal in number to the desired number of spectral channels. In these additional images, we illuminate the subject by a complete sphere of each available narrow-band LED light source, in our case red, amber, green, cyan, and blue. From the full-sphere illumination images, we promote the white-light reflectance functions for every direction to multispectral, effectively hallucinating the appearance of the subject under each LED spectrum for each lighting direction. We also use polarization imaging to separate the diffuse and specular components of the reflectance functions, spectrally promoting these components according to different models.
We validate that the approximated multispectral reflectance functions closely match those generated by a fully multispectral omnidirectional lighting basis, suggesting a rapid multispectral reflectance field capture method which could be applied for live subjects.
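The core relighting step, rendering as a linear combination of reflectance field images, is straightforward. A minimal sketch (the array shapes and function name are our own, not from the paper):

```python
import numpy as np

def relight(reflectance_field, env_weights):
    """Image-based relighting as a linear combination.

    reflectance_field: (N, H, W, C) images, one per incident
                       lighting direction.
    env_weights:       (N,) intensities of the novel lighting
                       environment sampled at those N directions
                       (per-channel weights would be (N, C)).
    Returns the (H, W, C) relit image.
    """
    # Weighted sum over the lighting-direction axis.
    return np.tensordot(env_weights, reflectance_field, axes=1)
```

With a multispectral reflectance field, the same combination is applied per spectral channel before projecting down to the camera's color space.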
This paper proposes a method to analyze observed images of fluorescent objects influenced by mutual illumination and to estimate their spectral components. We suppose a general case where the entire surfaces of the fluorescent objects are affected by mutual illumination. First, we model the mutual illumination between two objects. It is shown that the spectral composition can be summarized by four components: (1) diffuse reflection, (2) diffuse-diffuse interreflection, (3) fluorescent self-luminescence, and (4) interreflection due to mutual fluorescent illumination. Each component has two unknown factors: a spectral function depending on wavelength and a weighting factor depending on pixel location. Second, an iterative algorithm is developed to solve this nonlinear estimation problem. Moreover, aiming at a general solution that is independent of the initial conditions, we adopt a stabilization index that enforces spectral smoothness and spatial smoothness. Finally, the feasibility of the proposed method is shown using spectral images of two adjacent fluorescent objects captured by a spectral imaging system in the visible range.
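The four-component decomposition can be written compactly. In the notation below (ours, chosen for illustration, not necessarily the authors'), the observed spectrum at pixel $x$ and wavelength $\lambda$ is

```latex
C(x,\lambda) \;=\; \sum_{k=1}^{4} w_k(x)\, S_k(\lambda),
```

where $S_1,\dots,S_4$ are the spectral functions of diffuse reflection, diffuse-diffuse interreflection, fluorescent self-luminescence, and mutual-fluorescent interreflection, and $w_k(x)$ are the per-pixel weights. The estimation is nonlinear because both $S_k$ and $w_k$ are unknown; the iterative algorithm alternately updates them under the spectral- and spatial-smoothness constraints.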
In this research, we evaluate the robustness, to noise and to changes in epidermis thickness, of a method that estimates five components (melanin, oxy-hemoglobin, deoxy-hemoglobin, shading, and surface reflectance) from the spectral reflectance of skin at five wavelengths. We also estimated the five components from measured images of age spots and circles under the eyes using the method. The evaluation showed that image noise must be 0.1% or less to accurately estimate the five components, and that the thickness of the epidermis affects the estimated values. Nevertheless, by applying the method to measured spectral images, we could acquire the distribution of the major causative components of age spots and circles under the eyes.
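Estimating chromophore densities from a handful of wavelengths is often posed as a linear unmixing problem. The sketch below assumes a linearized (modified Beer-Lambert style) model in which log-reflectance is a mixture of component absorbance spectra; this is a generic illustration of that class of method, not the authors' exact skin model.

```python
import numpy as np

def estimate_components(log_reflectance, absorbance):
    """Least-squares spectral unmixing sketch.

    log_reflectance: (P, 5) negative log reflectance at the 5
                     wavelengths for P pixels.
    absorbance:      (5, K) absorbance spectra of K components
                     (e.g. melanin, oxy-/deoxy-hemoglobin), with
                     a column of ones for a wavelength-flat
                     shading/offset term.
    Returns (P, K) per-pixel component densities.
    """
    sol, *_ = np.linalg.lstsq(absorbance, log_reflectance.T,
                              rcond=None)
    return sol.T
```

Such a least-squares fit also makes the sensitivity findings plausible: small spectral perturbations (noise, or an epidermis-thickness change altering effective absorbance) propagate directly into the recovered densities.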
Blood pressure is usually measured with a contact device called a sphygmomanometer cuff. Recently, blood pressure has also become measurable with portable devices such as smart watches, which take advantage of progress in mobile technology. Even with such mobile devices, however, contact measurement is still required, which is one of the biggest limitations for monitoring. In this paper, we propose a non-contact, video-based method for estimating pulse transit time (PTT) based on a quantitation method for hemoglobin level. The correlation between PTT measured by the proposed method and blood pressure measured with a sphygmomanometer cuff ranged from -0.5792 to -0.7801, which confirms the effectiveness of the proposed method.
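Once pulse waveforms have been extracted from two skin regions in the video (via the hemoglobin quantitation the paper describes, which is not reproduced here), PTT is the delay between them. A common, illustrative way to estimate that delay is cross-correlation:

```python
import numpy as np

def pulse_transit_time(sig_a, sig_b, fs):
    """Estimate the delay of sig_b relative to sig_a in seconds.

    sig_a, sig_b: zero-mean pulse waveforms sampled from two skin
                  regions (e.g. forehead and palm).
    fs:           sampling rate in Hz (the video frame rate).
    """
    # Peak of the cross-correlation gives the best-aligning lag.
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs
```

At typical video frame rates the lag resolution is coarse (1/fs seconds), so practical systems interpolate around the correlation peak; the sketch omits that refinement.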