Abstraction in art often reflects human perception: areas of an artwork that hold the observer's gaze longest are generally rendered in more detail, while peripheral areas are abstracted, just as they are abstracted by the human visual system. The authors' artistic abstraction tool, Salience Stylize, uses deep learning to predict the areas of an image that the observer's gaze will be drawn to, which informs the system about which areas to keep the most detail in and which to abstract most. The planar abstraction is performed by a Random Forest Regressor that splits the image into large planes and progressively adds more detailed planes, just as an artist starts with tonally limited masses and iterates to add fine details; the artwork is then completed with the authors' stroke engine. The authors evaluated the aesthetic appeal and the effectiveness of the detail placement in artwork produced by Salience Stylize through two user studies with 30 subjects.
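The saliency-weighted detail placement described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the function name `salience_abstract`, the box blur, and the linear blending rule are all assumptions standing in for the Random Forest planar abstraction and the stroke engine.

```python
import numpy as np

def salience_abstract(image, saliency, kernel=5):
    """Blend a detailed image with a blurred (abstracted) version,
    keeping detail where predicted saliency is high.
    Both arrays are 2-D floats in [0, 1] of the same shape."""
    # Simple box blur as a stand-in for the planar abstraction stage.
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            blurred[y, x] = padded[y:y + kernel, x:x + kernel].mean()
    # High-saliency pixels keep the original detail; low-saliency
    # (peripheral) pixels take the abstracted, blurred value.
    return saliency * image + (1.0 - saliency) * blurred
```

In a full system the `saliency` map would come from a learned gaze-prediction model rather than being supplied by hand.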
High dynamic range (HDR) imaging has recently been applied to video systems, including the next-generation ultrahigh-definition television (UHDTV) format. This format requires a camera with a dynamic range of over 15 f-stops and an S/N ratio equal to that of HDTV systems. Current UHDTV cameras cannot satisfy these conditions because their small pixel size reduces the full-well capacity of UHDTV image sensors relative to HDTV sensors. We propose a four-chip capturing method that combines three-chip and single-chip systems. A prism divides the incident light into two rays with intensities in the ratio m:1. Most of the incident light is directed to the three-chip capturing block; the remainder is directed to a single-chip capturing block, which avoids saturation under high exposure. High-quality HDR video can then be obtained by synthesizing the high-quality image from the three-chip system with the low-saturation image from the single chip. Herein, we detail this image synthesis method, discuss smooth matching between the spectral characteristics of the two systems, and analyze the modulation transfer function (MTF) response differences between the three-chip and single-chip capturing systems using human visual models.
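The core synthesis step, switching from the three-chip image to the scaled single-chip image where the former saturates, can be sketched as below. This is a simplified hard-switch version under assumed linear sensor responses; the paper's actual method additionally performs smooth spectral matching and accounts for MTF differences, and `synthesize_hdr` and its parameters are hypothetical names.

```python
import numpy as np

def synthesize_hdr(three_chip, single_chip, m, sat=1.0):
    """Combine the high-sensitivity three-chip image with the
    low-exposure single-chip image captured at exposure ratio m:1.
    Inputs are linear sensor outputs; `sat` is the saturation level
    of the three-chip sensor."""
    # Scale the single-chip image up to the three-chip exposure.
    scaled = single_chip * m
    # Use the three-chip value where it is unsaturated; otherwise
    # fall back to the scaled single-chip value to recover highlights.
    return np.where(three_chip < sat, three_chip, scaled)
```

With an exposure ratio of m = 4, a pixel that clips at 1.0 in the three-chip image but reads 0.3 in the single-chip image is reconstructed as 1.2, extending the usable dynamic range.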
Image quality assessment (IQA) has been an important issue in image processing. Although subjective quality assessment is the most faithful way to evaluate image processing algorithms, it is costly and time-consuming to obtain. Many objective quality assessment algorithms are therefore widely used as substitutes. Objective quality assessment is divided into three types based on the availability of a reference image: full-reference, reduced-reference, and no-reference IQA. No-reference IQA is more difficult than full-reference IQA because no reference image is available. In this paper, we propose a novel no-reference IQA algorithm that measures the contrast of an image. The proposed algorithm is based on the just-noticeable difference (JND), which models the human visual system (HVS). Experimental results show that the proposed method performs better than conventional no-reference IQA methods.
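A minimal sketch of a JND-style contrast measure is given below. It is not the paper's algorithm: the Weber-law threshold, the horizontal-neighbor comparison, and the function name `jnd_contrast_score` are illustrative assumptions only. The idea it demonstrates is the one the abstract relies on: a luminance difference only contributes to perceived contrast if it exceeds the HVS's just-noticeable difference at that background luminance.

```python
import numpy as np

def jnd_contrast_score(image, weber=0.02):
    """Fraction of horizontally adjacent pixel pairs whose luminance
    difference exceeds a Weber-law just-noticeable difference.
    `image` is a 2-D array of luminance values in [0, 1]."""
    left, right = image[:, :-1], image[:, 1:]
    # Weber's law: the JND threshold is proportional to the background
    # luminance, with a small floor to stay positive in dark regions.
    threshold = weber * np.maximum(left, 1e-3)
    visible = np.abs(right - left) > threshold
    # Higher score = more perceptually visible contrast edges.
    return visible.mean()
```

A uniform image scores 0.0 (no visible differences), while a high-contrast checker pattern scores 1.0, matching the intuition that the metric should track perceived, not merely numerical, contrast.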