Recently, various deep-neural-network (DNN)-based approaches have been proposed for single-image super-resolution (SISR). Despite their promising results on major structural regions such as edges and lines, they still suffer from limited performance on texture regions that consist of very complex and fine patterns. This is because, during the acquisition of a low-resolution (LR) image via down-sampling, these regions lose most of the high-frequency information necessary to represent the texture details. In this paper, we present a novel texture enhancement framework for SISR that effectively improves the spatial resolution in texture regions as well as along edges and lines. We call our method the high-resolution (HR) style transfer algorithm. Our framework consists of three steps: (i) generate an initial HR image from an interpolated LR image via an SISR algorithm, (ii) generate an HR style image from the initial HR image via down-scaling and tiling, and (iii) combine the HR style image with the initial HR image via a customized style transfer algorithm. Here, the HR style image is obtained by down-scaling the initial HR image and then repetitively tiling it into an image of the same size as the initial HR image. This down-scaling and tiling process stems from the observation that texture regions are often composed of small regions that are similar in appearance, albeit sometimes different in scale. The process creates an HR style image that is rich in details, which can be used to restore high-frequency texture details to the initial HR image via the style transfer algorithm. Experimental results on a number of texture datasets show that our HR style transfer algorithm provides more visually pleasing results than competing methods.
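The down-scaling-and-tiling step in (ii) can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the scale factor and the nearest-neighbour resampling are assumptions made for the sketch.

```python
import numpy as np

def make_hr_style_image(hr_img, scale=0.5):
    """Down-scale an (initial) HR image, then tile the result back to full size.

    `scale` is a hypothetical parameter; the actual down-scaling factor used in
    the paper is not specified here. Works on 2-D (grayscale) or 3-D (H, W, C)
    arrays.
    """
    h, w = hr_img.shape[:2]
    sh, sw = max(1, int(h * scale)), max(1, int(w * scale))
    # Nearest-neighbour down-scaling via index sampling (a stand-in for a
    # proper resampling filter).
    rows = np.arange(sh) * h // sh
    cols = np.arange(sw) * w // sw
    small = hr_img[rows[:, None], cols[None, :]]
    # Repetitively tile the small image until it covers the HR canvas, then crop.
    reps_y = -(-h // sh)  # ceiling division
    reps_x = -(-w // sw)
    tiled = np.tile(small, (reps_y, reps_x) + (1,) * (hr_img.ndim - 2))
    return tiled[:h, :w]
```

The tiled output has the same size as the initial HR image but is built from a down-scaled copy, so fine structure repeats densely across the frame, which is what makes it usable as a detail-rich style image.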
The addition of white noise to an image has been shown to increase the perceived sharpness of the image's blurred regions under certain conditions. Additive white noise has also been shown to increase the visual quality of a compressed image, a finding attributed, in large part, to the noise's ability to simulate textures lost during compression. To explore the perceptual underpinnings of this enhancing effect, in this paper we tested whether the noise can be tuned to properties of the source texture to provide even greater improvements in quality than white noise. We used a parametric texture-synthesis algorithm to generate statistically and spectrally shaped noise patterns, which were scaled in contrast and then added to the corresponding compressed texture regions. Subjects reported both the optimal contrast scaling factors and the associated quality-improvement scores relative to the distorted regions. Our results indicate that adding the shaped noise can provide markedly greater quality improvements than white noise, a finding which cannot be explained by the mere presence of high-frequency content. We discuss how the optimal contrast scalings might be predicted, and we examine the performance of existing quality-assessment algorithms on our enhanced images.
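A minimal sketch of the shaped-noise enhancement follows. The simple spectral-matching step below is a stand-in assumption for the full parametric texture-synthesis model, and `alpha` is a hypothetical contrast scaling factor playing the role of the values reported by subjects.

```python
import numpy as np

def spectrally_shape(white_noise, reference):
    """Impose the reference texture's amplitude spectrum on white noise,
    keeping the noise's random phase. This is a simplified stand-in for the
    statistical/spectral shaping described in the text."""
    mag = np.abs(np.fft.fft2(reference))
    phase = np.angle(np.fft.fft2(white_noise))
    shaped = np.real(np.fft.ifft2(mag * np.exp(1j * phase)))
    shaped = shaped - shaped.mean()
    return shaped / (shaped.std() + 1e-8)  # unit-variance noise pattern

def add_scaled_noise(distorted, noise, alpha):
    """Add a contrast-scaled, zero-mean noise pattern to a distorted region.
    `alpha` is the contrast scaling factor; pixel values are assumed in [0, 1]."""
    noise = noise - noise.mean()
    return np.clip(distorted + alpha * noise, 0.0, 1.0)
```

Sweeping `alpha` over a small range and comparing the results against the distorted region mirrors, in spirit, the subjects' task of reporting an optimal contrast scaling.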