The perceptual process of images is hierarchical: humans tend to first perceive global structural information, such as the shapes of objects, and then focus on local regional details, such as texture. Furthermore, structural information is widely believed to play the most important role in utility assessment and quality assessment tasks, especially in new scenarios such as free-viewpoint television, where synthesized views contain geometric distortions around objects. We thus hypothesize that, in certain application scenarios, the degradation of structural information in an image is more annoying to human observers than that of texture. To confirm this hypothesis, a bilateral filtering based model (BF-M) is proposed, informed by a recent subjective perceptual test. In the proposed model, bilateral filters are first used to separate structure from texture information in images. Features that capture object properties and features that reflect texture information are then extracted from the response and the residual of the bilateral filter, respectively. A contour-based, a shape-based, and a texture-based estimator are then built on the corresponding extracted features. Finally, the model is designed by combining the three estimators according to the target task. With this task-based model, one can investigate the role of structure/texture information in a given task by inspecting the optimized weights assigned to the estimators. In this paper, the hypothesis and the performance of the BF-M are verified on the CU-Nantes database as a utility estimator and on the SynTEX and IRCCyN/IVC-DIBR databases as a quality estimator. Experimental results show that (1) structural information does play the greater role in several tasks, and (2) the performance of the BF-M is comparable to state-of-the-art utility metrics as well as quality metrics designed for texture synthesis and view synthesis. This validates that the proposed model can also be applied as a task-based parametric image metric.
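As a sketch of the model's first stage, the following Python snippet shows how an image could be split into a structure layer (the bilateral filter response) and a texture residual, and how three estimator scores could be combined with task-optimized weights. It uses OpenCV's cv2.bilateralFilter; the function names, default filter parameters, and linear weighting scheme are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
import cv2  # OpenCV; cv2.bilateralFilter is an edge-preserving smoother

def separate_structure_texture(image, d=9, sigma_color=75.0, sigma_space=75.0):
    """Split an image into a structure layer (the bilateral filter response)
    and a texture layer (the residual). Filter parameters are illustrative."""
    structure = cv2.bilateralFilter(image, d, sigma_color, sigma_space)
    texture = image.astype(np.float32) - structure.astype(np.float32)
    return structure, texture

def bf_m_score(contour_score, shape_score, texture_score, weights):
    """Hypothetical combination of the three estimators; the weights would be
    optimized per task, revealing how much structure vs. texture matters."""
    w_contour, w_shape, w_texture = weights
    return (w_contour * contour_score
            + w_shape * shape_score
            + w_texture * texture_score)
```

In this reading, the contour and shape estimators would operate on the structure layer and the texture estimator on the residual, so the learned weights directly expose the structure/texture trade-off for the task at hand.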
In this paper we present a texture synthesis method that removes the need for users to set, or even understand, the parameters that affect the synthesized output. We accomplish this by first classifying each input texture sample into one of three texture types: regular, irregular, or stochastic. We found that textures within a class are synthesized well with similar parameters, so knowing the input texture class lets us provide a good starting set of parameters for the synthesis algorithm. Instead of requiring the user to manually select parameters, we simply ask whether the synthesized texture is satisfactory. If the output is not satisfactory, we adjust the parameters and try again until the user is happy with the result. Our implementation uses the image quilting method of [1] as the texture synthesis algorithm, together with texture classification; with small adjustments our method can be applied to other texture synthesis methods.
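The classify-then-refine loop described above can be sketched as a small Python driver. The preset values below and the four helper callables (classifier, synthesizer such as image quilting, a yes/no user prompt, and a parameter-adjustment rule) are hypothetical placeholders supplied by the caller, not values from the paper.

```python
# Hypothetical per-class parameter presets; the actual starting values for
# image quilting would come from the classification study described above.
PRESETS = {
    "regular":    {"block_size": 64, "overlap": 16, "tolerance": 0.05},
    "irregular":  {"block_size": 48, "overlap": 12, "tolerance": 0.10},
    "stochastic": {"block_size": 24, "overlap": 6,  "tolerance": 0.20},
}

def synthesize_until_satisfied(sample, classify, synthesize, is_satisfied, adjust):
    """Start from the preset for the sample's texture class, then adjust the
    parameters until the user accepts the output."""
    params = dict(PRESETS[classify(sample)])  # regular / irregular / stochastic
    while True:
        output = synthesize(sample, **params)
        if is_satisfied(output):   # the only input required from the user
            return output
        params = adjust(params)    # e.g. shrink blocks, widen the tolerance
```

Writing the loop as a higher-order function keeps it independent of any particular synthesis algorithm, matching the claim that the method transfers to other synthesizers with small adjustments.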
The goal in photography is generally to construct a model of scene appearance. Unfortunately, statistical variations caused by photon shot noise and other noise sources introduce errors into the raw value reported for each pixel sample. Rather than simply accepting those values as the best raw representation, the current work treats them as initial estimates of the correct values and computes an error model for each pixel's value. The value error models for all pixels in an image are then used to drive a type of texture synthesis that refines the pixel value estimates, potentially increasing both the accuracy and the precision of each value. Each refined raw pixel value is synthesized from the value estimates of multiple pixels with overlapping error bounds and similar context within the same image. The error modeling and texture synthesis algorithms are implemented in and evaluated using KREMY (KentuckY Raw Error Modeler, pronounced "creamy"), a free software tool created for this purpose.
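A minimal Python sketch of this refinement idea follows. It assumes a simple Poisson-plus-read-noise error model and restricts donor pixels to a local search window for tractability, whereas the paper draws on pixels anywhere in the image; the noise model, thresholds, and parameter names are illustrative assumptions, not KREMY's actual implementation.

```python
import numpy as np

def refine_raw(raw, gain=1.0, read_noise=2.0, patch=3, search=7, ctx_tol=3.0):
    """Refine each raw pixel value by averaging donor pixels whose error
    intervals overlap its own and whose surrounding patch looks similar.
    The noise model and thresholds here are illustrative assumptions."""
    raw = raw.astype(np.float64)
    h, w = raw.shape
    # Illustrative error bound: shot noise grows as sqrt(signal), plus read noise.
    sigma = np.sqrt(np.maximum(raw / gain, 0.0)) + read_noise
    lo, hi = raw - sigma, raw + sigma
    pr, sr = patch // 2, search // 2
    padded = np.pad(raw, pr, mode="reflect")

    def context(y, x):
        # Patch centered on (y, x); padding makes edges safe to index.
        return padded[y:y + patch, x:x + patch]

    out = np.empty_like(raw)
    for y in range(h):
        for x in range(w):
            donors = []
            for ny in range(max(0, y - sr), min(h, y + sr + 1)):
                for nx in range(max(0, x - sr), min(w, x + sr + 1)):
                    # Donor criteria: overlapping error bounds, similar context.
                    if (hi[ny, nx] >= lo[y, x] and lo[ny, nx] <= hi[y, x] and
                            np.mean(np.abs(context(ny, nx) - context(y, x))) <= ctx_tol):
                        donors.append(raw[ny, nx])
            out[y, x] = np.mean(donors)  # the pixel itself is always a donor
    return out
```

Because every donor's interval overlaps the target's, the average stays within plausible bounds of the original estimate while pooling multiple noisy samples, which is what potentially improves both accuracy and precision.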