The edge response in the retinal image is the first step by which human vision recognizes the outside world. A variety of receptive field models describing this impulse response have been proposed, and models satisfying the uncertainty principle have attracted particular interest because they minimize the product (Δx)(Δw) of spatial and spectral spread. Among the typical edge response models, the Gabor function and the second Gaussian derivative (GD2) remain the strongest candidates: D. Marr and R. Young support GD2, while many vision researchers prefer Gabor. Such retinal edge response models are used for image sharpening.

Unlike conventional image sharpening filters, this paper proposes a novel sharpening filter obtained by modifying the Lanczos resampling filter. The Lanczos filter is used in image scaling to resize digital images; it ordinarily interpolates discretely sampled points and thus acts as a smoothing filter, not a sharpening one. The Lanczos kernel is given by the product of the sampling sinc function and a scaled sinc function, where the sinc function expanded by the scale "s" plays the role of a window function. The author noticed that inverse scaling of the Lanczos window turns the filter from a smoothing filter into a sharpening one.

This paper demonstrates how the proposed model performs in comparison with Gabor and GD2.
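A minimal numerical sketch of the two kernels follows. The standard Lanczos kernel sinc(x)·sinc(x/s) is as the abstract describes; the "inverse-scaled" variant sinc(x)·sinc(s·x) is only an assumed reading of the paper's inverse scaling idea, not the paper's exact filter.

```python
import numpy as np

def lanczos_kernel(x, s=3.0):
    """Standard Lanczos kernel: the sampling sinc windowed by a sinc stretched by s."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / s)   # np.sinc is the normalized sinc: sin(pi x)/(pi x)
    return np.where(np.abs(x) < s, out, 0.0)

def inverse_scaled_kernel(x, s=3.0):
    """Assumed reading of 'inverse scaling': the window sinc is compressed
    (x * s) instead of stretched (x / s), producing negative lobes next to
    the center tap, a sharpening-like (center-surround) pattern."""
    x = np.asarray(x, dtype=float)
    return np.sinc(x) * np.sinc(x * s)

taps = np.linspace(-3, 3, 13)        # half-integer tap positions
print(lanczos_kernel(taps))          # interpolation (smoothing) taps
print(inverse_scaled_kernel(taps))   # hypothetical sharpening taps
```

At half-integer offsets the inverse-scaled kernel takes negative values around a positive center, which is the qualitative shape one expects of a sharpening filter.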
Digital preservation has evolved from an early-stage field based heavily on research and the sharing of information into a nascent industry based on practical activity. In this transition there is a risk that the vital activity of sharing information and expertise declines in favor of the day-to-day practicalities of caring for content. This work explores how the Preservation Action Registries (PAR) Initiative can help to bridge that gap and, in doing so, create new opportunities that make automated digital preservation a practical reality even for non-expert users. It does so by describing a proof-of-principle demonstration of the automated application of digital preservation policy, and of subsequent changes to that policy.
The rise of cheaper and more accurate genotyping techniques has led to significant advances in understanding the genotype-phenotype map. However, progress is currently bottlenecked by labor-intensive or slow phenotype data collection. We propose an algorithm to automatically estimate the canopy height of a row of plants in field conditions in a single pass of a moving robot. A downward-pointing stereo sensor collects a series of stereo image pairs. The depth images are then converted to height-above-ground images, from which height contours are extracted. The separate height contours corresponding to each frame are then concatenated to construct a single height contour representing one row of plants in the plot. Since the process is automated, data can be collected throughout the growing season with very little manual labor, complementing the already abundantly available genotypic data. Using experimental data from seven plots, we show that our proposed approach achieves a height estimation error of approximately 3.3%.
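A minimal sketch of the per-frame pipeline, assuming a known camera height above locally flat ground and a simple per-column maximum for the contour (the frame-alignment step the paper performs is omitted):

```python
import numpy as np

def height_above_ground(depth, camera_height_m):
    """Convert a downward-looking depth image (meters from the sensor)
    into height-above-ground values."""
    return camera_height_m - depth

def column_height_contour(height_img):
    """Collapse a height image to a 1-D canopy contour by taking the
    maximum plant height in each image column."""
    return np.nanmax(height_img, axis=0)

# Concatenate per-frame contours into one contour for the whole row;
# stand-in random depth frames replace real stereo output here.
frames = [np.random.rand(480, 640) * 2.0 for _ in range(3)]
contours = [column_height_contour(height_above_ground(d, camera_height_m=2.5))
            for d in frames]
row_contour = np.concatenate(contours)
```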
White balancing is a fundamental step in the image processing pipeline. The process involves estimating the chromaticity of the illuminant source and using that estimate to correct the image, removing any color cast. Given the importance of the problem, there has been much previous work on illuminant estimation; previous methods are either more accurate but slow and complex, or fast and simple but less accurate. In this paper, we propose a method for illuminant estimation that uses (i) fast features known to be predictive for illuminant estimation, (ii) single-feature decision boundaries in ensembles of multivariate regression trees, and (iii) trees constructed to minimize a multivariate distance measure appropriate for illuminant estimation. The result is an illuminant estimation method that is simultaneously faster, simpler, and more accurate than prior approaches.
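As a rough stand-in for the ensemble idea, the sketch below bags multi-output regression trees that map per-image features to an (r, g) chromaticity estimate. Note that scikit-learn trees split on a single feature per node, matching the single-feature decision boundaries above, but they minimize per-output MSE rather than the paper's multivariate distance measure, so this is an approximation only; the data here is synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 8))   # fast per-image features (stand-in data)
y = rng.random((500, 2))   # illuminant chromaticity targets (r, g)

def fit_bagged_trees(X, y, n_trees=25, max_depth=6):
    trees = []
    for i in range(n_trees):
        idx = np.random.default_rng(i).integers(0, len(X), len(X))  # bootstrap sample
        trees.append(DecisionTreeRegressor(max_depth=max_depth).fit(X[idx], y[idx]))
    return trees

def predict(trees, X):
    # Average the per-tree chromaticity estimates.
    return np.mean([t.predict(X) for t in trees], axis=0)

trees = fit_bagged_trees(X, y)
print(predict(trees, X[:3]))
```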
Abstraction in art often reflects human perception: areas of an artwork that hold the observer's gaze longest are generally more detailed, while peripheral areas are abstracted, just as they are mentally abstracted by the human visual process. The authors' artistic abstraction tool, Salience Stylize, uses deep learning to predict the areas of an image that the observer's gaze will be drawn to, which tells the system which areas to render in the most detail and which to abstract most heavily. The planar abstraction is performed by a Random Forest Regressor that splits the image into large planes and adds more detailed planes as it progresses, just as an artist starts with tonally limited masses and iterates to add fine details; the result is then completed with the authors' stroke engine. The authors evaluated the aesthetic appeal and the effectiveness of the detail placement in artwork produced by Salience Stylize through two user studies with 30 subjects.
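A minimal sketch of one way a Random Forest Regressor can produce planar abstraction: regress RGB on pixel coordinates, so shallow trees yield large flat planes and deeper trees add progressively finer planes. This depth-controlled mechanism is a guess at the paper's approach, not its documented implementation, and the salience weighting is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def planar_abstraction(img, max_depth=4, n_trees=10):
    """Approximate an image with axis-aligned color planes by regressing
    RGB values on (row, col) pixel coordinates."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)
    colors = img.reshape(-1, 3)
    model = RandomForestRegressor(n_estimators=n_trees, max_depth=max_depth)
    model.fit(coords, colors)
    return model.predict(coords).reshape(h, w, 3)

img = np.random.rand(64, 64, 3)                  # stand-in input image
coarse = planar_abstraction(img, max_depth=3)    # large tonal masses
fine = planar_abstraction(img, max_depth=8)      # finer planes for salient areas
```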
During recent years, deep learning methods have been shown to be effective for image classification, localization, and detection. Convolutional neural networks (CNNs) are used to extract information from images and are the main element of modern machine learning and computer vision methods. CNNs can be used for logo detection and recognition. Logo detection consists of locating and recognizing commercial brand logos within an image; such methods are useful in areas like online brand management and ad placement. The performance of these methods depends closely on the quantity and quality of the data, typically image/label pairs, used to train the CNNs. Collecting these pairs of images and labels, commonly referred to as ground truth, can be expensive and time-consuming. Multiple techniques try to solve this problem, either by transforming the available data using data augmentation methods or by creating new images from scratch or from other images using image synthesis methods. In this paper, we investigate the latter approach. We segment background images, extract depth information, and then blend logo images accordingly in order to create new realistic-looking images. This approach allows us to create an indefinite number of images with minimal manual labeling effort. The synthetic images can later be used to train CNNs for logo detection and recognition.
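A minimal sketch of the depth-aware compositing step, assuming an RGBA logo and a per-pixel depth map; the inverse-depth scaling rule and nearest-neighbor resize are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def blend_logo(background, depth, logo, x, y, ref_depth=1.0):
    """Alpha-blend an RGBA logo into the background at (x, y), scaled
    inversely with scene depth so nearer surfaces receive larger logos."""
    scale = ref_depth / max(depth[y, x], 1e-6)
    h = max(1, int(logo.shape[0] * scale))
    w = max(1, int(logo.shape[1] * scale))
    ri = np.arange(h) * logo.shape[0] // h       # nearest-neighbor row indices
    ci = np.arange(w) * logo.shape[1] // w       # nearest-neighbor col indices
    patch = logo[ri][:, ci]
    region = background[y:y + h, x:x + w]        # clipped at image borders
    alpha = patch[:region.shape[0], :region.shape[1], 3:4] / 255.0
    rgb = patch[:region.shape[0], :region.shape[1], :3]
    background[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * region).astype(background.dtype)
    return background

bg = np.zeros((240, 320, 3), dtype=np.uint8)
d = np.full((240, 320), 2.0)                     # synthetic depth map (meters)
logo = np.full((40, 80, 4), 255, dtype=np.uint8) # opaque white stand-in logo
out = blend_logo(bg, d, logo, x=50, y=60)
```

The paste coordinates and scaled patch size also yield a ground-truth bounding box for free, which is what keeps the manual labeling effort minimal.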
The rapid evolution of sensors and devices capable of facilitating the adoption of augmented reality (AR) tools in the everyday work routine has motivated many actors in the public sector and private industry to explore the possibility of optimizing their employees' workflows. Beyond the entertainment and gaming fields, AR tools are currently used in a large number of professional application areas, ranging from equipment maintenance to environment acquisition, modeling, and design. The objective of this paper is to present a use case for AR in the context of utility management, in which one of the major Italian operators in the electricity sector has decided to improve the efficiency of its employees' workflow through the adoption of an AR solution. Relying on the superimposition of virtual information on the acquired visual scene, workers can obtain a complete overview of the area to be inspected. Furthermore, the use of wearable equipment allows for seamless integration into maintenance procedures while complying with the prevailing safety regulations.
Skin detection is used in many computer vision applications, including image correction, image content filtering, image processing, and skin classification. In this study, we propose an accurate and effective method for detecting the most representative skin color in a person's face based on the face's center region, which is free from non-skin-colored features such as eyebrows, hair, and makeup. The face's center region is defined as the region horizontally between the eyes and vertically from the middle to the tip of the nose. The performance of the developed algorithm was verified on a data set of more than 300 facial images taken under various illumination conditions. Compared with previous works, the proposed algorithm achieves more accurate skin color detection with reduced computational load.
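A minimal sketch of the region definition, assuming eye and nose-tip landmarks are already available; the landmark names and the interpretation of "middle of the nose" as the midpoint between the eye line and the nose tip are assumptions:

```python
def face_center_region(left_eye, right_eye, nose_tip):
    """Return (x0, y0, x1, y1): horizontally between the eyes, vertically
    from the middle of the nose down to its tip, per the definition above."""
    x0, x1 = left_eye[0], right_eye[0]
    eye_y = (left_eye[1] + right_eye[1]) / 2
    y0 = (eye_y + nose_tip[1]) / 2        # assumed "middle of the nose"
    y1 = nose_tip[1]                      # tip of the nose
    return int(x0), int(y0), int(x1), int(y1)

# Example with hand-picked landmark coordinates in pixels:
print(face_center_region(left_eye=(120, 140), right_eye=(200, 142), nose_tip=(160, 210)))
```

Pixels inside this box can then be pooled (for example, by taking their median chromaticity) to obtain the representative skin color.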
Plant phenotyping, or the measurement of plant traits such as stem width and plant height, is a critical step in the development and evaluation of higher-yield biofuel crops. Phenotyping allows biologists to quantitatively estimate the biomass of plant varieties and therefore their potential for biofuel production. Manual phenotyping is costly, time-consuming, and error-prone, requiring a person to walk through the fields measuring individual plants with a tape measure and notebook. In this work we describe an alternative system consisting of an autonomous robot equipped with two infrared cameras that travels through fields, collecting 2.5D image data of sorghum plants. We develop novel image-processing algorithms to estimate plant height and stem width from the image data. Our proposed method has the advantage of working in situ using images of plants from only one side. This allows phenotypic data to be collected nondestructively throughout the growing cycle, providing biologists with valuable information on crop growth patterns. Our approach first estimates plant heights and stem widths from individual frames. It then uses tracking algorithms to refine these estimates across frames and avoid double counting the same plant in multiple frames. The result is a histogram of stem widths and plant heights for each plot of a particular genetically engineered sorghum variety. In-field testing and comparison with human-collected ground-truth data demonstrate that our system achieves 13% average absolute error for stem width estimation and 15% average absolute error for plant height estimation.
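For concreteness, the evaluation metric reported above (average absolute error relative to ground truth) can be computed as follows; the measurement values here are made up for illustration:

```python
import numpy as np

def mean_absolute_percent_error(estimates, ground_truth):
    """Average absolute error relative to ground truth, the metric reported
    above (13% for stem width, 15% for plant height)."""
    estimates = np.asarray(estimates, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.mean(np.abs(estimates - ground_truth) / ground_truth)) * 100

# Toy check with made-up plant height measurements (cm):
print(mean_absolute_percent_error([110, 95, 140], [100, 100, 150]))  # ~7.2%
```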