Fluorescence microscopy has become a widely used tool for studying various biological structures in in vivo tissues or cells. However, quantitative analysis of these biological structures remains a challenge due to their complexity, which is exacerbated by distortions caused by lens aberrations and light scattering. Moreover, manual quantification of such image volumes is an intractable and error-prone process, making automated image analysis methods crucial. This paper describes a segmentation method for tubular structures in fluorescence microscopy images using convolutional neural networks with data augmentation and inhomogeneity correction. The segmentation results of the proposed method are visually and numerically compared with those of other microscopy segmentation methods. Experimental results indicate that the proposed method performs better at correctly segmenting and identifying multiple tubular structures than the other methods.
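As an illustration of the pre-processing described above, the short Python sketch below applies a smoothing-based intensity inhomogeneity correction and a simple flip-and-rotate augmentation to a 2D microscopy slice. It is a minimal sketch under assumed parameters, not the paper's actual pipeline; the CNN itself, the correction model, and the augmentation policy of the proposed method are not reproduced here.

# Illustrative sketch only: smoothing-based inhomogeneity correction and a
# simple flip/rotation augmentation for a 2-D microscopy slice. Parameter
# values and the augmentation policy below are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def correct_inhomogeneity(img, sigma=50.0):
    """Divide by a heavily smoothed copy to flatten slow intensity drift."""
    bias = gaussian_filter(img.astype(np.float32), sigma=sigma)
    bias = np.clip(bias, 1e-6, None)          # avoid division by zero
    corrected = img / bias
    return corrected / corrected.max()        # rescale to [0, 1]

def augment(img, rng):
    """Random flips and a small random rotation (hypothetical policy)."""
    if rng.random() < 0.5:
        img = np.flipud(img)
    if rng.random() < 0.5:
        img = np.fliplr(img)
    angle = rng.uniform(-15, 15)
    return rotate(img, angle, reshape=False, mode="reflect")

rng = np.random.default_rng(0)
slice_2d = rng.random((256, 256)).astype(np.float32)   # stand-in for real data
sample = augment(correct_inhomogeneity(slice_2d), rng)
print(sample.shape)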
In recent years, the global penetration of the Internet and the rapid spread of mobile devices have led to an exponential rise in trade of counterfeit and pirated goods, with a negative impact on the profits of affected firms and, consequently, damage to employment and economic growth. This peculiar online trade has taken place mostly on the deep web, but today it has started to shift to common instant-messaging platforms and image-based social networks. In this context, this work presents a multimedia analytics platform that monitors image catalogues promoting potential counterfeit products on social networks in order to extract useful information (such as email addresses, WeChat or WhatsApp contacts, and external links to specific online marketplaces) and profile the potential counterfeiters. The preliminary results, derived from the image catalogues shared by various sellers on image-based social networks, show the effectiveness of the proposed multimedia analytics methodologies.
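As a rough illustration of the information-extraction step mentioned above, the Python sketch below pulls email addresses, WeChat or WhatsApp mentions, and external links out of the text accompanying an image post. The regular expressions and the example caption are illustrative assumptions; the platform's actual crawling, OCR, and seller-profiling components are not shown.

# Illustrative sketch only: regex-based extraction of contact hooks from a
# post caption. Patterns and the sample caption are assumptions.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
URL_RE = re.compile(r"https?://\S+")
MESSENGER_RE = re.compile(r"(?:wechat|whatsapp)\s*[:\-]?\s*([\w+ ]{5,20})", re.I)

def extract_contacts(caption):
    """Return the contact information a seller exposes in a post caption."""
    return {
        "emails": EMAIL_RE.findall(caption),
        "links": URL_RE.findall(caption),
        "messengers": MESSENGER_RE.findall(caption),
    }

caption = "DM for price. WeChat: bag_seller88, more pics at http://example.com/shop"
print(extract_contacts(caption))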
The 21st century has witnessed a blooming of the online fashion retailing business, as well as of Peer-to-Peer (P2P) marketplaces. However, many listings on P2P marketplaces lack complete and accurate information. In this research, we target the online fashion marketplace and try to retrieve garment color information to help sellers and buyers gain a more comprehensive knowledge of the fashion items. We focus on fashion product portraits and propose a system that autonomously finds the garment region in the image and retrieves the color information and patterns of the item. This system contains three modules: first, image segmentation is deployed to partition the image into perceptually meaningful areas, and features are designed to differentiate the garment region from the non-garment region. Second, we propose a classifier module based on unsupervised clustering methods to select the garment region based on the feature vector. For the last module, we study current color naming systems and color naming schemes on fashion websites, and propose a computational model that matches the color coordinates with the pre-defined color labels in the marketplace. Compared with other methods, thanks to the unsupervised learning methods that we use, our approach does not require a huge amount of training data labeled by human subjects.
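The last module can be illustrated with a nearest-neighbor color-naming sketch: a measured garment color is mapped to the closest label in a small, hypothetical marketplace palette. The paper's actual color model and label set are not given here, and a perceptual space such as CIELAB would normally be preferable to plain RGB distance.

# Illustrative sketch only: nearest-neighbor color naming against a tiny,
# hypothetical palette of marketplace labels.
import numpy as np

PALETTE = {                      # hypothetical marketplace labels
    "black": (20, 20, 20),
    "white": (245, 245, 245),
    "red": (200, 30, 40),
    "navy": (25, 35, 90),
    "beige": (220, 200, 170),
}

def name_color(rgb):
    """Return the palette label whose reference color is closest to rgb."""
    rgb = np.asarray(rgb, dtype=float)
    labels, refs = zip(*PALETTE.items())
    dists = np.linalg.norm(np.asarray(refs, dtype=float) - rgb, axis=1)
    return labels[int(np.argmin(dists))]

print(name_color((30, 40, 100)))   # -> "navy"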
The Fels method is a well-known method for assessing skeletal maturity from hand-wrist X-ray images. It estimates skeletal age by manually grading multiple indicators for different hand-wrist bones. Due to the large number of indicators that need to be measured, this is a time-consuming task, especially with large databases of X-ray images. Furthermore, it can be a very subjective task that depends on the observer. Therefore, automation of this process is in high demand. In this study, we propose a semi-automatic method to grade a subset of Fels indicators. This method is composed of four main steps: pre-processing, ROI extraction, segmentation, and Fels indicator grading. The most challenging step of the algorithm is segmenting the different bones in the Fels regions of interest (the wrist and finger I, III, and V ROIs), which is done using local Otsu thresholding and active contour filtering. The segmentation results are evaluated visually on a subset of the Fels study dataset.
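A rough sketch of the segmentation step, assuming scikit-image and placeholder parameters, is given below: a local Otsu threshold produces a coarse bone mask and an active contour refines one structure. The actual ROI definitions, parameter values, and post-processing of the study are not reproduced.

# Illustrative sketch only: local Otsu thresholding followed by an active
# contour, on a stand-in image; parameters are placeholder assumptions.
import numpy as np
from skimage import data, filters, morphology, segmentation

roi = data.camera()                              # stand-in for a wrist/finger ROI
local_thr = filters.rank.otsu(roi, morphology.disk(25))
mask = roi > local_thr                           # coarse bone/background split

# Refine one structure with an active contour initialised as a circle.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([256 + 100 * np.sin(s), 256 + 100 * np.cos(s)])
snake = segmentation.active_contour(
    filters.gaussian(roi, sigma=3), init, alpha=0.01, beta=1.0, gamma=0.01
)
print(mask.shape, snake.shape)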
We present a click-based interactive segmentation method for indoor scenes, which allows the user to select an object or region within the scene with a few clicks. The goal of the click-based approach is to provide the user with a simple method that reduces the amount of input required for segmentation. We first present an effective global segmentation strategy, which provides a rough separation of different textures. The user then places a few clicks to segment the target. A novel trimap assignment strategy is proposed to utilize the click information. To study the performance of our method, psychophysical experiments were conducted to compare our click-based approach with other existing methods.
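The trimap idea can be illustrated with a generic seeded segmenter: user clicks mark sure-foreground and sure-background pixels, everything else stays unknown, and the trimap is handed to a solver. The Python sketch below uses OpenCV's GrabCut purely as a stand-in; it is not the global segmentation strategy or trimap assignment proposed in this work, and the click coordinates and image are made up.

# Illustrative sketch only: turning user clicks into a trimap and running a
# generic seeded segmenter (GrabCut) on a synthetic stand-in image.
import cv2
import numpy as np

img = np.random.default_rng(0).integers(0, 255, (256, 320, 3), dtype=np.uint8)
mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)   # unknown region

fg_clicks = [(120, 200), (130, 215)]                 # (row, col) user clicks
bg_clicks = [(10, 10), (20, 300)]
for r, c in fg_clicks:
    cv2.circle(mask, (c, r), 8, cv2.GC_FGD, -1)      # sure foreground
for r, c in bg_clicks:
    cv2.circle(mask, (c, r), 8, cv2.GC_BGD, -1)      # sure background

bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
segment = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
print(segment.sum(), "pixels labelled as target")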
Optical coherence tomography (OCT) has become a popular method for macular degeneration diagnosis. Its advantages over other methods are that it is noninvasive and has high penetration and high resolution. However, the ever-present speckle noise and low contrast differences make it hard to segment the layers correctly for measurement. The aim of this paper is to show the importance of optimizing the retinal segmentation process. Current automatic segmentation algorithms are capable of detecting up to eleven layers in real time, but often fail on images with (strong) macular degeneration, which complicates the separation of the layers from each other. This paper summarizes current developments in retinal segmentation and shows the limits of current algorithms. As a comprehensive test process for this paper, we tested common image processing algorithms and implemented promising modern OCT segmentation methods. The result is a wide-scale analysis that can be used as a roadmap for optimizing the retinal segmentation process. Promising algorithms were found in the Canny edge detector, graph cuts, and dynamic programming. Combining the results of these algorithms (graph, gradient, and intensity information) and decreasing the search region step by step has proven to be a fast and reliable solution. All tests used 2D image data; 3D data could be used as well but plays no role in this paper. The testing process includes pre-filtering for image denoising, which can be done quickly and creates better preconditions for the segmentation process.
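The dynamic-programming component can be sketched compactly: treat the vertical gradient of a B-scan as a cost image and trace the minimum-cost path from the left to the right edge, allowing the row to move by at most one pixel per column. Pre-filtering, graph cuts, and the shrinking search region of the full pipeline are omitted, and the cost definition below is an assumption.

# Illustrative sketch only: dynamic programming for one layer boundary on a
# stand-in B-scan, using the vertical gradient as the (assumed) cost.
import numpy as np

def trace_boundary(bscan):
    """Return one row index per column along a minimum-cost path."""
    cost = -np.gradient(bscan.astype(float), axis=0)     # favour dark-to-bright edges
    rows, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(r - 1, 0), min(r + 2, rows)
            prev = np.argmin(acc[lo:hi, c - 1]) + lo      # best predecessor within +/- 1 row
            back[r, c] = prev
            acc[r, c] += acc[prev, c - 1]
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

bscan = np.random.default_rng(0).random((64, 128))        # stand-in for an OCT B-scan
print(trace_boundary(bscan)[:10])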