Pages 1 - 2,  © Society for Imaging Science and Technology 2016
Digital Library: EI
Published Online: February  2016
Pages 1 - 8,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 10

The Intelligent Ground Vehicle Competition (IGVC) is one of four unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI). The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics and mobile-platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 23 years, the competition has challenged undergraduate, graduate and Ph.D. students with real-world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 80 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the four-day competition are highlighted. Finally, an assessment of the competition based on participation is presented.

Digital Library: EI
Published Online: February  2016
Pages 1 - 8,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 10

The computational framework, based on the conformal camera, is developed for processing visual information during smooth pursuit movements of a robotic eye. During smooth pursuit, the image of the tracked object remains nearly stationary while the image of a stationary background sweeps across the image plane of the camera. The background's image transformations, derived in the second-order approximation, enable anticipation of the perceptual outcome of the camera's pursuit. This can be used to support the correct visual perception of the moving object in front of the stationary background. These results complement the author's previous study on predictive image processing for visual stability during conformal camera movements resembling a primate's saccadic eye rotations. The visual information processing algorithms that can support visual stability during smooth pursuit and saccadic movements of an anthropomorphic robotic camera are needed for an autonomous robot's efficient interactions with the real world, in real time.

Digital Library: EI
Published Online: February  2016
Pages 1 - 9,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 10

An automatic system to extract terrestrial objects from aerial imagery has many applications in a wide range of areas. However, in general, this task has been performed by human experts manually, so that it is very costly and time consuming. There have been many attempts at automating this task, but many of the existing works are based on class-specific features and classifiers. In this article, the authors propose a convolutional neural network (CNN)-based building and road extraction system. This takes raw pixel values in aerial imagery as input and outputs predicted three-channel label images (building–road–background). Using CNNs, both feature extractors and classifiers are automatically constructed. The authors propose a new technique to train a single CNN efficiently for extracting multiple kinds of objects simultaneously. Finally, they show that the proposed technique improves the prediction performance and surpasses state-of-the-art results tested on a publicly available aerial imagery dataset. © 2016 Society for Imaging Science and Technology.
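The abstract above describes a network that maps raw pixels to a three-channel label image (building, road, background). The output stage of such a dense-prediction model can be sketched in NumPy: per-pixel class logits are turned into probabilities with a softmax over the channel axis, and the label image is the per-pixel argmax. This is a minimal sketch of the output head only, not the authors' trained CNN; the function names are illustrative.

```python
import numpy as np

def per_pixel_softmax(logits):
    """Turn (3, H, W) class logits into per-pixel probabilities
    over the three classes (building / road / background)."""
    # Subtract the per-pixel max for numerical stability.
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def label_image(logits):
    """Per-pixel hard label: the channel with the highest logit."""
    return np.argmax(logits, axis=0)
```

In a full system these logits would come from the CNN's final convolutional layer; the softmax/argmax step is what converts its three output channels into the predicted label image.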

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 10

Depth information is one of the most important elements in generating three-dimensional (3D) content. Stereo matching methods estimate depth information using the binocular characteristic. The estimated depth information is typically represented by a disparity value. Therefore, two slightly different viewpoints are used to find the disparity value. However, in homogeneous regions, finding corresponding points is problematic since the area is textureless. In order to solve this problem, we propose a pixel-based cost computation method using weighted distance information for cross-scale stereo matching. The proposed method uses a hierarchical structure to accurately estimate disparity values in homogeneous regions. We also employ the distance information to complement the pixel-based cost function. The experimental results show that the proposed method outperforms conventional cross-scale stereo matching in terms of disparity accuracy.
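The core objects in any pixel-based stereo cost computation are a cost volume (one matching cost per pixel per candidate disparity) and a winner-take-all disparity selection. The sketch below illustrates that pipeline with a plain absolute-difference cost plus a simple disparity-weighted penalty; the `alpha` weighting term is a hypothetical stand-in for the paper's weighted-distance information, and the cross-scale hierarchy is omitted.

```python
import numpy as np

def pixel_cost_volume(left, right, max_disp, alpha=0.01):
    """Build a (max_disp, H, W) cost volume for a rectified pair.

    Cost = |I_left(x) - I_right(x - d)| + alpha * d, where the
    alpha term is an illustrative distance weight, not the
    authors' exact formulation.
    """
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :w - d or None])
        cost[d, :, d:] = diff + alpha * d
    return cost

def winner_take_all(cost):
    """Pick, per pixel, the disparity with the lowest cost."""
    return np.argmin(cost, axis=0)
```

Cross-scale methods aggregate such per-pixel costs across a coarse-to-fine pyramid before the winner-take-all step, which is what stabilizes estimates in textureless regions.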

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 10

It is widely assumed that texture is generally characterized locally by two complementary aspects, a pattern and its strength. Based on this assumption and using the Local Binary Pattern (LBP) operator as texture descriptor, this work aims to implement an automatic weighting of the local blocks or regions characterizing a given face image. The work reports an improved version of the margin-based iterative search Simba algorithm for feature extraction in face recognition. The main contribution is twofold: (i) we extend the margin-based iterative search algorithm (Simba) to the Chi-square distance that computes dissimilarities between histograms; (ii) since we are interested in studying the relevance of individual blocks or local regions characterizing a given face image, we also extend the Simba algorithm so that one can compute the weights of each attribute as well as of subsets of attributes or blocks. The resulting weight vector is used initially for an automatic selection of attributes and/or blocks for face recognition with supervised learning based on a k-nearest neighbors classifier. Besides, in order to improve the performance of the face recognition task, we also make use of the Simba weight vector to weight the distance measures adopted by the k-NN classifier. The experimental results clearly show that the selection based on the automatic weighting outperforms classification based on all the features. Furthermore, selecting blocks is more effective than selecting attributes, and the Chi-square distance performs appreciably better than the Euclidean one.
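The two building blocks named in the abstract, the LBP operator and the Chi-square distance between histograms, can be sketched directly in NumPy. The sketch below uses the basic 8-neighbour LBP code with a fixed bit ordering (an assumption; several orderings exist in the literature) and the symmetric Chi-square form commonly used with LBP histograms; it is an illustration, not the paper's implementation.

```python
import numpy as np

def lbp_histogram(block, bins=256):
    """Normalized histogram of basic 8-neighbour LBP codes for a
    grayscale block. Each neighbour >= centre contributes one bit."""
    h, w = block.shape
    center = block[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Clockwise neighbour offsets; the bit order is an arbitrary choice.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = block[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

def chi_square(h1, h2, eps=1e-10):
    """Symmetric Chi-square dissimilarity between two normalized
    histograms; eps guards against empty bins."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

In a block-based scheme like the one described, each face image is divided into local blocks, one such histogram is computed per block, and a per-block weight (here, the Simba weight vector) scales each block's Chi-square contribution to the overall k-NN distance.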

Digital Library: EI
Published Online: February  2016
Pages 1 - 6,  © Society for Imaging Science and Technology 2016
Volume 28
Issue 10

Visual place recognition is a technology that can be used in many domains such as localizing historical photos, autonomous navigation and augmented reality. The mainstream of research in this domain has been based on the use of local invariant features like SIFT. Little attention has been given to region descriptors, which can encompass local and global visual appearance. In this paper, we provide an empirical study of a particular visual descriptor: covariance matrices. In order to enhance the discriminative power of the descriptor, multi-block based descriptors are designed and compared. We show further experimental results on matching test images with reference images acquired in dense urban scenes in the streets of the city of Paris. Experiments show that the multi-block based matching algorithms can lead to both high accuracy and scalability.
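A region covariance descriptor summarizes a rectangular image region by the covariance matrix of simple per-pixel features. The sketch below uses a common five-dimensional feature vector (pixel coordinates, intensity, and absolute gradients); the exact feature set is an assumption, since the abstract does not specify the authors' choice, and this is an illustration rather than their implementation.

```python
import numpy as np

def covariance_descriptor(region):
    """Region covariance descriptor: the 5x5 covariance matrix of
    per-pixel features [x, y, I, |dI/dx|, |dI/dy|]."""
    img = np.asarray(region, dtype=float)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(img)          # gradients along rows, columns
    feats = np.stack([xs.ravel(),
                      ys.ravel(),
                      img.ravel(),
                      np.abs(gx).ravel(),
                      np.abs(gy).ravel()])
    return np.cov(feats)               # 5x5 symmetric matrix
```

A multi-block variant, as studied in the paper, computes one such matrix per sub-block of the image and matches images by comparing the corresponding matrices; because covariance matrices live on a Riemannian manifold, comparisons typically use a log-Euclidean or generalized-eigenvalue distance rather than the plain Frobenius norm.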

Digital Library: EI
Published Online: February  2016