This conference brings together real-world practitioners and researchers in intelligent robots and computer vision to share recent applications and developments. Topics of interest include the integration of imaging sensors, supporting hardware, computers, and algorithms for intelligent robots, manufacturing inspection, characterization, and/or control. The decreasing cost of computational power and vision sensors has motivated the rapid proliferation of machine vision technology in a variety of industries, including aluminum, automotive, forest products, textiles, glass, steel, metal casting, aircraft, chemicals, food, fishing, agriculture, and archaeological, medical, and artistic products. Other industries, such as semiconductor and electronics manufacturing, have been employing machine vision technology for several decades.

Machine vision supporting handling robots is another main topic. Within intelligent robotics, another approach is sensor fusion: combining multi-modal sensors (audio, location, image, and video data, along with other 3D capture devices) for signal processing, machine learning, and computer vision. There is a need for accurate, fast, and robust detection of objects and their position in space. Object surfaces, backgrounds, and illumination are uncontrolled, and in most cases the objects of interest lie within a bulk of many others.

For both new and existing industrial users of machine vision, there are numerous innovative methods to improve productivity, quality, and compliance with product standards. Several broad problem areas have received significant attention in recent years. For example, some industries are collecting enormous amounts of image data from product monitoring systems; new and efficient methods are required to extract insight and to perform process diagnostics from this historical record. Regarding the physical scale of the measurements, microscopy techniques are nearing resolution limits in fields such as semiconductors, biology, and other nano-scale technologies. Techniques such as resolution enhancement, model-based methods, and statistical imaging may provide the means to extend these systems beyond current capabilities. Furthermore, obtaining real-time and robust measurements in-line or at-line in harsh industrial environments is a challenge for machine vision researchers, especially when the manufacturer cannot make significant changes to their facility or process.
Grid mapping is widely used to represent the environment surrounding a car or a robot for autonomous navigation. This paper describes an algorithm for evidential occupancy grid (OG) mapping that fuses measurements from different sensors, based on Dempster-Shafer theory, and is intended for scenes containing both stationary and moving (dynamic) objects. Conventional OG mapping algorithms tend to struggle in the presence of moving objects because they do not explicitly distinguish between moving and stationary objects. In contrast, evidential OG mapping allows for dynamic and ambiguous states (e.g., a LIDAR measurement alone cannot differentiate between moving and stationary objects) that are better aligned with the measurements sensors actually make. In this paper, we present a framework for fusing measurements as they are received from disparate sensors (e.g., radar, camera, and LIDAR) using evidential grid mapping. With this approach, we can form a live map of the environment and also alleviate the problem of having to synchronize sensors in time. We also designed a new inverse sensor model for radar that allows us to extract more information from object-level measurements by incorporating knowledge of the sensor's characteristics. We have implemented our algorithm in the OpenVX framework to enable seamless integration into embedded platforms. Test results show compelling performance, especially in the presence of moving objects.
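To make the fusion step concrete, the sketch below applies Dempster's rule of combination cell-wise to two evidential grids in Python/NumPy. The two-hypothesis frame {free, occupied} with an explicit unknown mass, the grid layout, and the use of conflict as a cue for moving objects are simplifying assumptions for illustration; the paper's actual frame also separates moving from stationary occupancy, and its radar inverse sensor model is richer than anything shown here.

```python
import numpy as np

# A minimal sketch of per-cell Dempster-Shafer fusion for an evidential
# occupancy grid over the frame {F (free), O (occupied)}. Masses are stored
# per cell as m[..., 0] = m(F), m[..., 1] = m(O), m[..., 2] = m(Theta),
# where Theta = {F, O} is the "unknown" hypothesis. The frame and the
# conflict-based dynamics cue are illustrative assumptions, not the
# paper's exact model.

def dempster_combine(m1, m2):
    """Combine two evidential grids with Dempster's rule, cell-wise.

    m1, m2: float arrays of shape (H, W, 3) holding (m(F), m(O), m(Theta)).
    Returns the fused grid and the per-cell conflict mass K.
    """
    f1, o1, t1 = m1[..., 0], m1[..., 1], m1[..., 2]
    f2, o2, t2 = m2[..., 0], m2[..., 1], m2[..., 2]

    # Conflict: one source supports free where the other supports occupied.
    K = f1 * o2 + o1 * f2
    norm = np.maximum(1.0 - K, 1e-9)  # avoid division by zero at total conflict

    fused = np.empty_like(m1)
    fused[..., 0] = (f1 * f2 + f1 * t2 + t1 * f2) / norm  # m(F)
    fused[..., 1] = (o1 * o2 + o1 * t2 + t1 * o2) / norm  # m(O)
    fused[..., 2] = (t1 * t2) / norm                      # m(Theta)
    return fused, K

# Example: fuse a LIDAR-derived grid with a radar-derived grid.
H, W = 64, 64
lidar = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))  # all cells unknown
radar = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))
lidar[32, 32] = [0.7, 0.0, 0.3]  # LIDAR evidence: cell likely free
radar[32, 32] = [0.0, 0.6, 0.4]  # radar evidence: cell likely occupied
fused, K = dempster_combine(lidar, radar)
# A high per-cell conflict K is one common cue that the cell's state has
# changed between measurements, i.e. a moving object may be involved.
print(fused[32, 32], K[32, 32])
```

Because each combination step is a pure cell-wise operation, grids from asynchronous sensors can be folded in one at a time as measurements arrive, which is the property the abstract exploits to avoid strict time synchronization.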
Naturalistic driving studies typically utilize a variety of sensors, including radar, kinematic sensors, and video cameras. While the main objective of such instrumentation is typically safety focused, with a goal of recording accidents and near accidents for later review, it also provides a valuable resource for a variety of transportation research. Some applications, however, require additional processing to improve the utility of the data. In this work, we describe a computer vision procedure for calibrating the front-view cameras of the Second Strategic Highway Research Program (SHRP 2). A longitudinal stability study of the estimated parameters across a small sample of cameras is presented, along with a proposed procedure for calibrating a larger number of cameras from the study. A simple use case is presented as one example of the utility of this work. Finally, we discuss plans for calibrating the complete set of approximately 3,000 cameras from this study.
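As background on what camera calibration recovers, the sketch below uses OpenCV's standard checkerboard procedure to estimate a camera's intrinsic matrix and lens-distortion coefficients. It is only a generic stand-in under assumed inputs (a hypothetical folder of checkerboard images and a 9x6 pattern); the procedure developed for the SHRP 2 cameras must work from already-recorded driving footage rather than calibration targets, so the paper's method necessarily differs.

```python
import glob
import cv2
import numpy as np

# A minimal sketch of what "calibrating a camera" estimates: the intrinsic
# matrix K and distortion coefficients. Pattern size, square size, and the
# image folder are assumptions for illustration only.

PATTERN = (9, 6)        # inner corners per checkerboard row/column (assumed)
SQUARE_SIZE_M = 0.025   # checkerboard square size in meters (assumed)

# 3D coordinates of the corners in the board's own frame (z = 0 plane).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)
objp *= SQUARE_SIZE_M

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib_images/*.png"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]  # (width, height)

# Recover focal lengths, principal point, and distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix:\n", K)
```

Once intrinsics like these are known for each camera, pixel measurements in the front-view video can be related to metric quantities in the scene, which is the kind of added utility the abstract refers to.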