Spot photometers measure the luminance emitted or reflected from a small surface area in a physical environment. Because each measurement is limited to a single "spot," capturing dense luminance readings for an entire environment is impractical. In this paper, we provide preliminary results demonstrating the potential of an off-the-shelf commercial camera to operate as a 360° luminance meter. Our method uses the Ricoh Theta Z1 camera, which provides a full 360° omnidirectional field of view and an API to access the camera's minimally processed RAW images. Working from these images, we describe a calibration method that maps RAW images captured under different exposure and ISO settings to luminance values. By combining the calibrated sensor with multi-exposure high-dynamic-range imaging, we provide a cost-effective mechanism to capture dense luminance maps of environments. Our results show that our luminance meter performs well when validated against a significantly more expensive spot photometer.
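To make the calibration-plus-HDR pipeline concrete, the sketch below shows one way such a merge could be implemented, assuming a linear RAW response and pixel values normalized to [0, 1] by the sensor's white level. The constant `K_CAL` and both function names are hypothetical stand-ins for the paper's fitted calibration, not its actual code.

```python
import numpy as np

# Hypothetical calibration constant relating linearized RAW response to
# absolute luminance; in practice it would be fit against readings from a
# reference spot photometer.
K_CAL = 1.0

def raw_to_luminance(raw, exposure_s, iso, k_cal=K_CAL):
    """Map linear RAW values (normalized to [0, 1]) to luminance in cd/m^2.

    Assumes the RAW response is linear, so luminance is proportional to
    raw / (exposure_time * gain), with gain taken here as ISO / 100.
    """
    return k_cal * raw / (exposure_s * (iso / 100.0))

def merge_hdr_luminance(raw_stack, exposures_s, isos):
    """Merge a bracketed RAW stack into one dense luminance map.

    Per-exposure luminance estimates are averaged with a hat weighting
    that discards pixels near saturation or the noise floor.
    """
    num = np.zeros_like(raw_stack[0], dtype=np.float64)
    den = np.zeros_like(num)
    for raw, t, iso in zip(raw_stack, exposures_s, isos):
        # Hat weight: peaks mid-range, falls to zero below 0.05 and above 0.95.
        w = np.clip(np.minimum(raw - 0.05, 0.95 - raw), 0.0, None)
        num += w * raw_to_luminance(raw, t, iso)
        den += w
    return num / np.maximum(den, 1e-12)
```

The weighted average is one common choice for multi-exposure merging; it ensures that each pixel's luminance comes predominantly from the exposures in which that pixel is well within the sensor's usable range.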
Automation of inventory management and handling has transformed large retail fulfillment centers, where hundreds of autonomous agents often scurry about fetching and delivering products to fulfill customer orders. Repetitive movements such as these are ideal for a robotic platform to perform. One of the major hurdles for an autonomous system in a warehouse is accurate robot localization in a dynamic industrial environment. Previous LiDAR-based localization schemes such as adaptive Monte Carlo localization (AMCL) are effective in indoor environments and can be initialized in new environments with relative ease. However, AMCL is negatively affected by accumulated odometry drift and relies primarily on a single sensing modality for scene understanding, which limits localization performance. We propose a robust localization system that combines multiple sensor sources and deep neural networks for accurate real-time localization in warehouses. Our system employs a novel architecture consisting of multiple heterogeneous deep neural networks, aggregated in a single multi-stream framework that produces a final probability distribution over the robot's location. Ideally, the integration of multiple sensors yields a system that remains robust even when one sensor fails to provide reliable scene information.
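The abstract does not specify the individual network architectures, so the sketch below is only an illustration of the multi-stream fusion idea under stated assumptions: each modality is encoded by a simple MLP stand-in for the paper's heterogeneous subnetworks, the embeddings are concatenated, and a softmax head produces a distribution over a discretized grid of candidate poses. All layer sizes, input dimensions, and the class name are hypothetical.

```python
import torch
import torch.nn as nn

class MultiStreamLocalizer(nn.Module):
    """Illustrative multi-stream fusion network (not the paper's exact model).

    Each sensor modality gets its own encoder; the embeddings are
    concatenated and mapped to a probability distribution over a
    discretized grid of candidate robot locations.
    """

    def __init__(self, lidar_dim=360, image_dim=512, odom_dim=3, n_cells=1024):
        super().__init__()
        self.lidar_enc = nn.Sequential(nn.Linear(lidar_dim, 128), nn.ReLU())
        self.image_enc = nn.Sequential(nn.Linear(image_dim, 128), nn.ReLU())
        self.odom_enc = nn.Sequential(nn.Linear(odom_dim, 32), nn.ReLU())
        self.head = nn.Linear(128 + 128 + 32, n_cells)

    def forward(self, lidar_scan, image_feat, odom):
        # Fuse per-modality embeddings into a single feature vector.
        fused = torch.cat([
            self.lidar_enc(lidar_scan),
            self.image_enc(image_feat),
            self.odom_enc(odom),
        ], dim=-1)
        # Softmax yields the final location probability distribution.
        return torch.softmax(self.head(fused), dim=-1)
```

Late fusion of this kind keeps each stream independent up to the final head, which is one way a system can degrade gracefully when a single sensor stops providing reliable scene information.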