Recent work on image deblurring aided by inertial sensor data has shown promise. Separate work has shown that deep learning techniques are also useful for the image deblurring problem. Due to the lack of a proper dataset, however, deep learning techniques have not yet been successfully applied to image deblurring when inertial sensor data is also available. This paper proposes to generate a synthetic training and testing dataset that includes ground-truth and blurry image pairs as well as inertial sensor data recorded during the exposure time of each blurry image. To simulate real-world conditions, the proposed dataset, called DeblurIMUDataset, accounts for synchronization issues, rotation center shift, and the rolling shutter effect, as well as inertial sensor noise and image noise. The dataset is available online.
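To illustrate the connection between inertial measurements and blur, the following is a minimal sketch (not the authors' code) of how gyroscope samples recorded during an exposure can be turned into a per-pixel blur trajectory via planar homographies H = K R K⁻¹. The intrinsics, sample rate, and small-angle integration below are illustrative assumptions; rolling shutter, rotation center shift, and noise, which the dataset additionally models, are ignored here.

```python
import numpy as np

def rotations_from_gyro(gyro, dt):
    """Integrate angular velocities (rad/s, shape [N, 3]) into a list of
    camera rotation matrices, one per sample, using small-angle steps."""
    R = np.eye(3)
    rotations = []
    for w in gyro:
        wx, wy, wz = w * dt
        # Small-angle (first-order) rotation update.
        dR = np.array([[1.0, -wz,  wy],
                       [wz,  1.0, -wx],
                       [-wy,  wx, 1.0]])
        R = dR @ R
        rotations.append(R.copy())
    return rotations

def blur_trajectory(pixel, K, rotations):
    """Trace the path a pixel sweeps during the exposure by projecting it
    through the homography K R K^{-1} for each gyro sample."""
    K_inv = np.linalg.inv(K)
    p = np.array([pixel[0], pixel[1], 1.0])
    path = []
    for R in rotations:
        q = K @ R @ K_inv @ p
        path.append(q[:2] / q[2])
    return np.array(path)

# Example: a constant yaw of 0.5 rad/s over a 100 ms exposure sweeps the
# principal point horizontally by roughly f * tan(0.05) pixels.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])  # assumed intrinsics
gyro = np.tile([0.0, 0.5, 0.0], (100, 1))  # rad/s, 1 kHz samples
path = blur_trajectory((320.0, 240.0), K, rotations_from_gyro(gyro, dt=1e-3))
```

Averaging the image warped along such trajectories (one homography per scanline in the rolling shutter case) yields the synthetic blurry counterpart of a sharp ground-truth frame.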
We introduce a new image dataset for object detection and 6D pose estimation, named Extra FAT. The dataset consists of 825K photorealistic RGB images with annotations of the ground-truth location and rotation of both the virtual camera and the objects. A registered pixel-level object segmentation mask is also provided for object detection and segmentation tasks. The dataset includes 110 different 3D object models, rendered in five scenes with diverse illumination, reflection, and occlusion conditions.
In this paper we present a set of multispectral images covering the visible and near-infrared spectral range (400 nm to 1050 nm). The dataset is intended to provide spectral reflectance images of everyday objects, usable for silicon image sensor simulations. All images were acquired with our acquisition bench, and particular attention was paid to the processing steps in order to provide calibrated reflectance data. ReDFISh (Reflectance Dataset For Image sensor Simulation) is available at: http://dx.doi.org/10.18709/perscido.2020.01.ds289.
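As a rough illustration of the intended use, a sensor simulation from such reflectance data amounts to integrating reflectance × illuminant × quantum efficiency over wavelength for each pixel and channel. The sketch below is a hypothetical example, not the authors' pipeline; the wavelength grid matches the dataset's 400–1050 nm range, while the flat illuminant and Gaussian quantum-efficiency curve are illustrative assumptions.

```python
import numpy as np

# Sampling grid over the dataset's spectral range (nm), assumed 10 nm step.
wavelengths = np.arange(400.0, 1051.0, 10.0)

def sensor_response(reflectance, illuminant, qe):
    """Per-pixel channel response: integrate reflectance * illuminant * QE
    over wavelength with a rectangular rule on the sampling grid.
    reflectance: [H, W, L]; illuminant, qe: [L]."""
    step = wavelengths[1] - wavelengths[0]
    radiance = reflectance * illuminant  # spectrum reaching the sensor
    return (radiance * qe).sum(axis=-1) * step

# Toy example: flat 50% reflectance scene under a flat illuminant,
# with an illustrative Gaussian QE curve standing in for a red channel.
H, W, L = 4, 4, wavelengths.size
reflectance = np.full((H, W, L), 0.5)
illuminant = np.ones(L)
qe = np.exp(-((wavelengths - 650.0) / 120.0) ** 2)
img = sensor_response(reflectance, illuminant, qe)
```

With calibrated reflectance images, the same integral can be evaluated with measured illuminant spectra and real silicon QE curves to predict raw sensor values.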