To accomplish one of the tasks required of disaster response robots, this paper proposes a method for locating the points of 3D structured switches to be pressed by the robot at disaster sites, using RGB-D images acquired by a Kinect sensor attached to our disaster response robot. Our method consists of the following five steps: 1) Obtain RGB and depth images using an RGB-D sensor. 2) Detect the bounding box of the switch area in the RGB image using YOLOv3. 3) Generate 3D point cloud data of the target switch by combining the bounding box and the depth image. 4) Detect the center position of the switch button in the RGB image within the bounding box using a Convolutional Neural Network (CNN). 5) Estimate the center of the button's face in real space from the detection result of step 4) and the 3D point cloud data generated in step 3). In the experiment, the proposed method is applied to two types of 3D structured switch boxes to evaluate its effectiveness. The results show that the proposed method can locate the switch button accurately enough for robot operation.
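The core geometric operation behind step 5) can be sketched as a standard pinhole back-projection: given the button-center pixel detected by the CNN and the corresponding depth value, the 3D position in the camera frame follows from the camera intrinsics. This is a minimal illustrative sketch, not the paper's implementation; the intrinsic parameters `fx`, `fy`, `cx`, `cy` below are hypothetical Kinect-like values, not the authors' calibration.

```python
def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in meters into a 3D point
    in the camera coordinate frame, using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example with placeholder intrinsics (assumed values, for illustration only):
point = pixel_to_3d(u=320, v=240, depth_m=1.0,
                    fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

In practice the depth at the detected pixel would be taken from the registered depth image (or from the point cloud generated in step 3), and averaging depth over a small neighborhood helps suppress sensor noise.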
We present a method for synchronizing a three-dimensional (3D) point cloud of a 3D scene using a 3D Lidar and an RGB camera. The 3D points sensed by the 3D Lidar are not captured at the same instant, which makes it difficult to measure the correct shape of an object in a dynamic scene. Our method generates synchronized 3D points at an arbitrary time using linear interpolation in four-dimensional space-time. To interpolate a 3D point, we find the corresponding 3D point in consecutive frames by matching the pixel values captured by the RGB camera. The experimental results demonstrate the effectiveness of the presented method by depicting a synchronized 3D point cloud that is correctly shaped.
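The interpolation step described above can be illustrated with a short sketch: once a 3D point has been matched across two consecutive frames, its position at an arbitrary query time is obtained by linear interpolation between the two timestamped observations. This is a simplified sketch under the assumption of per-point timestamps; the function name and signature are illustrative, not from the paper.

```python
def interpolate_point(p0, t0, p1, t1, t):
    """Linearly interpolate between corresponding 3D points p0 (observed at
    time t0) and p1 (observed at time t1) to estimate the position at time t.
    Each point is an (x, y, z) tuple; this treats the trajectory as a line
    segment in four-dimensional space-time."""
    alpha = (t - t0) / (t1 - t0)
    return tuple(c0 + alpha * (c1 - c0) for c0, c1 in zip(p0, p1))

# A point that moved 1 m along x between frames at t=0.0 s and t=0.1 s,
# queried at the midpoint t=0.05 s:
mid = interpolate_point((0.0, 0.0, 1.0), 0.0, (1.0, 0.0, 1.0), 0.1, 0.05)
```

Applying this to every matched point yields a point cloud in which all points refer to the same instant, which is what allows the correct shape of a moving object to be recovered.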