Toward the realization of a disaster response robot that can locate and manipulate a drill at an arbitrary position and posture in disaster sites, this paper proposes a method for estimating the position and orientation of the drill to be grasped and manipulated
by the robot arm, utilizing RGB and depth information acquired by a depth camera. In this paper's algorithm, first, using a conventional method, the target drill is detected on the basis of an RGB image captured by the depth camera, and 3D point cloud data representing the target
is generated by combining the detection results and the depth image. Second, using our proposed method, the generated point cloud data is processed to estimate the proper position and orientation for grasping the drill. More specifically, a pass-through filter is applied
to the 3D point cloud data obtained in the first step. The filtered point cloud is then segmented, and its features are classified so that the chuck and the handle are identified. The position for grasping is obtained by computing the centroid of the chuck point cloud. By applying Principal
Component Analysis, the orientation for grasping is obtained. Experiments were conducted in a simulator. The results show that our method can accurately estimate the proper configuration for autonomous grasping of a normal-type drill.
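The pose-estimation steps described above (pass-through filtering, centroid for the grasp position, PCA for the grasp orientation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names `passthrough_filter` and `estimate_grasp_pose` are hypothetical, and a synthetic elongated point cloud stands in for the chuck points that the paper extracts from the detected drill.

```python
import numpy as np

def passthrough_filter(points, axis, lo, hi):
    """Keep only points whose coordinate on the given axis lies in [lo, hi],
    analogous to a pass-through filter on a 3D point cloud."""
    mask = (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[mask]

def estimate_grasp_pose(points):
    """Estimate a grasp position and orientation from an (N, 3) point cloud:
    the centroid gives the position, and the eigenvectors of the covariance
    matrix (PCA) give the principal axes for the orientation.
    Returns (centroid, axes), with axes rows sorted by decreasing variance."""
    centroid = points.mean(axis=0)               # grasp position
    centered = points - centroid
    cov = centered.T @ centered / len(points)    # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # reorder to descending
    return centroid, eigvecs[:, order].T

# Hypothetical example: a cloud elongated along the x-axis stands in for
# the chuck; its first principal axis should align with x.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(500, 3)) * np.array([0.10, 0.01, 0.01])
cloud = passthrough_filter(cloud, axis=0, lo=-0.5, hi=0.5)
center, axes = estimate_grasp_pose(cloud)
print(center, axes[0])
```

Under these assumptions the centroid lands near the origin and the dominant axis is close to the x direction, which is how the centroid and the first principal component would serve as the grasp position and approach axis respectively.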