The rapid development of Unmanned Aerial Vehicle (UAV) technology, also known as drones, has raised concerns about the safety of critical locations such as governmental buildings, nuclear stations, and crowded public spaces. A computer vision based approach to detecting these threats appears to be a viable solution due to its various advantages. We envision an autonomous drone detection and tracking system for the protection of strategic locations. It has been reported numerous times that one of the main challenges for aerial object recognition with computer vision is discriminating birds from the targets. In this work, we use two-dimensional scale-, rotation-, and translation-invariant Generic Fourier Descriptor (GFD) features and classify targets as drones or birds with a neural network. To train this system, a large dataset composed of bird and drone images was gathered from open sources. We achieve up to an 85.3% overall correct classification rate.
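For concreteness, a minimal sketch of how such GFD features can be computed is given below, assuming the standard formulation (polar resampling around the shape centroid, Fourier magnitudes, normalization by the DC term). The function name and frequency counts are illustrative choices, not the paper's implementation; the resulting descriptor vector is what a neural network classifier like the one above would consume.

```python
import numpy as np

def generic_fourier_descriptor(shape_img, radial_freqs=4, angular_freqs=9):
    """Sketch of a translation-, rotation-, and scale-invariant GFD.

    Translation invariance: coordinates are taken relative to the centroid.
    Rotation invariance: only Fourier magnitudes are kept.
    Scale invariance: magnitudes are normalized by the DC term (shape area).
    """
    ys, xs = np.nonzero(shape_img)            # foreground pixels of the mask
    r = np.hypot(ys - ys.mean(), xs - xs.mean())
    theta = np.arctan2(ys - ys.mean(), xs - xs.mean())
    r_max = r.max() + 1e-9                    # guard against degenerate shapes

    feats = []
    for rho in range(radial_freqs):           # radial frequency
        for phi in range(angular_freqs):      # angular frequency
            arg = 2.0 * np.pi * (r / r_max) * rho + phi * theta
            feats.append(np.abs(np.exp(-1j * arg).sum()))
    feats = np.asarray(feats)
    return feats / feats[0]                   # feats[0] == shape area (DC term)
```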
In automatic picking by a robot, it is important to estimate the grasping parameters (grasping position, direction, and angle) of the object. In this paper, we propose a method that approximates an object with a primitive shape in order to estimate the grasping parameters. The basic idea of this research is to approximate the object by an object primitive (hexahedron, cylinder, or sphere) based on the object's surface. First, we classify the surface shapes that constitute the object using a 3D deep neural network (3D-DNN). Then, we approximate the object with an object primitive using the 3D-DNN recognition result. After that, we estimate the grasping parameters based on preset grasping rules. The success rate of approximating the object primitive with our method was 94.7%, which is 6.7% higher than that of 3D ShapeNets using a 3D-DNN. In a grasping simulation using Gazebo, the grasping success rate of our method was 85.6%.
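As an illustration of the rule-based final step, the sketch below maps a fitted primitive to grasping parameters using simple top-down rules. The Primitive type and the specific rules are hypothetical placeholders; the paper's actual preset grasping rules are not reproduced here.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Primitive:
    kind: str            # "hexahedron", "cylinder", or "sphere"
    center: np.ndarray   # (3,) centroid in the robot frame
    axis: np.ndarray     # (3,) principal axis (unused for spheres)
    size: np.ndarray     # (3,) extents, or [radius, 0, 0] for spheres

def grasp_parameters(p: Primitive):
    """Map a fitted primitive to (position, approach direction, gripper angle).
    These rules are illustrative placeholders, not the paper's preset rules."""
    position = p.center                        # grasp at the centroid
    approach = np.array([0.0, 0.0, -1.0])      # top-down approach
    if p.kind == "sphere":
        angle = 0.0                            # any closing angle works
    elif p.kind == "cylinder":
        # close the fingers perpendicular to the cylinder's axis
        angle = np.arctan2(p.axis[1], p.axis[0]) + np.pi / 2.0
    else:  # hexahedron: close across the shorter horizontal extent
        angle = 0.0 if p.size[0] <= p.size[1] else np.pi / 2.0
    return position, approach, angle
```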
This paper presents an accurate and robust surgical instrument recognition algorithm to be used as part of a Robotic Scrub Nurse (RSN). Surgical instruments are often cluttered, occluded, and subject to specular reflections, which pose a challenge for conventional vision algorithms. A learning-through-interaction paradigm is proposed to tackle this challenge. The approach combines computer vision with robot manipulation to achieve active recognition: the unknown instrument is first segmented out as a blob and its pose is estimated; the RSN system then picks it up and presents it to an optical sensor in an established pose; lastly, the instrument is recognized with high confidence. Experiments were conducted to evaluate the performance of the proposed segmentation and recognition algorithms, respectively. We find that the proposed patch-based segmentation algorithm and the instrument recognition algorithm greatly outperform their benchmark comparisons. These results indicate the applicability and effectiveness of our RSN system in performing accurate and robust surgical instrument recognition.
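A minimal sketch of a patch-based segmentation step in the spirit of the above is shown below. The per-patch classifier is a hypothetical brightness test standing in for whatever model the paper trains, and all function and parameter names are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def patch_based_segmentation(image, patch=16, is_instrument=None):
    """Tile the image into patches, classify each patch, and merge adjacent
    positive patches into blobs. The per-patch test is a placeholder, not
    the classifier used in the paper."""
    if is_instrument is None:
        is_instrument = lambda tile: tile.mean() > 0.5   # placeholder test
    h, w = image.shape[:2]
    grid = np.zeros((h // patch, w // patch), dtype=bool)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            tile = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            grid[i, j] = is_instrument(tile)
    labels, n_blobs = ndimage.label(grid)     # connect adjacent positive patches
    # each blob is returned as the pixel-space origins of its member patches
    return [np.argwhere(labels == k + 1) * patch for k in range(n_blobs)]
```

Each returned blob would then be handed to the pose estimation and pick-and-present steps described above.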
In this paper, we propose an accurate and robust video segmentation method. The main contributions are threefold: (1) multiple cues (appearance and shape) are explicitly used and adaptively combined to determine the segment probability; (2) motion is implicitly used to compute the shape cue; and (3) segment labeling is improved by utilizing geodesic graph cuts. Experimental results show the effectiveness of the proposed method.
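As a sketch of contributions (1) and (2), the snippet below warps the previous frame's mask by optical flow to obtain a shape cue and mixes it adaptively with an appearance probability. This construction is an assumption made for illustration, not the paper's exact formulation, and how the mixing weight is set is left unspecified here.

```python
import numpy as np

def shape_cue_from_motion(prev_mask, flow):
    """Warp the previous frame's segmentation mask by optical flow to obtain
    the current frame's shape cue (assumed construction, for illustration)."""
    h, w = prev_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    return prev_mask[src_y, src_x].astype(float)   # backward warping

def combined_probability(p_appearance, p_shape, w):
    """Adaptive linear combination of the appearance and shape cues."""
    w = np.clip(w, 0.0, 1.0)
    return w * p_appearance + (1.0 - w) * p_shape
```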