Active Eye-in-Hand Data Management to Improve the Robotic Object Detection Performance

Published in Computers, 2019

Recommended citation: S. Pourya Hoseini A., Janelle Blankenburg, Mircea Nicolescu, Monica Nicolescu, David Feil-Seifer. "Active Eye-in-Hand Data Management to Improve the Robotic Object Detection Performance." Computers, 2019. https://www.mdpi.com/2073-431X/8/4/71/htm

Adding sources of sensory information can enhance a robot’s object detection capability. In vision-based object detection, observing objects of interest from different viewpoints not only improves general detection performance but is also central to handling occlusions. This paper proposes a robotic vision system that constantly uses a 3D camera and actively switches to a second RGB camera when necessary. The system detects objects in the view of the 3D camera, which is mounted on a humanoid robot’s head, and computes a confidence measure for its recognitions. When confidence in the correctness of a detection is low, the secondary camera, installed on the robot’s arm, is moved toward the object to obtain another perspective. Objects detected in the hand camera’s view are matched to the head camera’s detections, and their recognition decisions are then fused. The decision fusion method is a novel approach based on the Dempster–Shafer evidence theory. Significant improvements in object detection performance are observed after employing the proposed active vision system.

This article was chosen as the issue cover.
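The fusion step is based on Dempster–Shafer evidence theory; the paper defines how detection masses are assigned, but the sketch below is only a minimal illustration of Dempster's rule of combination, with made-up class labels ("cup", "bottle") and mass values, showing how two cameras' hypotheses could be fused in principle:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset of labels -> mass)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass on incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources are incompatible")
    # Normalize by 1 - K, where K is the total conflicting mass
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Hypothetical example: head- and hand-camera opinions over two classes.
theta = frozenset({"cup", "bottle"})          # frame of discernment
m_head = {frozenset({"cup"}): 0.5, frozenset({"bottle"}): 0.2, theta: 0.3}
m_hand = {frozenset({"cup"}): 0.6, frozenset({"bottle"}): 0.1, theta: 0.3}

fused = dempster_combine(m_head, m_hand)
print({tuple(sorted(s)): round(v, 3) for s, v in fused.items()})
```

In this toy example the combined mass on {cup} rises to about 0.76, reflecting that both views agree, while the residual mass on the full frame captures the remaining uncertainty.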