Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach

This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping. Our proposed Generative Grasping Convolutional Neural Network (GG-CNN) predicts the quality and pose of grasps at every pixel. This one-to-one mapping from a depth image overcomes limitations of current deep-learning grasping techniques by avoiding discrete sampling of grasp candidates and long computation times. Additionally, our GG-CNN is orders of magnitude smaller while detecting stable grasps with equivalent performance to current state-of-the-art techniques. The lightweight and single-pass generative nature of our GG-CNN allows for closed-loop control at up to 50 Hz, enabling accurate grasping in non-static environments where objects move and in the presence of robot control inaccuracies. In our real-world tests, we achieve an 83% grasp success rate on a set of previously unseen objects with adversarial geometry and 88% on a set of household objects that are moved during the grasp attempt. We also achieve 81% accuracy when grasping in dynamic clutter.
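To make the per-pixel, single-pass idea concrete, the following is a minimal sketch of a pixelwise grasp network in PyTorch. The layer sizes, the small encoder-decoder layout, and the three output heads (quality, angle, width) are illustrative assumptions for exposition, not the authors' exact GG-CNN architecture.

```python
# Minimal sketch of a per-pixel generative grasp network (PyTorch).
# Layer sizes and the three output heads are illustrative assumptions,
# not the authors' exact GG-CNN architecture.
import torch
import torch.nn as nn


class PixelwiseGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        # A small encoder-decoder keeps inference fast enough for closed-loop use.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # One output map per pixel for each grasp parameter.
        self.quality = nn.Conv2d(16, 1, kernel_size=1)  # grasp quality score
        self.angle = nn.Conv2d(16, 1, kernel_size=1)    # gripper rotation
        self.width = nn.Conv2d(16, 1, kernel_size=1)    # gripper opening width

    def forward(self, depth):  # depth: (B, 1, H, W)
        feat = self.decoder(self.encoder(depth))
        return self.quality(feat), self.angle(feat), self.width(feat)


# The grasp to execute is simply the pixel with the highest predicted quality,
# so a new grasp can be re-selected from every incoming depth frame.
net = PixelwiseGraspNet().eval()
with torch.no_grad():
    q, ang, w = net(torch.randn(1, 1, 300, 300))
best = torch.argmax(q)  # flattened index of the highest-quality pixel
```

Because the whole image is evaluated in one forward pass, there is no per-candidate sampling loop, which is what makes re-planning at each new depth frame (and hence closed-loop grasping) feasible.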