Robot Uses Evil Alter Ego to Learn Reliable Grasping

Adversarial grasping helps robots learn better ways of picking up and holding onto objects

Image: CMU
CMU and Google researchers are using an adversarial training strategy to improve manipulation tasks such as grasping. The adversary can be a robot that tries to disrupt another robot grasping an object (as in the photo above), or it can be one of the arms of a dual-armed robot trying to snatch the object from the other arm.

You know what’s really tedious and boring? Teaching robots to grasp a whole bunch of different kinds of objects. This is why roboticists have started to turn to AI strategies like self-supervised learning instead, where you let a robot gradually figure out on its own how to grasp things by trying slightly different techniques over and over again. Even with a big pile o’ robots, this takes a long time (thousands of robot-hours, at least), and while you can end up with a very nice generalized grasping framework at the end of it, that framework doesn’t have a very good idea of what a good grasp is.
 
The problem is that most of these techniques measure grasps in a binary fashion using very basic sensors: Was the object picked up and not dropped? If so, the grasp is declared a success. Real-world grasping doesn’t work exactly like that, as most humans can attest: just because you can pick something up without dropping it doesn’t necessarily mean that the way you’re picking it up is the best way, or even a particularly good way. And unstable, barely functional grasps make dropping the object significantly more likely, especially if anything unforeseen happens, a frustratingly common occurrence outside of robotics laboratories.
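To make the limitation concrete, here is a minimal sketch of that binary success signal. The names (`GraspResult`, `label_grasp`) are illustrative, not from the researchers' code; the point is that the label captures only "lifted and not dropped," with no notion of how stable the grasp actually is:

```python
from dataclasses import dataclass

@dataclass
class GraspResult:
    lifted: bool  # object left the surface
    held: bool    # object still in the gripper after the lift

def label_grasp(result: GraspResult) -> int:
    """Binary label: 1 = success, 0 = failure.
    Says nothing about whether the grasp is stable or robust."""
    return 1 if (result.lifted and result.held) else 0
```

A wobbly, fingertip-only grasp that happens not to drop the object gets exactly the same label as a firm, centered one, which is why the learned framework ends up with no sense of grasp quality.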

With this in mind, researchers from Carnegie Mellon University and Google decided to combine game theory and deep learning to make grasping better. Their idea was to introduce an adversary as part of the learning process—an “evil robot” that does its best to make otherwise acceptable grasps fail.

The concept of adversarial grasping is simple: It’s all about trying to grasp something while something else (the adversary, in research-speak) is making it difficult to do so:
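The training loop this implies can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' implementation: the class names, the scalar "quality" and "disturbance" values, and the zero-sum rewards are all hypothetical stand-ins for the real robot policies and physical shaking/snatching attacks:

```python
import random

class Protagonist:
    """Toy grasping learner (hypothetical stand-in for the grasp policy)."""
    def __init__(self):
        self.history = []
    def propose_grasp(self):
        return {"quality": random.random()}  # stand-in for a grasp pose
    def update(self, grasp, reward):
        self.history.append((grasp, reward))

class Adversary:
    """Toy adversary that shakes or snatches (hypothetical stand-in)."""
    def __init__(self):
        self.history = []
    def propose_disturbance(self, grasp):
        return random.uniform(0.0, 0.5)  # magnitude of the attack
    def update(self, disturbance, reward):
        self.history.append((disturbance, reward))

def attempt_grasp(grasp, disturbance=0.0):
    # A grasp "holds" if its quality margin exceeds the applied disturbance.
    return grasp["quality"] > 0.3 + disturbance

def adversarial_training_step(protagonist, adversary):
    """One round of the game: only grasps that survive the attack count."""
    grasp = protagonist.propose_grasp()
    if not attempt_grasp(grasp):          # the old binary test, unperturbed
        protagonist.update(grasp, reward=0)
        return False
    disturbance = adversary.propose_disturbance(grasp)
    robust = attempt_grasp(grasp, disturbance)
    protagonist.update(grasp, reward=1 if robust else 0)
    adversary.update(disturbance, reward=0 if robust else 1)  # zero-sum
    return robust
```

The key design point is the zero-sum reward: the adversary is paid exactly when the protagonist fails, so as both improve, a grasp only counts as a success if it survives an increasingly capable attack, pushing the protagonist toward genuinely stable grasps rather than merely lucky ones.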