Robots Learn Object Properties Through Touch and Movement, No Cameras Needed
Researchers from MIT, Amazon Robotics, and the University of British Columbia have developed a method that enables robots to identify an object's properties, such as weight, softness, or contents, using only internal sensors. The technique, described in a paper available on the arXiv preprint server, leverages proprioception: the robot's ability to sense its own motion and position through its joint encoders. These sensors measure the rotational position and speed of the robot's joints during an interaction, allowing the robot to "feel" the heft of an object much as a person senses the weight of a dumbbell through muscle tension.

The key to the approach is a simulation that combines models of both the robot and the object it is handling. Because the simulation is differentiable, the researchers can compute how small changes in an object's properties, such as mass or softness, affect the robot's joint trajectories. They built these simulations with NVIDIA Warp, an open-source library. The algorithm then adjusts the simulated properties until the simulated motion matches the actual joint encoder data from the robot, thereby identifying the object's properties. The system makes these identifications in a matter of seconds and needs only a single real-world trajectory of the robot's motion.

This low-cost, data-efficient method is particularly useful where cameras or external sensors are ineffective, such as dark basements or rubble-strewn areas after an earthquake. The researchers tested the method by estimating the mass and softness of objects, but it could in principle recover other properties, such as the moment of inertia or the viscosity of a fluid inside a container. And unlike approaches that depend on computer vision, the technique does not require extensive training datasets, so it is less likely to fail when encountering new or unseen objects and environments.
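The identification loop described above can be sketched with a toy one-joint arm. This is a minimal illustration, not the authors' Warp-based implementation: the pendulum dynamics, the Gauss-Newton update, and every parameter value below are assumptions, and the trajectory's sensitivity to mass is approximated with finite differences where a differentiable simulator would supply exact gradients.

```python
import numpy as np

def simulate(mass, tau=1.0, arm_len=0.5, arm_inertia=0.1,
             damping=0.05, g=9.81, dt=0.01, steps=100):
    """Toy 1-DOF arm swinging up an unknown point mass at its tip.

    Returns the joint-angle trajectory a joint encoder would record.
    (Stands in for the full simulation; all dynamics and parameter
    values here are illustrative assumptions, not the paper's model.)
    """
    theta, omega = 0.0, 0.0                      # start hanging straight down
    inertia = arm_inertia + mass * arm_len**2    # arm plus held object
    traj = np.empty(steps)
    for i in range(steps):
        # Constant motor torque minus gravity on the held mass and damping.
        torque = tau - mass * g * arm_len * np.sin(theta) - damping * omega
        omega += (torque / inertia) * dt         # semi-implicit Euler (stable)
        theta += omega * dt
        traj[i] = theta
    return traj

def identify_mass(measured, guess=0.3, iters=30, eps=1e-5):
    """Fit the mass so the simulated joint angles match the measured ones.

    A differentiable simulator would give d(trajectory)/d(mass) exactly;
    here we approximate that sensitivity with finite differences and take
    damped Gauss-Newton steps on the trajectory-matching objective.
    """
    m = guess
    for _ in range(iters):
        sim = simulate(m)
        jac = (simulate(m + eps) - sim) / eps    # trajectory sensitivity to mass
        step = jac.dot(measured - sim) / jac.dot(jac)
        m = max(m + float(np.clip(step, -0.1, 0.1)), 1e-3)  # keep steps tame, mass positive
    return m

# "Measured" encoder data from an arm holding a 0.8 kg object:
measured = simulate(0.8)
estimated = identify_mass(measured)
```

The same structure carries over to richer settings: only the simulator and the sensitivity computation change, while the match-the-encoders loop stays the same.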
Peter Yichen Chen, an MIT postdoc and lead author of the paper, envisions a future in which robots autonomously explore their surroundings and identify the properties of everything they encounter. He stresses that this is just a beginning, and foresees extending the method to more complex robotic systems, such as soft robots, and to objects with intricate dynamics, like sloshing liquids or granular media.

Chen and his co-authors, who include MIT postdocs Chao Liu and Pingchuan Ma, colleagues from Amazon Robotics, and MIT professors Daniela Rus and Wojciech Matusik, aim to combine the approach with computer vision into a multimodal sensing technique, further enhancing a robot's capabilities. They also plan to apply it to robot learning, enabling robots to quickly develop new manipulation skills and adapt to changes in their environment.

The research, which shows that robots can accurately infer properties like mass and softness using only internal sensing, will be presented at the International Conference on Robotics and Automation. Industry observers have praised the work for addressing a long-standing challenge in robotics: according to Miles Macklin, senior director of simulation technology at NVIDIA, determining physical properties from limited or noisy measurements has long been difficult, and this work demonstrates that robots can achieve accurate inference using only their internal joint sensors, making it a robust and efficient solution.

The potential applications are broad, from industrial automation to disaster response. MIT, known for its leadership in robotics and artificial intelligence, continues to push the boundaries of what robots can do autonomously.
This research exemplifies the institution's commitment to advancing robotics through innovative and practical solutions.
