Researchers at the University of Cambridge have developed an affordable, energy-efficient robotic hand capable of grasping and holding various objects by employing wrist movement and tactile sensation in its ‘skin.’ Instead of having independently moving fingers, the soft, 3D-printed robotic hand performs complex movements using passive motion. Sensors in the hand’s ‘skin’ enable it to predict whether it would drop an object when grasping it in a particular way. This research is published in the journal Advanced Intelligent Systems.
“In earlier experiments, our lab has shown that it’s possible to get a significant range of motion in a robot hand just by moving the wrist,” said co-author Dr. Thomas George-Thuruthel, who is now based at University College London (UCL) East. “We wanted to see whether a robot hand based on passive movement could not only grasp objects, but would be able to predict whether it was going to drop the objects or not, and adapt accordingly.”
The passive movement of the robotic hand makes it simpler to control and more energy-efficient than fully motorized robotic fingers. The adaptable design has potential applications in creating low-cost robots with more natural movements that can grasp a wide array of objects. Recent advances in 3D printing techniques have made it possible to incorporate soft components into robot designs, adding complexity to otherwise simple, energy-efficient systems.
Recreating the complexity and adaptability of the human hand in a robot is a significant research challenge. Most advanced robots today cannot perform manipulation tasks that are simple for young children. For instance, humans naturally know the appropriate force to use when picking up an egg, but it is challenging for a robot to do so without breaking or dropping it. Fully actuated robot hands, with motors in each finger joint, also require substantial energy.
At Cambridge’s Department of Engineering, Professor Fumiya Iida’s Bio-Inspired Robotics Laboratory has been working on a solution: a robotic hand that can grasp various objects with the correct pressure and minimal energy use. The team used a 3D printed anthropomorphic hand embedded with tactile sensors to enable the hand to sense the objects it touched. The hand’s movement was passive and wrist-based.
“This kind of hand has a bit of springiness to it: it can pick things up by itself without any actuation of the fingers,” said first author Dr. Kieran Gilday, who is now based at EPFL in Lausanne, Switzerland. “The tactile sensors give the robot a sense of how well the grip is going, so it knows when it’s starting to slip. This helps it to predict when things will fail.”
“The sensors, which are sort of like the robot’s skin, measure the pressure being applied to the object,” said Dr. George-Thuruthel. “We can’t say exactly what information the robot is getting, but it can theoretically estimate where the object has been grasped and with how much force.”
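The estimate Dr. George-Thuruthel describes can be illustrated with a minimal sketch. The paper does not specify the computation, so the sensor layout, function name, and centroid method here are all illustrative assumptions: a pressure-weighted centroid of the sensor readings gives a rough contact location, and the summed readings a rough total force.

```python
import numpy as np

def estimate_grasp(pressure, positions):
    """Illustrative estimate of where and how hard an object is held.

    pressure  : (N,) readings from N tactile sensors (arbitrary units)
    positions : (N, 2) sensor coordinates on the hand's surface
    Returns (contact_point, total_force): the pressure-weighted centroid
    of the sensor positions, and the summed reading.
    """
    total = pressure.sum()
    if total == 0:
        return None, 0.0  # nothing is being touched
    centroid = (pressure[:, None] * positions).sum(axis=0) / total
    return centroid, float(total)

# Three sensors in a row; the middle one is pressed hardest,
# so the estimated contact point lands on the middle sensor.
positions = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
pressure = np.array([1.0, 4.0, 1.0])
point, force = estimate_grasp(pressure, positions)
```

Here `point` comes out at (1.0, 0.0) and `force` at 6.0: the contact estimate is pulled toward the sensor reporting the highest pressure.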
Over 1200 tests were conducted with the robotic hand to assess its ability to grasp small objects without dropping them. The robot was first trained on small 3D-printed plastic balls, grasping them with pre-defined actions demonstrated by humans. Through trial and error, the robot learned the most effective grip. After training with the balls, the robot attempted to grasp different objects, such as a peach, a computer mouse, and a roll of bubble wrap, successfully grasping 11 of the 14 objects.
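The trial-and-error loop described above can be sketched as follows. This is not the team’s training procedure; the motion labels, trial count, and success-rate scoring are illustrative assumptions standing in for the real hardware experiments.

```python
import random

def learn_grip(wrist_motions, attempt, trials=50):
    """Trial-and-error grip selection: repeat each candidate wrist
    motion, track its empirical success rate, and keep the best one.

    wrist_motions : list of candidate motion labels
    attempt       : attempt(motion) -> True if the object was held
    """
    scores = {m: 0 for m in wrist_motions}
    for m in wrist_motions:
        for _ in range(trials):
            if attempt(m):
                scores[m] += 1
    return max(scores, key=scores.get)

# Toy stand-in for the physical hand: each wrist motion succeeds
# with some fixed (hidden) probability; the learner discovers which.
rates = {"roll": 0.3, "pitch": 0.9, "yaw": 0.5}
random.seed(0)
best = learn_grip(list(rates), lambda m: random.random() < rates[m])
```

After 50 simulated attempts per motion, `best` is the motion with the highest observed success rate ("pitch" in this toy setup).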
“The robot learns that a combination of a particular motion and a particular set of sensor data will lead to failure, which makes it a customizable solution,” said Dr. Gilday. “The hand is very simple, but it can pick up a lot of objects with the same strategy.”
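Dr. Gilday’s point, that the robot associates a particular motion plus a particular set of sensor data with failure, can be illustrated with a simple memory lookup. The actual learning method is not detailed here, so this nearest-match scheme, the scalar sensor summary, and the tolerance value are all illustrative assumptions.

```python
def predict_failure(memory, motion, reading, tol=0.2):
    """Illustrative failure prediction from past experience.

    memory  : list of (motion, reading, failed) triples from past trials
    motion  : the wrist motion about to be executed
    reading : a scalar summary of the current skin-pressure data

    If an earlier attempt with the same motion and a similar reading
    is on record, predict the same outcome it had.
    """
    for past_motion, past_reading, failed in memory:
        if past_motion == motion and abs(past_reading - reading) <= tol:
            return failed
    return False  # no similar experience: assume the grip will hold

# Two remembered trials: one failed grip, one successful one.
memory = [("roll", 0.9, True), ("roll", 0.2, False)]
risky = predict_failure(memory, "roll", 0.85)  # resembles the failure
safe = predict_failure(memory, "roll", 0.25)   # resembles the success
```

Here `risky` is True and `safe` is False: the same wrist motion is flagged or cleared depending on how closely the current sensor data resembles past failures.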
Fully actuated robotic hands are not only energy-intensive but also pose complex control challenges. The Cambridge-designed passive hand, with its limited sensors, is easier to control, offers a broad range of motion, and simplifies the learning process.
“The big advantage of this design is the range of motion we can get without using any actuators,” said Professor Iida. “We want to simplify the hand as much as possible. We can get lots of good information and a high degree of control without any actuators so that when we do add them, we’ll get more complex behavior in a more efficient package.”
Future developments could include adding computer vision capabilities or teaching the robot to utilize its environment, allowing it to grasp an even wider range of objects. This research was funded by UK Research and Innovation (UKRI) and Arm Ltd. Fumiya Iida is a Fellow of Corpus Christi College, Cambridge.