Based on its appearance, you’d be forgiven for thinking that Baxter the Robot is nothing special: it has a pair of mechanical arms and a child-like expression on a face that looks a bit like an Etch A Sketch. But this unassuming machine could radically change the way AI systems learn from their mistakes.
Developed by Rethink Robotics, Baxter is an industrial robot designed to perform a variety of tasks, such as line loading, machine tending, packaging and material handling. However, it is the method it uses to correct errors that makes this innocent-looking machine special: it can effectively read a person’s mind to determine whether it has done something wrong.
As detailed in research published online earlier this week, Baxter uses a system created by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University that lets it scan the brain activity of a human collaborator to obtain feedback on its performance, allowing it to correct errors without any additional physical input.
On Monday, Wired explained how the technology works using an example. Baxter picked up a can of spray paint and moved to place it in a box marked “wire.” However, it was able to detect its error by reading the brainwaves of a nearby human colleague, and it adjusted its robotic arm to place the can in a box marked “paint” instead. Upon receiving positive feedback for its performance, the robot’s expression changed to a grin.
The system detects a particular signal in a person’s brain when he or she sees the robot make a mistake, and communicates that information to Baxter, which then works to fix its error. In effect, the human is telepathically scolding the machine for making a mistake, the website noted.
Technology could improve factory robots, driverless cars
In actuality, the system is a little more complicated than that. As the study authors explained in their newly-published paper, the technology uses EEG-measured error-related potentials (ErrPs) to provide feedback to the machine, since they occur naturally when an unexpected mistake has been made and could be applied to a closed-loop robotic control system.
“We decode ErrP signals from a human operator in real time to control a Rethink Robotics Baxter robot during a binary object selection task,” they wrote. “We also show that utilizing a secondary interactive error-related potential signal generated during this closed-loop robot task can greatly improve classification performance, suggesting new ways in which robots can acquire human feedback.”
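The control loop the researchers describe can be illustrated with a minimal sketch. The real system classifies ErrPs from multichannel EEG features in real time; the version below is a toy stand-in that uses a simple amplitude threshold in place of a trained classifier, and the function names, threshold value, and box labels are all hypothetical choices for illustration, not details from the paper.

```python
# Toy sketch of ErrP-driven closed-loop binary selection.
# Assumption: a single scalar "error signal" per EEG window stands in for
# the real-time ErrP classifier described in the paper.

ERRP_THRESHOLD = 0.5  # hypothetical decision threshold


def detect_errp(eeg_window):
    """Return True if the window's mean amplitude suggests an
    error-related potential (here: a crude threshold test)."""
    return sum(eeg_window) / len(eeg_window) > ERRP_THRESHOLD


def closed_loop_select(initial_choice, eeg_window, options=("paint", "wire")):
    """Binary object selection: if the observer's EEG shows an ErrP,
    the robot flips its choice to the other option; otherwise it
    keeps its initial choice."""
    if detect_errp(eeg_window):
        return options[1] if initial_choice == options[0] else options[0]
    return initial_choice


# Example: robot heads for the "wire" box, observer's brain signals an error,
# so the loop corrects the choice to "paint".
corrected = closed_loop_select("wire", [0.8, 0.9, 0.7])
```

The point of the sketch is the loop structure, not the detector: the robot commits to an action, the human merely watches, and a naturally occurring brain signal closes the feedback loop without any deliberate command.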
As part of their research, they conducted a series of experiments involving participants who had no special training in robotics, but who were nonetheless able to use their ErrP signals to correct Baxter when it made a mistake. The outcome “demonstrates the potential for EEG-based feedback methods to facilitate seamless robotic control” and “moves closer towards the goal of real-time intuitive interaction,” the researchers added.
Study co-author and MIT robotics expert Daniela Rus told Wired that the new technique offers a more natural way of controlling machines, as the robot adapts to what its human controller wants it to do based solely on his or her brainwaves. The technology could eliminate the need to type a series of commands or give lengthy verbal instructions to a mechanical worker.
“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing. You don’t have to train yourself to think in a certain way – the machine adapts to you, and not the other way around,” Rus added in a statement. “Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word. A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars, and other technologies we haven’t even invented yet.”
Image credit: MIT CSAIL