Using Casual Voice Commands To Program Robots
Gerard LeBlond for redorbit.com – Your Universe Online
Robots today can perform a variety of tasks, but they still have to be programmed to do them. The technology behind these robots has progressed to the point of understanding voice commands. However, if the instructor leaves out a key instruction, the robot may fail to complete the task.
This could all change in the near future.
Ashutosh Saxena, an assistant professor of computer science at Cornell University, is teaching robots to understand natural language. His robots are also programmed to account for missing information and adapt in order to complete a task. Saxena, along with graduate students Dipendra K. Misra and Jaeyong Sung, will describe the new methods at the Robotics: Science and Systems conference at the University of California, Berkeley, July 12-16.
The robot’s programming language has typical commands such as find (pan); grasp (pan); carry (pan, water tap); fill (pan, water); and so on. The robot’s software translates a human sentence, such as “Fill a pan with water, put it on the stove, heat the water. When it’s boiling, add the noodles,” into that command language. Notice that the instructor never said to turn on the stove. With the software, the robot can fill in the missing step and complete the task correctly.
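To illustrate the idea, here is a minimal sketch of mapping spoken phrases to primitive commands and filling in an unstated prerequisite. All the names and mappings here are hypothetical examples, not the actual Cornell system:

```python
# Hypothetical mapping from spoken phrases to primitive robot commands.
PHRASE_TO_COMMANDS = {
    "fill a pan with water": [
        "find(pan)", "grasp(pan)", "carry(pan, water_tap)", "fill(pan, water)",
    ],
    "put it on the stove": ["carry(pan, stove)", "place(pan, stove)"],
    "heat the water": ["heat(pan)"],
}

# Implicit prerequisites the instructor may leave unsaid:
# heating a pan requires the stove to be turned on first.
PREREQUISITES = {"heat(pan)": "turn_on(stove)"}

def parse_instruction(sentence):
    """Translate a sentence into primitives, inserting missing prerequisites."""
    plan = []
    for phrase, commands in PHRASE_TO_COMMANDS.items():
        if phrase in sentence.lower():
            for cmd in commands:
                prereq = PREREQUISITES.get(cmd)
                if prereq and prereq not in plan:
                    plan.append(prereq)  # fill in the unstated step
                plan.append(cmd)
    return plan

plan = parse_instruction("Fill a pan with water, put it on the stove, heat the water.")
print(plan)
```

Running this on the example sentence yields a plan that includes turn_on(stove) before heat(pan), even though the instructor never mentioned it.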
The robot is equipped with a 3D camera, and using vision software developed in Saxena’s lab, it scans the environment and identifies the objects in its surroundings. The robot has also been programmed to associate objects with their capabilities: a pan can be poured into or poured from, and a stove can heat objects placed on it. It can locate a pan, find a faucet, fill the pan, and set the pan on the stove; if commanded to heat the water, it can find a stove or microwave to do so. Even if the pan or the robot is in a different location the next day, it can still perform the same task.
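Associating objects with capabilities can be sketched as a simple lookup: given the objects the camera detects, select any object that affords the needed action. The object names and capability labels below are illustrative assumptions, not the lab's actual representation:

```python
# Hypothetical affordance table: what each object can do.
AFFORDANCES = {
    "pan": {"pour-into", "pour-from", "contain"},
    "stove": {"heat"},
    "microwave": {"heat"},
    "faucet": {"dispense-water"},
}

def objects_with(capability, detected):
    """Return detected objects that afford the required capability."""
    return [obj for obj in detected if capability in AFFORDANCES.get(obj, set())]

# The robot re-scans each time, so a changed scene just changes `detected`.
print(objects_with("heat", ["pan", "microwave", "faucet"]))
```

Because the lookup runs against whatever the camera currently sees, the same task works even when the pan or the robot has moved since yesterday.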
Previous attempts to solve the missing-command problem relied on a set of templates for common actions, matched one word at a time. The new research uses machine learning to train the robot’s computer brain to associate entire commands with flexibly defined actions. The computer is fed animated video simulations of the actions, created by humans in a manner similar to playing a video game, along with recorded voice commands from several different speakers.
The computer stores combinations of similar commands and matches them to a variety of outcomes. So when the robot hears “Take the pot to the stove,” “Carry the pot to the stove,” “Put the pot on the stove,” “Go to the stove and heat the pot,” and so on, it compares the command to what it has heard before and picks the most probable match. Once a match is found, the video supplies a plan of action, and the robot can find the sink and the stove and carry the pot from one to the other as in the recorded action.
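The matching step can be sketched with plain string similarity: compare the heard command against stored phrases and take the closest one. The real system uses a learned probabilistic model rather than the simple `difflib` similarity below, and the stored phrases here are invented examples:

```python
from difflib import SequenceMatcher

# Hypothetical store of previously heard phrases and their actions.
KNOWN_COMMANDS = {
    "take the pot to the stove": "carry(pot, stove)",
    "fill the pan with water": "fill(pan, water)",
    "add the noodles to the pot": "add(noodles, pot)",
}

def best_match(heard):
    """Return the stored action whose trigger phrase best matches `heard`."""
    score = lambda known: SequenceMatcher(None, heard.lower(), known).ratio()
    closest = max(KNOWN_COMMANDS, key=score)
    return KNOWN_COMMANDS[closest]

print(best_match("Carry the pot to the stove"))
```

A paraphrase like “Carry the pot to the stove” scores closest to “take the pot to the stove,” so the robot retrieves the same recorded plan of action.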
The robot was tested on preparing ramen noodles and making affogato, an Italian dessert combining coffee and ice cream. The command was: “Take some coffee in a cup. Add ice cream of your choice. Finally, add raspberry syrup to the mixture.”
The robot performed the tasks correctly 64 percent of the time, even when the commands were varied or the environment was different, and it was able to account for missing steps. The success rate was three to four times better than previous methods, but the researchers said, “There is still room for improvement.”
On the “Tell Me Dave” website, you can teach a simulated robot to perform kitchen tasks, and your input will feed a crowdsourced library of instructions for the university’s robots. “With crowdsourcing at such a scale, robots will learn at a much faster rate,” Saxena said.
Visiting researcher Aditya Jami is helping “Tell Me Dave” scale its library of commands to millions of examples.