
Crowdsourcing Leads To Faster Learning Among Robots

June 27, 2014
Image Caption: The UW’s robot builds a turtle model. Credit: U of Washington

Brett Smith for redOrbit.com – Your Universe Online

As sites such as Yahoo! Answers and Wikipedia have demonstrated, crowdsourcing can be a useful way to amass knowledge or find the answer to a difficult question.

Now, researchers at the University of Washington have shown that robots can also use this phenomenon to learn how to complete a task, according to a new study being presented at the 2014 Institute of Electrical and Electronics Engineers International Conference on Robotics and Automation.

“We’re trying to create a method for a robot to seek help from the whole world when it’s puzzled by something,” said study author Rajesh Rao, an associate professor of computer science and engineering at the UW. “This is a way to go beyond just one-on-one interaction between a human and a robot by also learning from other humans around the world.”

Engineers have developed robots that can learn a skill from a human teacher, but the process requires constant repetition. With crowdsourcing, the same robot could learn a skill much faster once it has grasped the basic concept, the study researchers said.

“Because our robots use machine-learning techniques, they require a lot of data to build accurate models of the task. The more data they have, the better model they can build. Our solution is to get that data from crowdsourcing,” said Maya Cakmak, a UW assistant professor of computer science and engineering.

To develop their crowdsourcing technique, the researchers began by asking volunteers to build a simple two-dimensional object, such as a car, tree or snake, out of plastic bricks. Next, the team asked the robot to build a comparable object. However, from the few examples supplied by the volunteers, the robot was not able to build complete models.

To teach the robot how to properly build the replicas, the researchers then recruited people on Amazon Mechanical Turk, a crowdsourcing website, to build similar models of a car, tree, turtle, snake and other shapes. From more than 100 crowd-generated replicas of each shape, the robot selected the best ones to construct, judging them by how difficult they were to build, how closely they resembled the original and how the online community rated them, and then built the top-ranked models.
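One way to picture that selection step is as a weighted ranking over the crowd-submitted models. The article does not give the study's actual criteria weights or data format, so the scoring function, weights and field names below are illustrative assumptions, not the UW system's implementation.

```python
# Illustrative sketch of ranking crowd-sourced models; the weights and
# field names are assumptions, not the study's actual implementation.
from dataclasses import dataclass

@dataclass
class CandidateModel:
    name: str
    build_difficulty: float   # 0 (trivial for the robot) .. 1 (very hard)
    resemblance: float        # 0 .. 1 similarity to the volunteer's original
    crowd_rating: float       # 0 .. 1 normalized rating from online raters

def score(model: CandidateModel,
          w_difficulty: float = 0.4,
          w_resemblance: float = 0.3,
          w_rating: float = 0.3) -> float:
    """Higher is better: easy to build, looks like the original, well rated."""
    return (w_difficulty * (1.0 - model.build_difficulty)
            + w_resemblance * model.resemblance
            + w_rating * model.crowd_rating)

def pick_best(candidates: list[CandidateModel]) -> CandidateModel:
    """Return the candidate the robot would attempt to build."""
    return max(candidates, key=score)

if __name__ == "__main__":
    turtles = [
        CandidateModel("turtle_A", build_difficulty=0.8, resemblance=0.9, crowd_rating=0.7),
        CandidateModel("turtle_B", build_difficulty=0.3, resemblance=0.7, crowd_rating=0.8),
    ]
    print(pick_best(turtles).name)  # turtle_B: simpler and still recognizable
```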

The researchers said this type of learning, called “goal-based imitation,” allows a robot to “watch” a person construct a particular model, infer the critical qualities the model must have, and then build a model that looks like the original but may well be simpler, making it easier for the robot to construct.
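A rough way to think about goal-based imitation in code is: keep the properties that define the goal, then pick the simplest buildable model that still satisfies them. The property names and the simplification logic below are hypothetical, offered only as a sketch of the general idea rather than the UW system's actual pipeline.

```python
# Hypothetical sketch of goal-based imitation: keep the goal-defining
# properties of the demonstrated model, drop everything else.
from typing import Callable

# A "model" here is just a set of named properties the robot can check,
# e.g. {"has_shell", "has_head", "has_four_legs", "decorative_pattern"}.
Model = frozenset

def infer_goal(demonstration: Model, is_critical: Callable[[str], bool]) -> Model:
    """Keep only the properties judged critical to the goal (e.g. 'has_shell')."""
    return Model(p for p in demonstration if is_critical(p))

def simplest_buildable(goal: Model, candidates: list[Model]) -> Model:
    """Among candidates that satisfy the goal, pick the one with the fewest parts."""
    feasible = [c for c in candidates if goal <= c]
    return min(feasible, key=len)

if __name__ == "__main__":
    demo = Model({"has_shell", "has_head", "has_four_legs", "decorative_pattern"})
    goal = infer_goal(demo, is_critical=lambda p: p != "decorative_pattern")
    candidates = [
        Model({"has_shell", "has_head", "has_four_legs"}),                       # simple turtle
        Model({"has_shell", "has_head", "has_four_legs", "decorative_pattern"})  # ornate turtle
    ]
    print(sorted(simplest_buildable(goal, candidates)))  # the simpler turtle wins
```

The end result, as Cakmak notes below, is still recognizably a turtle, just one the robot can actually assemble.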

“The end result is still a turtle, but it’s something that is manageable for the robot and similar enough to the original model, so it achieves the same goal,” Cakmak explained.

The researchers said participants frequently favored the crowdsourced versions that most resembled their original designs. In general, the robot’s final models were simpler than the original designs, and the robot could build them successfully, which was not always the case when it started from the study participants’ original designs.

The researchers said they are now looking into teaching robots more complicated tasks, such as locating and fetching objects in a multi-story building.
