
Newly Developed Algorithm Allows Robots To Better Collaborate

June 25, 2014
Image Credit: maxuser/Thinkstock.com

Brett Smith for redOrbit.com – Your Universe Online

In the not-too-distant future, robots will work together as teams to gather information on a subject and collectively analyze that information to gain insight or generate a computational model.

Now, researchers at MIT have developed a novel algorithm in which dispersed robots or computers gather data and analyze it independently. Pairs of robots or computers could exchange analyses and build a more robust joint analysis.

According to a study to be presented at the Uncertainty in Artificial Intelligence conference in Quebec City this July, the MIT algorithm outperformed a standard algorithm that works on data collected at a single location.

“A single computer has a very difficult optimization problem to solve in order to learn a model from a single giant batch of data, and it can get stuck at bad solutions,” said study author Trevor Campbell, a graduate student in aeronautics and astronautics at MIT. “If smaller chunks of data are first processed by individual robots and then combined, the final model is less likely to get stuck at a bad solution.”

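To make the chunk-then-combine idea in that quote concrete, here is a minimal Python sketch. It uses a toy problem (estimating a mean from data split across three "robots") rather than the models in the MIT paper; the point is only that each machine can reduce its own chunk to a small summary, and that the summaries can later be merged into one answer instead of forcing a single computer to process the whole batch.

```python
import numpy as np

# Toy illustration (not the paper's model): each robot summarizes its own
# chunk of data, and the summaries are later combined into one estimate.

def local_summary(chunk):
    """Each robot reduces its chunk to simple sufficient statistics."""
    return {"n": len(chunk), "sum": float(np.sum(chunk))}

def combine(summaries):
    """Merging the per-robot summaries recovers the estimate a single
    machine would have computed on the full batch."""
    n = sum(s["n"] for s in summaries)
    total = sum(s["sum"] for s in summaries)
    return total / n

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=9000)
chunks = np.array_split(data, 3)            # three robots, three chunks

summaries = [local_summary(c) for c in chunks]
print(combine(summaries))                    # ~3.0, same as data.mean()
```
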
The study team illustrated their solution with the example of a team of robots exploring and learning about an office building. As they move from room to room, each robot builds its own catalog of room types and their contents. However, discrepancies are likely to creep in along the way. For example, one robot might enter a kitchen while the coffeemaker is hidden by an open refrigerator door and omit coffeemakers from its inventory of kitchen items.

When two robots in this system meet, they would ideally compare notes to check for mistakes and omissions, yet this can also lead to problems. For example, one robot may have deduced that sinks and pedal-operated trashcans are distinguishing features of restrooms, while a different robot may have concluded that they are distinguishing features of kitchens.

In the new MIT algorithm, when two robots meet they compare their lists and merge them into a single, consistent joint model, which both robots then carry forward. Each robot does this again with every new encounter, slowly building a more and more accurate model.

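That encounter-by-encounter merging might look roughly like the sketch below, using the office-building example. The dictionary layout and the union-based merge rule are illustrative assumptions for this story's example, not the actual procedure from the paper.

```python
# Hypothetical sketch of pairwise merging in the office-building example:
# each robot keeps a room-type -> set-of-items inventory, and two robots
# that meet combine their inventories so both leave with the richer model.

def merge_inventories(a, b):
    merged = {}
    for room in set(a) | set(b):
        merged[room] = a.get(room, set()) | b.get(room, set())
    return merged

robot_1 = {"kitchen": {"sink", "refrigerator"},       # missed the coffeemaker
           "restroom": {"sink", "pedal trashcan"}}
robot_2 = {"kitchen": {"sink", "coffeemaker"}}

robot_1 = merge_inventories(robot_1, robot_2)
robot_2 = dict(robot_1)                 # both robots adopt the combined model
print(robot_1["kitchen"])               # {'sink', 'refrigerator', 'coffeemaker'}
```
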
The highly complex algorithm behind this process is laid out in detail by the MIT team in their paper titled, “Approximate Decentralized Bayesian Inference.”

“The way that computer systems learn these complex models these days is that you postulate a simpler model and then use it to approximate what you would get if you were able to deal with all the crazy nuances and complexities,” Campbell said. “What our algorithm does is sort of artificially reintroduce structure, after you’ve solved that easier problem, and then use that artificial structure to combine the models properly.”

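For a rough sense of what "combining the models properly" can mean, the sketch below merges per-node approximate Gaussian posteriors expressed in natural-parameter form. The combination rule shown (the shared prior plus each node's update to it) is a common heuristic in decentralized variational Bayes and is offered here for intuition only; it is not taken verbatim from the MIT paper.

```python
import numpy as np

# Hedged sketch: combining per-node approximate Gaussian posteriors over a
# shared scalar parameter. Each node reports its posterior as natural
# parameters (precision, precision * mean).

def to_natural(mean, var):
    prec = 1.0 / var
    return np.array([prec, prec * mean])

def from_natural(eta):
    prec, prec_mean = eta
    return prec_mean / prec, 1.0 / prec          # (mean, variance)

prior = to_natural(mean=0.0, var=10.0)

# Posteriors each node computed from its own chunk of data (illustrative numbers).
node_posteriors = [to_natural(2.8, 0.02), to_natural(3.1, 0.03), to_natural(2.9, 0.025)]

# Combine: start from the prior and add each node's update to it.
combined = prior + sum(eta - prior for eta in node_posteriors)
print(from_natural(combined))    # pooled mean with a smaller pooled variance
```
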
The new algorithm could also be used by dispersed servers independently analyzing millions of documents. These analyses would be combined to form a more comprehensive model.

“Distributed computing will play a critical role in the deployment of multiple autonomous agents, such as multiple autonomous land and airborne vehicles,” said Lawrence Carin, a professor of electrical and computer engineering at Duke University who was not directly involved in the study. “The distributed variational method proposed in this paper is computationally efficient and practical.”




