MIT Develops Unprecedented System That Lets Robots Work Together
redOrbit Staff & Wire Reports – Your Universe Online
Researchers at MIT have developed a new system that combines simple control programs to enable fleets of robots or other “multiagent systems” to collaborate in extraordinary ways.
Due to the difficulty of designing control programs for “multiagent systems” – groups of robots or networks of devices with different functions – engineers have typically restricted themselves to scenarios in which reliable information about the environment can be presumed, or to relatively simple collaborative tasks that can be clearly specified in advance.
However, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will soon present a new system that stitches existing control programs together to allow multiagent systems to collaborate in much more sophisticated ways.
The engineers, who will present their system at the International Conference on Autonomous Agents and Multiagent Systems in May, said their work focuses on mitigating uncertainty. This could include, for example, factoring in the odds that a particular communications link might drop, or that an algorithm might inadvertently steer a robot into a dead end. The new system accounts for this uncertainty and automatically works around such failures.
For small collaborative tasks, the system can guarantee that its combination of programs is optimal, and will yield the best possible results given the uncertainty of the environment and the limitations of the programs themselves.
The researchers said they are currently testing their system in a simulation of a warehousing application, where small groups of iRobot Creates – programmable robots that have the same chassis as the Roomba vacuum cleaner – are required to retrieve arbitrary objects from indeterminate locations, collaborating as needed to transport heavy loads.
“In [multiagent] systems, in general, in the real world, it’s very hard for them to communicate effectively,” said Christopher Amato, a postdoc in CSAIL and first author of a paper about the work.
“If you have a camera, it’s impossible for the camera to be constantly streaming all of its information to all the other cameras. Similarly, robots are on networks that are imperfect, so it takes some amount of time to get messages to other robots, and maybe they can’t communicate in certain situations around obstacles,” he said in a statement.
An agent may not even have perfect information about its own location, such as which aisle of the warehouse it is actually in, Amato added.
“When you try to make a decision, there’s some uncertainty about how that’s going to unfold,” he said.
“Maybe you try to move in a certain direction, and there’s wind or wheel slippage, or there’s uncertainty across networks due to packet loss. So in these real-world domains with all this communication noise and uncertainty about what’s happening, it’s hard to make decisions.”
The new MIT system, which Amato developed alongside co-authors Leslie Kaelbling, professor of Computer Science and Engineering at MIT, and fellow postdoc George Konidaris, takes three inputs. One is a set of low-level control algorithms, or “macro-actions,” which control agents’ behaviors collectively or individually. The second is a set of statistics about those programs’ execution in a particular environment, while the third is a structure for valuing different outcomes.
The system takes these inputs and decides how best to combine the macro-actions to maximize the system’s value function. It might use all the macro-actions, or only a tiny subset, and perhaps in ways that a human designer would never have imagined.
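The idea of searching over combinations of macro-actions to maximize an expected value can be sketched with a toy example. Everything below is an illustrative assumption, not the authors’ algorithm: the macro-action names, success probabilities, and rewards are invented, and the brute-force search stands in for the paper’s far more sophisticated planner.

```python
import itertools

# Toy model: each macro-action has an assumed success probability and a
# reward contribution. These numbers are illustrative, not from the paper.
MACRO_ACTIONS = {
    "fetch_alone":   {"p_success": 0.6,  "reward": 4.0},
    "fetch_in_pair": {"p_success": 0.9,  "reward": 6.0},
    "signal_light":  {"p_success": 0.95, "reward": 1.0},
    "wait":          {"p_success": 1.0,  "reward": 0.0},
}

def expected_value(plan):
    """Expected total reward of a macro-action sequence, assuming the
    plan stops at the first failed macro-action."""
    total, p_reach = 0.0, 1.0
    for name in plan:
        action = MACRO_ACTIONS[name]
        total += p_reach * action["p_success"] * action["reward"]
        p_reach *= action["p_success"]
    return total

def best_plan(length=2):
    """Brute-force search over all macro-action sequences of a given
    length -- feasible only for tiny problems, which mirrors the paper's
    'small collaborative tasks' caveat about optimality guarantees."""
    candidates = itertools.product(MACRO_ACTIONS, repeat=length)
    return max(candidates, key=expected_value)

plan = best_plan()
print(plan, expected_value(plan))
# -> ('fetch_in_pair', 'fetch_in_pair') 10.26
```

Note that the search may settle on a plan a designer would not have hand-coded, echoing the article’s point that the system can combine macro-actions “in ways that a human designer would never have imagined.”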
The researchers gave an example of a scenario in which each robot has a small bank of colored lights that it can use to communicate with its counterparts if a wireless link goes down.
“What typically happens is, the programmer decides that red light means go to this room and help somebody, green light means go to that room and help somebody,” Amato explained. “In our case, we can just say that there are three lights, and the algorithm spits out whether or not to use them and what each color means.”
The researchers’ work frames the problem of multiagent control as something known as a partially observable Markov decision process, or POMDP.
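The “partially observable” part of a POMDP means the agent reasons over a probability distribution (a “belief”) about hidden state rather than the state itself. A minimal sketch, assuming a two-state warehouse version of Amato’s aisle example with an invented 80%-accurate sensor, shows the Bayes-rule belief update at the core of any POMDP:

```python
# Two hidden states: the robot is in aisle A or aisle B, but it cannot
# observe its aisle directly. All probabilities here are illustrative.

# P(observation | state): a noisy sensor that is right 80% of the time.
OBS_MODEL = {
    "A": {"see_A": 0.8, "see_B": 0.2},
    "B": {"see_A": 0.2, "see_B": 0.8},
}

def update_belief(belief, observation):
    """Bayes rule: new belief is proportional to
    P(observation | state) * prior belief, then renormalized."""
    unnormalized = {s: OBS_MODEL[s][observation] * p for s, p in belief.items()}
    z = sum(unnormalized.values())
    return {s: p / z for s, p in unnormalized.items()}

belief = {"A": 0.5, "B": 0.5}          # no idea which aisle we are in
belief = update_belief(belief, "see_A")
print(belief)                           # -> {'A': 0.8, 'B': 0.2}
```

In the decentralized (Dec-POMDP) setting each robot maintains its own belief without seeing the others’ observations, which is what drives the computational explosion Ayanian describes below.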
“POMDPs, and especially Dec-POMDPs, which are the decentralized version, are basically intractable for real multirobot problems because they’re so complex and computationally expensive to solve that they just explode when you increase the number of robots,” said multirobot systems expert Nora Ayanian, an assistant professor of computer science at the University of Southern California.
“So they’re not really very popular in the multirobot world,” she added.
“Normally, when you’re using these Dec-POMDPs, you work at a very low level of granularity,” said Ayanian in a statement. “The interesting thing about this paper is that they take these very complex tools and kind of decrease the resolution. This will definitely get these POMDPs on the radar of multirobot-systems people.”
“It’s something that really makes it way more capable to be applied to complex problems,” she concluded.