
Researchers Give Robots The Gift Of Navigation

February 16, 2012

MIT researchers have developed a new system that could one day allow robots to navigate any environment or terrain without input from humans.

The algorithm developed by the team allows robots to continuously map their 3D environment by using a low-cost camera.

The system developed by the researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) could also allow blind people to work their way through buildings like hospitals and shopping malls without assistance.

Maurice Fallon, a research scientist at CSAIL who is helping to develop these systems, said that in order for a robot to explore unknown environments, it must be able to map them as it moves around.

“If you see objects that were not there previously, it is difficult for a robot to incorporate that into its map,” Fallon told Helen Knight, news correspondent for MIT.

The new algorithm allows robots to constantly update a map as they learn new information.
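The idea of continuously folding new observations into a map can be illustrated with a standard occupancy-grid update. This is only a textbook-style sketch of the general technique, not the researchers' actual algorithm; the parameter values are arbitrary.

```python
import numpy as np

def update_map(log_odds_map, scan_cells, occupied, l_occ=0.85, l_free=-0.4):
    """Update an occupancy grid (in log-odds form) with one sensor scan.

    Cells observed as occupied gain evidence; cells observed as free lose
    it, so objects that appear or disappear are absorbed into the map over
    repeated scans rather than requiring a rebuild from scratch.
    """
    for (r, c), occ in zip(scan_cells, occupied):
        log_odds_map[r, c] += l_occ if occ else l_free
    return log_odds_map

# A 5x5 map, all unknown (log-odds 0 corresponds to probability 0.5).
grid = np.zeros((5, 5))
# One scan reports cell (2, 2) occupied and cell (2, 1) free.
grid = update_map(grid, [(2, 2), (2, 1)], [True, False])
```

Repeating the update with fresh scans lets the map converge toward the current state of the environment, which is what allows previously unseen objects to be incorporated.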

The team tested the approach on robots equipped with expensive laser scanners in previous research, but has since shown that the system can be used with a low-cost camera.

They incorporated the sensor from Microsoft’s Kinect controller into a robot, using its visible-light video camera and infrared depth sensor to scan the surroundings.

The robot then builds a 3D model of the walls of the room and the objects within it. When it passes through the same area again, the system compares the features of the new image against those of all the previous images it has taken until it finds a match.
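The matching step described above can be sketched as a nearest-neighbor search over image feature descriptors. This is a simplified illustration under assumed data: real systems would use detectors such as ORB or SIFT plus geometric verification, and the descriptor arrays here are synthetic.

```python
import numpy as np

def best_matching_keyframe(new_desc, keyframe_descs):
    """Return the index of the stored keyframe whose feature descriptors
    are closest to those of the new frame.

    Each argument is a 2-D array of descriptors (one per row); the score
    is the mean distance from each new descriptor to its nearest neighbor
    in the keyframe.
    """
    best_idx, best_score = -1, float("inf")
    for i, kf in enumerate(keyframe_descs):
        # pairwise distances: new descriptors x keyframe descriptors
        d = np.linalg.norm(new_desc[:, None, :] - kf[None, :, :], axis=2)
        score = d.min(axis=1).mean()
        if score < best_score:
            best_idx, best_score = i, score
    return best_idx

rng = np.random.default_rng(0)
kf_a = rng.normal(0.0, 1.0, (10, 8))          # descriptors from one place
kf_b = rng.normal(5.0, 1.0, (10, 8))          # descriptors from another
new = kf_b + rng.normal(0.0, 0.05, kf_b.shape)  # a revisit of the second place
idx = best_matching_keyframe(new, [kf_a, kf_b])
```

Finding such a match is what lets the robot recognize that it has returned to a previously mapped area.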

The system also estimates the robot’s motion, using on-board sensors that measure the distance its wheels have rotated.
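Estimating motion from wheel rotation is classic dead reckoning. The sketch below shows the standard differential-drive odometry update from two wheel encoder distances; it is a generic textbook formulation, not the researchers' implementation, and the wheel-base value is made up.

```python
import math

def wheel_odometry(pose, d_left, d_right, wheel_base):
    """Dead-reckoning pose update for a differential-drive robot.

    pose = (x, y, heading in radians); d_left and d_right are the
    distances each wheel has rolled since the last update. Assumes the
    motion within one short time step is approximately an arc.
    """
    x, y, theta = pose
    d = (d_left + d_right) / 2.0                # distance of robot center
    d_theta = (d_right - d_left) / wheel_base   # change in heading
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

pose = (0.0, 0.0, 0.0)
pose = wheel_odometry(pose, 1.0, 1.0, 0.5)  # both wheels roll 1 m: straight ahead
```

Because encoder-based estimates drift over time, they are useful mainly between the camera-based corrections the article describes.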

Once the system combines all the data, and determines where within the building the robot is positioned, it can navigate around any new features that have appeared since the previous picture was taken.
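Combining the drifting odometry estimate with the pose implied by a map match can be illustrated with a crude weighted blend. A real SLAM system would use probabilistic fusion (e.g., a Kalman or particle filter); this sketch only conveys the idea, and `match_weight` is an arbitrary trust level.

```python
def fuse_estimates(odom_pose, match_pose, match_weight=0.3):
    """Blend the dead-reckoned pose with the pose implied by a map match.

    Each pose is an (x, y, heading) tuple; the map match pulls the
    estimate back toward the previously built map, correcting drift.
    """
    return tuple((1 - match_weight) * o + match_weight * m
                 for o, m in zip(odom_pose, match_pose))

# Odometry has drifted 0.5 m in x; the map match corrects part of it.
fused = fuse_estimates((10.5, 2.0, 0.1), (10.0, 2.0, 0.1))
```

With a corrected position in hand, the robot can plan a path around any features that were not present in the earlier picture.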

The team tested the system on the PR2, a robot developed by Willow Garage.

They found that the PR2 was able to locate itself within a 3D map of its surroundings while traveling at up to 5 feet per second.

Fallon said the algorithm could allow robots to travel around office or hospital buildings, planning their own routes with little or no input from humans.

Seth Teller, head of the Robotics, Vision and Sensor Networks group at CSAIL, said the system could be used as a wearable visual aid for blind people as well.

“There are also a lot of military applications, like mapping a bunker or cave network to enable a quick exit or re-entry when needed,” he said. “Or a HazMat team could enter a biological or chemical weapons site and quickly map it on foot, while marking any hazardous spots or objects for handling by a remediation team coming later.”

Radu Rusu, a research scientist at Willow Garage who was not involved in this project, said the algorithm opens up plenty of possibilities.

“This opens up exciting new possibilities in robot research and engineering, as the old-school ‘flatland’ assumption that the scientific community has been using for many years is fundamentally flawed,” he said.

Image Caption: The researchers used a PR2 robot, developed by Willow Garage, with Microsoft’s Kinect sensor to test their system. Image: Hordur Johannsson

Source: RedOrbit Staff & Wire Reports


