Computer Teaches Itself Common Sense By Analyzing Web Images
November 20, 2013



Peter Suciu for Your Universe Online

Randomly surfing the web to look at pictures probably won’t make anyone smarter, but researchers at Carnegie Mellon University have created a new computer program that can search the web 24 hours a day, seven days a week and from this can teach itself common sense.

The computer program is called the Never Ending Image Learner (NEIL) and it was designed to search for images and do its best to understand these images on its own. This is computationally intensive, and the program runs on two clusters of computers that include 200 processing cores.

As NEIL grows its visual database, it is expected to gather common sense on what is being dubbed a “massive scale.”

This might seem rather basic, but as the designers noted on the NEIL website, “How does a computer know what a car looks like? How does it know sheeps [sic] are white? Can a computer learn all these just by browsing images on the Internet? We believe so!”

Grammar aside, the designers have already shown some unique findings that could be chalked up to common sense, such as “Deer can be a kind of / look similar to Antelope,” and “Trading Floor can be / can have Crowded.”

This is possible because NEIL leverages recent advances in computer vision that enable programs to identify and label the objects that appear in images. This allows it to characterize scenes and even recognize specific attributes, including colors, lighting and materials, all with a minimum of human supervision.

The data NEIL generates will in turn be used to further enhance the ability of computers to understand the visual world. NEIL can also make associations between these labels to obtain what could be deemed “common sense” information.
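The article does not spell out how NEIL turns labeled images into relationships, but one common approach is co-occurrence counting: labels that repeatedly appear together in the same images become candidate relationships (like “Trading Floor can be Crowded”). The sketch below is illustrative only; the image sets, labels and threshold are invented for the example and are not drawn from NEIL itself.

```python
from collections import Counter
from itertools import combinations

# Toy "images": each is the set of labels a vision system assigned to one image.
# These labels are made up for illustration; NEIL works at a vastly larger scale.
images = [
    {"trading_floor", "crowded"},
    {"trading_floor", "crowded", "monitor"},
    {"deer", "forest"},
    {"antelope", "forest"},
    {"trading_floor", "monitor"},
]

label_counts = Counter()  # how many images contain each label
pair_counts = Counter()   # how many images contain each pair of labels
for labels in images:
    label_counts.update(labels)
    pair_counts.update(frozenset(p) for p in combinations(sorted(labels), 2))

def related(a, b, threshold=0.5):
    """Propose a relationship if the two labels co-occur in a large
    fraction of the images containing the rarer label."""
    co = pair_counts[frozenset((a, b))]
    return co / min(label_counts[a], label_counts[b]) >= threshold

print(related("trading_floor", "crowded"))  # True: they usually appear together
print(related("deer", "crowded"))           # False: they never co-occur
```

With enough images, this kind of statistic is what lets a system surface associations such as “Trading Floor can be Crowded” without a human ever stating the rule.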

“Images are the best way to learn visual properties,” said Abhinav Gupta, assistant research professor in Carnegie Mellon's Robotics Institute, via a statement. “Images also include a lot of common sense information about the world. People learn this by themselves and, with NEIL, we hope that computers will do so as well.”

One of the key motivations for the researchers in devising the NEIL project was to create the world’s largest visual structured knowledge base. With this, objects, scenes, actions, attributes and even contextual relationships can be labeled and cataloged.

Previous projects, including ImageNet and Visipedia, have tried to compile this visual data with human assistance, but the Internet is so vast that the researchers believe the only way to succeed is to teach computers to do it themselves. And while the number of images online might seem daunting to a human, it can actually make computers better learners.

“What we have learned in the last 5-10 years of computer vision research is that the more data you have, the better computer vision becomes,” Gupta said.

The other part of this project is that it could help teach people how to teach computers.

“People don't always know how or what to teach computers,” Gupta added. “But humans are good at telling computers when they are wrong.”