Communicating With Computers Goes To The Next Level
October 10, 2012


Michael Harper — Your Universe Online


Computers are fantastic machines, no matter the size, category or application. For all the conveniences computers have given the modern world, they're still largely dependent on the human element, as they can only perform those tasks that we tell them to. Even the automated tasks have to be programmed by someone.

Hence the importance of communication between human and machine via feedback and interfaces. As technologies advance and computers become more sophisticated, computer scientists and developers strive to move beyond the keyboard, mouse and even "modern" touch interfaces, looking for other ways to have computers respond to our commands.

This week, two separate research teams have come up with new ways to have this conversation with machines, including the ability to turn any flat surface into a multi-touch interface as well as a way for computers to guide the user's hand as it reaches for a particular object.

Researchers from Purdue University have developed a system in which a person's hands and finger gestures can be recognized on walls, tables and other surfaces without the need for external screens or displays.

Niklas Elmqvist, an assistant professor of electrical and computer engineering at Purdue University, co-authored the paper explaining how this technology can be used to interact with computers on a myriad of surfaces.

“Imagine having giant iPads everywhere, on any wall in your house or office, every kitchen counter, without using expensive technology,” said Elmqvist in a press release.

"You can use any surface, even a dumb physical surface like wood. You don't need to install expensive LED displays and touch-sensitive screens."

In addition to interacting with a machine on a regular wall through a series of gestures and postures, this new “extended multi-touch” system can even distinguish different hands based on unique, individual traits. This kind of recognition means multiple people can use the same surface at the same time. The system can even distinguish the difference between left and right hands, allowing for two-handed controls.

While testing this new system, the Purdue research team found it is 98 percent accurate in determining hand posture, a critical aspect of identifying gestures. This new system could have many different applications, says Karthik Ramani, Purdue's Donald W. Feddersen Professor of Mechanical Engineering.

“You could use it for living environments, to turn appliances on, in a design studio to work on a concept or in a laboratory, where a student and instructor interact.”

Like many other futuristic ideas, this system currently uses Microsoft's Kinect camera, which is capable of capturing motion and movement in three-dimensional space. "We project a computer screen on any surface, just a normal table covered with white paper," explained Ramani.

“The camera sees where your hands are, which fingers you are pressing on the surface, tracks hand gestures and recognizes whether there is more than one person working at the same time.”

Since the Kinect camera can also detect how far away from the surface the user's hand is, the system can accurately distinguish gestures, such as calling up a menu by hovering a hand over the surface, and can even accept text input as the user writes (without ink, of course) on the surface.
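The depth-based distinction the article describes can be pictured as a simple threshold test on the gap between a tracked fingertip and the projected surface. The following sketch is purely illustrative: the threshold values and function names are assumptions for explanation, not parameters published by the Purdue team.

```python
# Illustrative sketch of touch-vs-hover classification from depth data.
# TOUCH_MM and HOVER_MM are assumed example thresholds, not the
# Purdue system's actual parameters.

TOUCH_MM = 10   # fingertip within 10 mm of the surface counts as a touch
HOVER_MM = 80   # within 80 mm counts as a hover (e.g., to call up a menu)

def classify_fingertip(fingertip_depth_mm, surface_depth_mm):
    """Classify a fingertip as 'touch', 'hover' or 'none' based on its
    distance above the projected surface plane, as seen by a depth camera."""
    gap = surface_depth_mm - fingertip_depth_mm  # height above the surface
    if gap < 0:
        gap = 0  # sensor noise can place the finger "below" the plane
    if gap <= TOUCH_MM:
        return "touch"
    if gap <= HOVER_MM:
        return "hover"
    return "none"
```

Running the same test per frame over every tracked fingertip is enough to separate writing strokes (touches) from menu-summoning hovers, which is the behavior the article attributes to the system.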

The Purdue team hopes that as the cameras improve, so too will the accuracy with which this system receives its input. According to Ramani, the models they've created thus far are a great starting point for the future of this kind of system.


Elsewhere in the world, researchers at Helsinki Institute for Information Technology (HIIT) and the Max Planck Institute for Informatics have shown how the human-computer conversation can take place on the other side of the proverbial table, as computers send feedback to the user.

Using vision-based hand tracking and vibration feedback, these researchers have developed a glove which can actually guide a user's hand when looking for a specific object, such as a box of bolts in a warehouse or a particular title at the library. The prototype glove uses vibrations to "tug" the user's hand in the right direction.

Ville Lehtinen of HIIT is the main researcher on this project. In the press release announcing a study of this technology, he explained: "The advantage of steering a hand with tactile cues is that the user can easily interpret them in relation to the current field of view where the visual search is operating. This provides a very intuitive experience, like the hand being 'pulled' toward the target."

One of the main benefits to this technology is just how cheaply this device can be built using off-the-shelf components such as vibrotactile actuators, a glove and, of course, a Microsoft Kinect sensor. Lehtinen and his team also published an algorithm which can find the most efficient way to reach a certain item based on the distance between it and the hand.
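The basic idea of guiding a hand with vibrotactile cues can be sketched in a few lines: pick whichever motor points most nearly toward the target, and stop vibrating once the hand arrives. This is a hypothetical illustration of the general technique, not the algorithm the HIIT and Max Planck researchers published; the actuator layout and arrival threshold are assumptions.

```python
# Illustrative sketch (not the published HIIT/Max Planck algorithm):
# choose which vibration motor on a glove to pulse in order to "tug"
# the hand toward a target, given positions from a vision-based tracker.

import math

# Assumed layout: four actuators, one per direction in the camera's 2D view.
ACTUATORS = {"right": (1, 0), "left": (-1, 0), "up": (0, 1), "down": (0, -1)}

def next_cue(hand_xy, target_xy, arrive_mm=20):
    """Return the name of the actuator to pulse, or None once the hand
    is within arrive_mm of the target."""
    dx = target_xy[0] - hand_xy[0]
    dy = target_xy[1] - hand_xy[1]
    if math.hypot(dx, dy) <= arrive_mm:
        return None  # close enough: stop vibrating
    # Pulse the actuator whose direction best matches the hand-to-target vector
    # (largest dot product).
    return max(ACTUATORS, key=lambda a: dx * ACTUATORS[a][0] + dy * ACTUATORS[a][1])
```

Calling `next_cue` each frame as the tracker updates the hand position yields a stream of directional pulses that converges on the target, which is the "tugging" sensation the article describes.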

The Finnish team wanted to put this prototype through its paces, gradually increasing the complexity of the searches it was asked to do while throwing in a few distractions to simulate real-world experiences. “In search tasks where there were hundreds of candidates but only one correct target, users wearing the glove were consistently faster, with up to three times faster performance than without the glove,” said Dr. Antti Oulasvirta, also from the Max Planck Institute for Informatics.

The researchers also note that, in addition to helping workers and shoppers find the correct items, this glove could help pedestrians navigate unfamiliar territory.

Technology continues to move forward at an ever-quickening pace, and the way humans interact with it will inevitably evolve as well. Microsoft's Kinect has already thrown open the doors to all sorts of innovation in interface research. With systems like these two in place, computers could one day guide our hands as well as integrate themselves into any surface we need, advancing not only functionality but design as well.

The conversation between man and machine is ongoing, but in the future, the way in which we speak may be dramatically different, with an air of familiarity and a deep understanding of one another.