
Holographic 3D Telepods May Revolutionize Videoconferencing

May 4, 2012
Image Credit: Roel Vertegaal, Human Media Lab at Queen’s University


Michael Harper for RedOrbit.com

A Queen's University researcher has taken inspiration from Star Trek to create an other-worldly way to meet. Using a series of strategically placed projectors and Microsoft Kinect image sensors, Dr. Roel Vertegaal and his team at the Human Media Lab in Canada have created a human-scale 3D videoconferencing pod that allows people in different places to meet as if they were in the same room at the same time.

Taking a slight jab at other popular videoconferencing options, Dr. Vertegaal said, “Why Skype when you can talk to a life-size 3D holographic image of another person?”

Dr. Vertegaal and his team are calling this new technology “TeleHuman,” and it very much resembles the famous holodeck from Star Trek. To speak with one another, two people stand in front of tall, life-sized cylindrical pods. These pods not only display a 3D image of the other person, but also capture 3D video. Projectors inside each pod render a 3D image on the outside of the acrylic tube, while cameras and tracking technology, such as the Kinect sensors, capture the local user as 3D data to be displayed on the remote pod.

Since the pods capture 3D data from all sides, TeleHuman users see a 360-degree image and can even walk around a pod to view the other person's side or back.
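The walk-around effect described above amounts to tracking the viewer's position around the cylinder and rendering the captured view that matches their angle. A minimal sketch of that idea, assuming the remote person is captured from a fixed number of evenly spaced viewpoints (the function name and view count here are illustrative, not the lab's actual implementation):

```python
def select_view(viewer_angle_deg, num_views):
    """Map a viewer's azimuth around the cylinder (0-360 degrees)
    to the index of the nearest of `num_views` evenly spaced
    captured viewpoints of the remote person."""
    step = 360.0 / num_views
    return int(round(viewer_angle_deg / step)) % num_views

# A viewer walking around the pod sweeps through the captured views,
# seeing the remote person's front, side, and back in turn:
angles = [0, 90, 180, 270]
views = [select_view(a, num_views=8) for a in angles]
```

In a real system the viewer's angle would come from head tracking (e.g. the Kinect sensors), and intermediate views would likely be interpolated rather than snapped to the nearest camera, but the mapping from position to viewpoint is the core of the 360-degree effect.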

Dr. Vertegaal said he and his team were able to assemble the devices mostly using existing hardware, like a 3D projector, convex mirrors and of course, the life-sized acrylic tubes.

The Human Media Lab has other plans for this next-generation technology past videoconferencing.

The BodiPod has a similar application, presenting an interactive 3D image of the human body on the acrylic tube and allowing users to study a full-size virtual model of the human body in 360 degrees. The model can also be explored through a series of gestures and voice commands. For example, using a downward-swiping gesture, a student can remove individual layers of tissue from the model. In “X-Ray mode,” the closer a student gets to the model, the deeper into the anatomy they can see, revealing bone structure, muscle structure, and internal organs. Voice commands, such as “show brain” or “show heart,” automatically display a 3D model of those organs.
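The downward-swipe gesture above can be sketched as a simple heuristic over a tracked hand's height samples, of the kind a Kinect skeleton stream provides. This is an illustrative sketch only; the function name and thresholds are assumptions, not the Human Media Lab's actual gesture recognizer:

```python
def is_downward_swipe(hand_heights, min_drop=0.3, max_rise=0.02):
    """Detect a downward swipe in a sequence of tracked hand heights
    (metres): the hand must fall by at least `min_drop` overall,
    with no single upward jump between samples larger than `max_rise`."""
    if len(hand_heights) < 2:
        return False
    for prev, cur in zip(hand_heights, hand_heights[1:]):
        if cur - prev > max_rise:   # hand moved back up: not a swipe
            return False
    return hand_heights[0] - hand_heights[-1] >= min_drop

swipe = [1.40, 1.30, 1.18, 1.05]   # hand falling steadily ~0.35 m
wave  = [1.40, 1.30, 1.42, 1.30]   # hand moving up and down
```

When a swipe is detected, the application would peel away the current tissue layer of the model; the voice commands would be handled separately by a speech recognizer mapping phrases like “show brain” to organ models.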

In their scientific papers, the Human Media Lab researchers say they want to create better, more immersive experiences than those offered by simple Skype and FaceTime videoconferencing, as well as by the more in-depth and sophisticated Polycom RealPresence immersive systems.

While those videoconferencing systems don't offer 3D projection, the Polycom RealPresence system, for example, does offer a life-sized immersive experience, broadcasting high-end audio and video meant to make users feel as if they are in the same room.

Dr. Vertegaal and his team plan to load up their TeleHuman and BodiPod tubes and head south to Austin, Texas next week for CHI 2012, the premier international conference on human-computer interaction.




