Disney Touches On Augmented Reality With New Technologies
Enid Burns for redorbit.com – Your Universe Online
Disney has a long history with augmented reality, beginning with some of the attractions in its theme parks. Now a Disney unit, Disney Research, is ready to present a number of new augmented reality technologies later this week in Los Angeles at SIGGRAPH 2012, the International Conference on Computer Graphics and Interactive Techniques.
The first technology Disney plans to present is REVEL. Developed at Disney Research, Pittsburgh, it uses reverse electrovibration to add tactile feedback to touch screens and even ordinary objects. The technology can provide haptic feedback in games, apply texture to images projected on surfaces, and even create “please touch” museum displays. Reverse electrovibration could also assist people with disabilities, for example by projecting textures onto walls that convey directional signals or other information.
Reverse electrovibration passes a very weak current through the participant’s body, weaker than the static shock one experiences in a dry winter environment. Modulating that current creates the sensation of vibration or texture on the object the person is touching. The object must be coated with an insulator-covered electrode, which Disney Research calls REVEL Skin; anodized aluminum objects and capacitive touch screens work without modification. Disney Research published a paper and produced a video explaining the technology.
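The modulation idea above can be sketched in a few lines of code. This is a toy illustration only: the waveform shape, sweep speed, frequencies and amplitudes below are hypothetical stand-ins, not REVEL’s actual drive parameters, which the article does not give.

```python
import math

def texture_signal(finger_pos, grating_period=0.005, base_freq=80.0,
                   amplitude=1.0, sample_rate=1000, duration=0.1):
    """Toy sketch: build a drive waveform whose amplitude is modulated
    by a virtual ridged texture under a moving finger. All parameter
    values here are hypothetical, not REVEL's actual settings."""
    samples = []
    for i in range(int(sample_rate * duration)):
        t = i / sample_rate
        # Assume the finger sweeps at a constant 5 cm/s; passing over a
        # "ridge" of the virtual grating raises the signal amplitude.
        x = finger_pos + 0.05 * t
        ridge = 0.5 * (1 + math.sin(2 * math.pi * x / grating_period))
        samples.append(amplitude * ridge * math.sin(2 * math.pi * base_freq * t))
    return samples

wave = texture_signal(0.0)
```

Changing `grating_period` would, in this caricature, change how coarse or fine the simulated texture feels as the finger moves.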
“Augmented reality to date has focused primarily on visual and auditory feedback, but less on the sense of touch,” said Olivier Bau, a postdoc at Disney Research, Pittsburgh. “Sight and sound are important, but we believe the addition of touch can create a really unique and magical experience.”
Disney Research in Pittsburgh has also looked to vegetation for more interactive technologies. Botanicus Interactus is a technology that allows plants to control certain interactions with computers, tablets, phones or other devices. The research team developed a system in which a single wire placed in the soil near a plant can detect whether, and where, the plant is touched. A touch might trigger an application to open on a nearby computer or initiate playback of a song or playlist on an iPod or other device.
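The sensing described above can be caricatured as a profile-matching problem: measure the plant’s electrical response, then compare it with calibration profiles recorded for known touches. The nearest-neighbor matcher and the numeric profiles below are invented for illustration; the article does not describe Botanicus Interactus’s actual signal processing.

```python
def classify_touch(profile, calibration):
    """Match a measured response profile against stored calibration
    profiles (one per touch location) by smallest Euclidean distance.
    Illustrative sketch only; not Disney Research's actual pipeline."""
    best_label, best_dist = None, float("inf")
    for label, ref in calibration.items():
        dist = sum((a - b) ** 2 for a, b in zip(profile, ref)) ** 0.5
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical calibration profiles for three touch states.
calib = {
    "no_touch": [0.1, 0.2, 0.1, 0.0],
    "stem":     [0.9, 0.5, 0.2, 0.1],
    "leaf":     [0.3, 0.8, 0.9, 0.4],
}
print(classify_touch([0.85, 0.45, 0.25, 0.1], calib))  # prints "stem"
```

In this sketch, the recognized label ("stem", "leaf", and so on) is what would then trigger an action such as opening an application or starting playback.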
“Computing is rapidly fusing with our dwelling places and, thanks to touchpads and Microsoft Kinect, interaction with computers is increasingly tactile and gestural,” Ivan Poupyrev, senior research scientist at Disney Research, Pittsburgh explained. “Still, this interaction is limited to computing devices. We wondered – what if a broad variety of everyday objects around us could interact with us?” A demonstration is provided in a video from Disney Research.
Disney Research, working with researchers at Carnegie Mellon University and the LUMS School of Science and Engineering in Pakistan, developed what they call bilinear spatiotemporal basis models to advance computer animation of faces and bodies. The technology helps animators create subtle movements such as facial expressions, body gesticulations and the draping of clothes. Because the model accounts for space and time simultaneously, it is much more compact, powerful and easy to manage. It could also improve animations used in movies, television and video games, which often suffer from unconvincing facial expressions and stiff movements.
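To make the “space and time simultaneously” idea concrete, here is a minimal NumPy sketch of a bilinear factorization: a motion matrix (frames by coordinates) is approximated by a small temporal basis, a tiny coefficient matrix, and a small spatial basis. The SVD-based construction, the matrix sizes and the synthetic data are all assumptions for illustration, not the researchers’ exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, P = 120, 30  # frames and marker coordinates (hypothetical sizes)

# Synthetic smooth "motion": a few sinusoidal modes plus light noise.
t = np.linspace(0, 2 * np.pi, T)[:, None]
data = np.sin(t * np.arange(1, 4)) @ rng.normal(size=(3, P))
data += 0.01 * rng.normal(size=(T, P))

kt, ks = 6, 6  # number of temporal and spatial basis vectors kept
# One simple way to obtain the two bases: truncated SVDs of the data.
Ut, _, _ = np.linalg.svd(data, full_matrices=False)    # temporal modes
Us, _, _ = np.linalg.svd(data.T, full_matrices=False)  # spatial modes
Theta, Phi = Ut[:, :kt], Us[:, :ks]

# Bilinear coefficients: project the data onto both bases at once,
# so the motion is stored as kt*ks numbers instead of T*P.
C = Theta.T @ data @ Phi
approx = Theta @ C @ Phi.T

err = np.linalg.norm(data - approx) / np.linalg.norm(data)
```

The point of the sketch is the compactness claim: the 120-by-30 motion is summarized by a 6-by-6 coefficient block (plus the two shared bases), yet the reconstruction error stays small because the bases capture both the temporal and the spatial regularity of the motion.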
“Simply put, this lets us do things more sensibly with less work,” said Yaser Sheikh, assistant research professor in Carnegie Mellon’s Robotics Institute.
Another development Disney Research is presenting at SIGGRAPH 2012 advances animation in movies and video games alike. Markerless motion capture records a person or object for animation without the markers normally needed to anchor a wireframe.
The technique, which Disney Research, Pittsburgh developed in collaboration with Brown University, captures 3D poses of actors using “biped controllers” that incorporate the underlying physics of the motion. The controllers can also be manipulated to carry out actions beyond the original capture: a 3D motion capture, from a single camera, of someone walking across a room can be animated into a character walking down a slope or crossing a slippery surface.
“We didn’t want to break these problems up, but to do them simultaneously,” said Leonid Sigal, research scientist at Disney Research, Pittsburgh. “Given the current technology, I don’t think you can do them separately.”
Disney Research, Pittsburgh will present all of these technologies at SIGGRAPH, which takes place August 5 through 9 in Los Angeles.