
Recreating Live Events From Multiple Mobile Cameras In 3D

April 24, 2014
Image Credit: Thinkstock.com

Brett Smith for redOrbit.com – Your Universe Online

Anyone who has been to a large concert lately probably knows that many fans are busy capturing video of the stage during the show.

An EU-funded project called SCENENET aims to take this side-effect of the ubiquity of smartphones and turn it into something positive – a three-dimensional recreation of the concert experience.

The effort is being led by Chen and Nizan Sagiv, two Israel-based researchers who conceived the idea while attending a Depeche Mode concert in Tel Aviv several years earlier.

“While I was busy looking at the show, Nizan was watching the crowds,” Chen said in a recent statement. “He could not help noticing the huge number of faint lights from mobile phone screens. People were taking videos of the show. Nizan thought that combining all the videos taken by individuals into a synergetic, enhanced and possibly 3D video could be an interesting idea. We discussed the concept for many months, but it looked too futuristic, risky and complicated.”

After assembling a collaborative team of European engineers, the pair secured $1.8 million in funding from the European Commission. Coordinated by the Sagivs’ Israel-based company SagivTech, the project is slated to run until January 2016 and includes four European partners: the University of Bremen, Steinbeis Innovation and European Research Services, all in Germany, and Switzerland’s EPFL.

The first year of the project covered the development of the mobile infrastructure, including a system for labeling video files and transmitting them to the cloud. The engineers also built the basic components of a human-computer interface that will let users view the 3D video from any viewpoint ‘in the arena’ and edit the footage themselves. This, the team said, will help online communities form around the content and relive the concert experience together. The partners plan to address privacy and intellectual property rights during the next phase of the project.
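The article does not describe how clips are labeled and uploaded, so the short Python sketch below is purely illustrative. The ingestion URL, the metadata fields and the upload_clip helper are all hypothetical stand-ins for whatever SCENENET actually built; the point is simply that each clip travels with the timing and location data a later multi-camera registration stage would need.

import json
import time
import requests  # third-party HTTP library

# Hypothetical ingestion endpoint -- SCENENET's real API is not public.
INGEST_URL = "https://example.com/scenenet/ingest"

def upload_clip(path, device_id, lat, lon):
    """Send one video clip to the cloud, labeled with the metadata
    a multi-camera registration stage would need."""
    metadata = {
        "device_id": device_id,      # which phone shot the clip
        "captured_at": time.time(),  # capture timestamp (epoch seconds)
        "lat": lat,                  # rough GPS position in the arena
        "lon": lon,
    }
    with open(path, "rb") as clip:
        response = requests.post(
            INGEST_URL,
            files={"video": clip},
            data={"metadata": json.dumps(metadata)},
        )
    response.raise_for_status()
    return response.json()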

“We have, at the end of the first year and sooner than expected, built the entire SCENENET pipeline based on current state-of-the-art components,” Chen said.

The SCENENET team said they had to overcome multiple technological challenges: on-device pre-processing, which demands tremendous computing power; efficient transmission of the video; the development of accurate and fast registration between the video streams; and 3D modeling – all of which must run at near real-time rates.
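The article does not say which registration method SCENENET uses. A common textbook baseline for aligning two overlapping views is to match local image features and fit a homography with RANSAC; the OpenCV sketch below illustrates that idea, with frame_a.jpg and frame_b.jpg as placeholder inputs rather than anything from the project.

import cv2
import numpy as np

# Load one frame from each of two phones filming the same stage.
frame_a = cv2.imread("frame_a.jpg", cv2.IMREAD_GRAYSCALE)
frame_b = cv2.imread("frame_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features (ORB is cheap enough for mobile use).
orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(frame_a, None)
kp_b, des_b = orb.detectAndCompute(frame_b, None)

# Match descriptors between the two views.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# Estimate a homography from the best matches, rejecting outliers with RANSAC.
src = np.float32([kp_a[m.queryIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in matches[:200]]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("Estimated homography:\n", H)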

“We believe that the various components that make up SCENENET, e.g. registration of images and 3D reconstruction, have great potential for mobile computing and cloud computing. Thus SCENENET offers a huge technological breakthrough – in its whole and also via each of its components,” Chen added.

The researchers said the system could also be used to recreate other events in 3D, such as breaking news or sporting events, or be applied in the tourism and surveillance industries. The engineers are also considering capturing stationary as well as moving objects from multiple angles in order to generate instructions that can be sent to 3D printers.
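Again, the project's actual 3D-modeling pipeline is not detailed in the article. One standard building block for reconstructing an object shot from several angles is triangulating matched points from two calibrated views; the sketch below uses OpenCV's triangulatePoints with made-up projection matrices and pixel coordinates, purely to show the shape of the computation.

import cv2
import numpy as np

# Hypothetical 3x4 projection matrices for two calibrated cameras
# (identity intrinsics assumed for simplicity).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)       # camera 1 at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])]).astype(np.float32)  # camera 2 shifted along x

# Matched 2D points in each view, as 2xN arrays (one column per point).
pts1 = np.array([[100.0, 150.0], [120.0, 160.0]], dtype=np.float32).T
pts2 = np.array([[ 90.0, 150.0], [110.0, 160.0]], dtype=np.float32).T

# Triangulate to homogeneous 4xN world points, then dehomogenize.
points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
points_3d = (points_h[:3] / points_h[3]).T
print(points_3d)  # one (x, y, z) row per matched point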

“SCENENET revolves around mobile cameras and 3D vision,” the researchers said. “The invasion of mobile cameras and their continuously improved quality has meant we are flooded with images we want to enhance and show off. Many devices that ‘understand’ visual inputs are being developed – Google Glass, for instance – where most of this work is based on image processing and computer vision. 3D vision is becoming more important for better visualization of the world on one hand, and easier analysis of the world on the other hand.”




