
Take The Bore Out Of Long Videos Using New LiveLight Application

June 25, 2014

Alan McStravick for redOrbit.com – Your Universe online

The Vine video sharing app ensured people could become instant filmmakers without having to worry too much about the hassle of editing. Sure, there are some interesting camera tricks that take time and effort to make appear seamless over the allotted six-second films, but on the whole, Viners tend to approach the art of filmmaking the way SEAL Team 6 performs high-level target extractions: get in and get out.

But what happens when your videos, captured on your cellphone, Google Glass or GoPro exceed the cinema brevité that is non-threatening to the modern attention span? New research by computer scientists from Carnegie Mellon University may have the answer.

Called LiveLight, the team’s invention is a video-highlighting technique meant to sift through a recorded video and present only the good parts to the viewer. This revolution in video-watching could eliminate the parenthetical instructions under a video recommending which minute and second to skip to in order to see the really interesting bits. LiveLight does this by evaluating the action in the video, seeking out visual novelty that stands out from the repetition of the many frames around it. A summary is then produced that gives the viewer a fairly good idea of what occurs in the video without having to sit through the monotony or, worse yet, miss the good parts by fast-forwarding in frustration.

The team does concede that their method is not yet comparable to an edited video, but they see it being particularly helpful in reviewing long video of an event, a security camera feed or dash-cam footage from inside a police cruiser. Previously, the only way to review such video was to sit, watch and wait for something to happen.

For the filmmaker on the go, LiveLight will likely be welcomed by users of GoPro cameras or Google Glass for the useful and potentially cost-saving features it offers. LiveLight can automatically digest video from those devices and quickly upload thumbnail trailers to your preferred social media platform. This could save money for users who don’t enjoy unlimited data from their mobile provider. Also, as noted above, save for a select few, not many people really enjoy the editing work it takes to turn a tedious video into an exciting one.

The inventors of LiveLight have created a startup, PanOptus Inc., through which they plan to bring both personal and commercial versions of their auto-summarization application to market.

As the team explains, the video summarization occurs in what they call ‘quasi-real-time,’ requiring just a single pass through the video. On a typical laptop or desktop computer, LiveLight can summarize one hour of recorded video in approximately one to two hours. The computer scientists note that in a more powerful computing facility, that time could be reduced to just minutes.
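
As a back-of-envelope illustration of what that throughput implies (the 30 frames-per-second rate below is our own assumption, not a figure from the team):

```python
# Back-of-envelope: what does "one hour of video in one to two hours" imply?
FPS = 30                                   # assumed frame rate; not stated by the team
frames = FPS * 3600                        # 108,000 frames in one hour of video

for processing_hours in (1, 2):
    rate = frames / (processing_hours * 3600)
    print(f"{processing_hours} h of processing -> ~{rate:.0f} frames analyzed per second")
# 1 h -> ~30 frames/s (real time); 2 h -> ~15 frames/s (half of real time)
```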

Expect to hear more about this LiveLight technology: Eric P. Xing, professor of machine learning, and Bin Zhao, a PhD student in the Machine Learning Department at Carnegie Mellon, will present their work tomorrow at the Computer Vision and Pattern Recognition (CVPR) conference in Columbus, Ohio.

“The algorithm never looks back,” said Zhao, whose research specialty is computer vision. As the video is processed, the algorithm builds a stored dictionary of the content it has already “viewed.” It then compares incoming frames against that dictionary to determine whether they are similar to what has come before, such as routine traffic on a highway, or dissimilar. Frames rated as similar simply don’t make the cut for the LiveLight summary. Anything that pops into frame and breaks the monotony of previously recorded material, such as an erratic car or a traffic accident, would make the cut.
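
To give a rough sense of how a single-pass, dictionary-based novelty filter of this kind might be structured, here is a minimal Python sketch. The color-histogram features, cosine similarity and 0.4 threshold are our own illustrative assumptions, not details of the CMU implementation.

```python
import numpy as np

def summarize_stream(frames, novelty_threshold=0.4):
    """Single-pass novelty filter in the spirit of the description above.

    `frames` is an iterable of per-frame feature vectors (e.g., color
    histograms as NumPy arrays). The features, similarity measure and
    threshold here are illustrative assumptions, not LiveLight's own.
    """
    dictionary = []   # features of content the algorithm has already "seen"
    highlights = []   # (frame index, novelty score) kept for the summary

    for i, feat in enumerate(frames):
        feat = np.asarray(feat, dtype=float)
        feat = feat / (np.linalg.norm(feat) + 1e-8)   # normalize for cosine similarity

        if dictionary:
            # How similar is this frame to anything already in the dictionary?
            best_match = max(float(feat @ d) for d in dictionary)
            novelty = 1.0 - best_match    # ~0 = routine, ~1 = brand new
        else:
            novelty = 1.0                 # the very first frame is always novel

        if novelty > novelty_threshold:
            highlights.append((i, novelty))
            dictionary.append(feat)       # grow the dictionary; never revisit old frames

    return highlights
```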

The LiveLight algorithm is capable of operating completely autonomously, but the team explains that people can be brought in to assist in compiling a summary if they prefer. When someone chooses to interact with the program, LiveLight presents a ranked list of the sequences it has determined are novel compared to the bulk of frames in the video. The human editor can then select among them to help create the final video. The editor can also restore footage that LiveLight excluded from its summary, which could be important for adding context or providing visual transitions before and after sequences of interest.
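
Continuing the sketch above, a ranked review step might look something like this; the simple sort-by-score ranking and top-10 cutoff are assumptions made purely for illustration, not how LiveLight actually orders its candidates.

```python
def ranked_for_review(highlights, top_k=10):
    """Rank candidate moments for a human editor, most novel first.

    `highlights` is the (frame index, novelty score) list produced by the
    sketch above; the ranking scheme and top-k cutoff are illustrative.
    """
    ranked = sorted(highlights, key=lambda h: h[1], reverse=True)[:top_k]
    for rank, (frame_idx, score) in enumerate(ranked, start=1):
        print(f"{rank:2d}. frame {frame_idx:6d}   novelty {score:.2f}")
    return ranked
```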

“We see this as potentially the ultimate unmanned tool for unlocking video data,” Xing said in a statement. As noted at the beginning of this article, we are witnessing the rise of the amateur auteur. But the laborious process of editing is far less enjoyable than pointing and recording, and as a result, ever larger volumes of video are going unwatched.

“The interesting moments captured in those videos thus go unseen and unappreciated,” Xing concluded.

Support for this research was provided, in part, by Google, the National Science Foundation, the Office of Naval Research and the Air Force Office of Scientific Research.

