
Smartphone Chip Could Herald A New Era Of Photography

February 20, 2013
Image Credit: Photos.com

Lee Rannals for redOrbit.com — Your Universe Online

Your Instagram photos might be getting a quality upgrade one day, thanks to a new processor chip developed by MIT scientists.

A team at MIT's Microsystems Technology Laboratory built a new chip that can instantly convert your smartphone photographs into professional-looking images by creating more realistic or enhanced lighting. The chip can be integrated into any smartphone, tablet computer, or digital camera.

“We wanted to build a single chip that could perform multiple operations, consume significantly less power compared to doing the same job in software, and do it all in real time,” Rahul Rithe, a graduate student in MIT's Department of Electrical Engineering and Computer Science, said in a statement.

One of the operations the chip performs is known as High Dynamic Range, or HDR, imaging. This function is designed to compensate for limitations on the range of brightness that can be recorded by existing digital cameras.

With this function, the chip’s processor automatically takes three separate “low dynamic range” images with the camera, including a normal exposure, an overexposure and an underexposure. After the three photos are taken, they are merged together.
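The merging step described above can be illustrated with a simple exposure-fusion sketch in NumPy. This is not the chip's actual algorithm — the Gaussian "well-exposedness" weighting and its parameters are illustrative assumptions — but it captures the basic idea of favoring whichever of the three exposures best captures each pixel:

```python
import numpy as np

def hdr_merge(under, normal, over):
    """Merge three aligned low-dynamic-range exposures (pixel values
    in 0..1) by weighting each pixel by how well-exposed it is:
    values near mid-gray (0.5) score highest."""
    stack = np.stack([under, normal, over])            # shape: (3, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)      # normalize per pixel
    return (weights * stack).sum(axis=0)               # weighted blend
```

Because the output is a per-pixel weighted average of the three inputs, it stays within the valid brightness range while recovering detail from both the shadows (overexposed shot) and the highlights (underexposed shot).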

This concept is not new: Apple first made HDR available with the iPhone 4, and photographers have used the technique for years to bring out details that a single normal exposure might not reveal.

What sets the chip's HDR function apart is speed: it processes images fast enough that the technique could potentially be applied to video.

Flashes that come standard on smartphones, and even on ordinary digital cameras, tend to wash out faces and produce unflattering lighting. The MIT chip instead enhances the lighting in a darkened scene more realistically than flash photography. For this function, the processor takes two images, one with a flash and one without, and splits each into a base layer and a detail layer. It then merges the images, preserving the natural ambiance from the base layer of the non-flash shot while extracting fine detail from the picture taken with the flash.

The chip also reduces noise in an image by blurring undesired pixels with their surrounding neighbors. By using a bilateral filter, the researchers were able to preserve edges and outlines that are normally washed out by conventional noise-reduction methods.
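A bilateral filter weights each neighboring pixel by both its spatial distance and its difference in brightness, so smooth regions get averaged while sharp edges survive. A minimal NumPy sketch of the filter (parameter values are illustrative, and the real chip implements this in dedicated hardware rather than Python):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving denoise: average each pixel with its neighbors,
    down-weighting neighbors that are far away spatially (sigma_s) or
    very different in intensity (sigma_r)."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    num = np.zeros((h, w))
    den = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            # spatial weight falls off with distance from the center pixel;
            # range weight falls off with intensity difference (edge guard)
            weight = (np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                      * np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2)))
            num += weight * shifted
            den += weight
    return num / den
```

Because pixels across a strong edge differ greatly in intensity, their range weight is near zero and they barely contribute — which is why the edge stays crisp while noise within flat regions is smoothed away.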

“As algorithms such as bilateral filtering become more accepted as required processing for imaging, this kind of hardware specialization becomes more keenly needed,” said Michael Cohen of Redmond, Washington-based Microsoft Research.

He called it a nicely crafted component that could bring computational photography applications to more energy-starved devices.

Last year, researchers announced that they had created a 50 gigapixel camera, which is the equivalent of using 6,000 Apple iPhones. Just three percent of the camera consists of the optical elements, while the rest is made of the electronics and processors.

With camera technology still on the rise, only time will tell what our future smartphones will be capable of when it comes to digital photography.




