Last updated on April 17, 2014 at 8:34 EDT

50 Gigapixel Camera May Be The Future Of Photography

June 20, 2012
Image Credit: Duke University Imaging and Spectroscopy Program

Lee Rannals for redOrbit.com

Imagine having a camera that could take a broad picture of the surrounding landscape, yet still let you crop down to view a single ladybug sitting on a leaf. That’s just about what researchers at Duke University and the University of Arizona have come up with.

Scientists have developed a 50-gigapixel camera; capturing an image at the same resolution would take more than 6,000 Apple iPhones.

The 50,000-megapixel camera may be more of a pipe dream right now, but the scientists believe that as the technology continues to shrink, it could be available to the general public within five years.

The megapixel rating of a camera indicates how much resolution its pictures will have. Pixels are the individual “dots” of data, so the higher the pixel count, the higher the resolution.
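The “more than 6,000 iPhones” comparison above is simple pixel arithmetic. A minimal sketch, assuming a roughly 8-megapixel sensor (typical of 2012-era iPhones; the exact figure is an assumption, not stated in the article):

```python
# Rough arithmetic behind the "more than 6,000 iPhones" comparison.
# The ~8-megapixel-per-phone figure is an assumption for illustration.
camera_pixels = 50 * 1_000_000_000   # 50 gigapixels
iphone_pixels = 8 * 1_000_000        # ~8 megapixels per phone

phones_needed = camera_pixels / iphone_pixels
print(phones_needed)  # 6250.0 -> "more than 6,000 iPhones"
```

A higher-resolution phone sensor would lower the count, which is why the article hedges with “more than 6,000.”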

Team leader David Brady explained how the researchers were able to develop the ultra-high-resolution gigapixel camera.

“Each one of the microcameras captures information from a specific area of the field of view,” Brady said. “A computer processor essentially stitches all this information into a single highly detailed image. In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later.”

He said the main challenge for the researchers was developing high-performance, low-cost microcamera optics and components.

“While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics,” he said.

The prototype camera is two and a half feet square and 20 inches deep. Only about three percent of the camera consists of optical elements; the rest is electronics and processors.

The researchers said the area housing the processors and electronics is what would need to be scaled down to make the camera practical for everyday photographers.

“The camera is so large now because of the electronic control boards and the need to add components to keep it from overheating,” Brady said. “As more efficient and compact electronics are developed, the age of hand-held gigapixel photography should follow.”

The University of Arizona team helped to develop the software that combines the images from the microcameras into one large 50 gigapixel image.

Michael Gehm, team leader and assistant professor of electrical and computer engineering at the University of Arizona, said supercomputer designers face a problem similar to the one posed by the optics.

“Supercomputers face the same problem, with their ever more complicated processors, but at some point the complexity just saturates, and becomes cost-prohibitive,” Gehm said. “Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements.”

He likened the way the camera’s optics and lenses work together to a computer network.

“A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual work stations,” Gehm said. “Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
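The stitching that Brady and Gehm describe, in which each microcamera contributes a tile of the full field of view with some overlap between neighbors, can be sketched in a few lines. This is an illustration only, not the actual AWARE software; the tile size, overlap, and grid dimensions are made-up values, and the overlap is resolved here by simple averaging:

```python
import numpy as np

# Illustrative sketch (not the researchers' actual software): stitch a
# grid of overlapping "microcamera" tiles into one mosaic, averaging
# pixel values wherever neighboring tiles overlap.
TILE = 8        # each tile is TILE x TILE pixels (hypothetical)
OVERLAP = 2     # neighboring tiles share OVERLAP pixels (hypothetical)
GRID = 3        # GRID x GRID microcameras (hypothetical)

step = TILE - OVERLAP
size = step * (GRID - 1) + TILE
mosaic = np.zeros((size, size))
weight = np.zeros((size, size))

# Simulate each microcamera's view as a crop of one shared scene,
# as routed to it by the shared objective lens.
scene = np.arange(size * size, dtype=float).reshape(size, size)
for row in range(GRID):
    for col in range(GRID):
        y, x = row * step, col * step
        tile = scene[y:y + TILE, x:x + TILE]     # what this camera sees
        mosaic[y:y + TILE, x:x + TILE] += tile   # accumulate pixel values
        weight[y:y + TILE, x:x + TILE] += 1      # count contributions

mosaic /= weight                                 # average the overlaps
assert np.allclose(mosaic, scene)                # seamless reconstruction
```

As Gehm notes, the overlap is deliberate: it guarantees no gap between tiles, and in a real system it also gives the stitching software shared features to align neighboring views against.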

The researchers published details of the new camera in the journal Nature.
