New System Takes High-Resolution 3D Images From Half Mile Away

April 5, 2013
Image Caption: 3-D images of a mannequin (top) and person (bottom) from 325 meters away. The left-hand panels show close-up photos of the targets taken with a standard camera. In the center are 3-D images of these targets taken by the scanner from 325 meters away. On the right is a color-coded map showing the number of photons that bounce off the targets and return to the detector, with black indicating a low number of photons. Notice that human skin does not show up well using the scanner: the mannequin’s face includes depth information, but the person’s face does not. Credit: Optics Express.

Lee Rannals for redOrbit.com – Your Universe Online

Scientists writing in the Optical Society’s (OSA) open-access journal Optics Express say they have developed a new camera system that produces high-resolution 3D images from over half a mile away.

The technique, known as time-of-flight (ToF), is already being used in machine vision, navigation systems for autonomous vehicles, and other applications, but many of these systems have a short range and struggle to image objects that do not reflect laser light. The researchers, led by Gerald Buller, a professor at Heriot-Watt University in Edinburgh, Scotland, say they have overcome these limitations.

Buller says the ToF imaging system can gather high-resolution 3-D information about objects that are typically very difficult to image, at distances of up to 0.6 miles. The system works by sweeping a low-power infrared laser beam rapidly over an object and recording the round-trip flight time of the photons in the beam as they bounce off the object and return to the source. Using a detector that counts individual photons, the system can resolve depth on the millimeter scale over long distances.
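The round-trip-time principle described above can be sketched in a few lines. This is an illustrative calculation only, not the Heriot-Watt team's processing code; the 325-meter target distance comes from the image caption, and the timing figures are derived from the speed of light.

```python
# Sketch of the time-of-flight (ToF) principle: a photon's one-way
# distance is (speed of light x round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def depth_from_round_trip(t_seconds: float) -> float:
    """Convert a photon's round-trip flight time into one-way distance (m)."""
    return C * t_seconds / 2.0

# A target 325 m away (as in the mannequin test) returns photons after
# roughly 2.17 microseconds:
round_trip = 2 * 325.0 / C
print(depth_from_round_trip(round_trip))  # 325.0

# Millimeter-scale depth resolution demands picosecond-scale timing:
# a 1 mm change in one-way distance shifts the round trip by ~6.7 ps.
dt_for_1mm = 2 * 0.001 / C
print(f"{dt_for_1mm * 1e12:.2f} ps")
```

This is why single-photon detectors matter for the approach: resolving millimeters at hundreds of meters requires timing individual photon arrivals to within a few picoseconds.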

Heriot-Watt University Research Fellow Aongus McCarthy, the first author of the paper, said that although other approaches can have exceptional depth resolution, the new system is able to image objects like items of clothing that do not reflect laser pulses easily.

“Our approach gives a low-power route to the depth imaging of ordinary, small targets at very long range,” McCarthy says. “Whilst it is possible that other depth-ranging techniques will match or out-perform some characteristics of these measurements, this single-photon counting approach gives a unique trade-off between depth resolution, range, data-acquisition time, and laser-power levels.”

The researchers say the system will primarily be used to scan static, man-made targets such as vehicles, and it could also be used to determine their speed and direction.

One key characteristic of the system is the long wavelength of laser light the researchers chose, at 1,560 nanometers. This wavelength is longer than visible light and travels more easily through the atmosphere.

The team says the scanner is good at identifying objects hidden behind clutter, but not human skin. However, it would be able to detect humans if they were sweating, according to McCarthy.

He says the device could eventually scan and image objects as far away as six miles.

“It is clear that the system would have to be miniaturized and ruggedized, but we believe that a lightweight, fully portable scanning depth imager is possible and could be a product in less than five years,” McCarthy said.

Next, the team needs to work on the lag time, because it currently takes about five to six minutes from the onset of scanning until a depth image is created. Most of the lag is due to the slow processing time of the team’s computer resources.

“We are working on reducing this time by using a solid-state drive and a higher specification computer, which could reduce the total time to well under a minute. In the longer term, the use of more dedicated processors will further reduce this time,” McCarthy said.

Texas Instruments Incorporated (TI) demonstrated its own ToF technology at the 2013 Consumer Electronics Show. The new 3D ToF image sensor chipset integrates SoftKinetic’s DepthSense pixel technology and runs SoftKinetic’s iisu middleware for finger, hand and full-body tracking. The chipset is inside 3D cameras that control a laptop and a smart TV to access and navigate movies, games and other content.
