NASA Reveals Herculean Process Of Handling Big Data

October 18, 2013
Image Caption: The center of the Milky Way galaxy imaged by NASA's Spitzer Space Telescope is displayed on a quarter-of-a-billion-pixel, high-definition 23-foot-wide (7-meter) LCD science visualization screen at NASA's Ames Research Center in Moffett Field, Calif. Credit: NASA/Ames/JPL-Caltech

Brett Smith for redOrbit.com – Your Universe Online

If you’ve ever downloaded a movie from the Internet, you know that large amounts of data can take time to transfer and process on a standard computer. Now imagine the herculean task NASA faces in processing the constant flood of incoming data, from the relatively simple signal of the Voyager probe emanating from beyond our Solar System to the high-resolution images being downloaded from various orbiting telescopes.

“Scientists use big data for everything from predicting weather on Earth to monitoring ice caps on Mars to searching for distant galaxies,” said Eric De Jong of NASA’s Jet Propulsion Laboratory (JPL) and principal investigator for NASA’s Solar System Visualization project.

De Jong’s project team converts NASA mission science into visualization products that researchers can use for various projects.

“We are the keepers of the data, and the users are the astronomers and scientists who need images, mosaics, maps and movies to find patterns and verify theories,” De Jong said.

To manage the massive amount of data from space, NASA first needs a place to store it. After the torrents of information are stored, the space agency needs a way to visualize the data in digestible form.

That’s where De Jong’s team comes in. The staff of the Solar System Visualization project is continually developing new ways to visualize data. For example, the project assembles movies from images taken by NASA’s Mars Reconnaissance Orbiter, each of which contains about 120 megapixels. De Jong’s team also creates countless computer graphics and animations that help scientists and the public better understand the Red Planet.
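To give a sense of what assembling a movie from frames that large involves, here is a minimal Python sketch using the open-source Pillow and imageio libraries. The filenames, frame rate, and output size are hypothetical, and the pipeline is an illustration of the general technique rather than the project's actual tooling.

```python
import numpy as np
import imageio.v2 as imageio
from PIL import Image

# 120-megapixel frames exceed Pillow's default "decompression bomb"
# threshold, so lift the limit for these trusted, known-large files.
Image.MAX_IMAGE_PIXELS = None

# Hypothetical list of full-resolution orbiter frames on local disk.
frames = ["frame_000.png", "frame_001.png", "frame_002.png"]

# Stream frames one at a time so a 120-megapixel image never has to
# share memory with its neighbors; downsample each to HD before writing.
# (Writing MP4 with imageio requires the imageio-ffmpeg backend.)
with imageio.get_writer("flyover.mp4", fps=24) as writer:
    for path in frames:
        full = Image.open(path)              # full-resolution frame
        small = full.resize((1920, 1080))    # downsample for display
        writer.append_data(np.asarray(small))
```

Streaming one frame at a time is the key design choice here: holding even a few raw 120-megapixel frames in memory at once would strain an ordinary workstation.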

“Data are not just getting bigger but more complex,” said De Jong. “We are constantly working on ways to automate the process of creating visualization products, so that scientists and engineers can easily use the data.”

NASA also prioritizes making its vast quantities of data easily searchable.

“If you have a giant bookcase of books, you still have to know how to find the book you’re looking for,” said Steve Groom, manager of NASA’s Infrared Processing and Analysis Center at the California Institute of Technology, Pasadena.

The center, which archives data from a number of NASA astronomy missions, lets users search across its entire holdings at once, making it possible to look for large-scale patterns.

“Astronomers can also browse all the ‘books’ in our library simultaneously, something that can’t be done on their own computers,” Groom said.
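For a concrete picture of that kind of archive-wide access, the community-maintained astroquery package offers programmatic queries against IRSA, the archive operated by the center Groom manages. The sketch below runs a cone search of the 2MASS All-Sky Point Source Catalog; the sky coordinates are arbitrary, and the import path reflects recent astroquery releases.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astroquery.ipac.irsa import Irsa  # 'astroquery.irsa' in older releases

# Cone search of the 2MASS All-Sky Point Source Catalog ('fp_psc')
# around an arbitrary position. The query runs server-side against the
# full archive, so no catalog files need to live on the user's machine.
position = SkyCoord(ra=10.68 * u.deg, dec=41.27 * u.deg, frame="icrs")
results = Irsa.query_region(position, catalog="fp_psc",
                            spatial="Cone", radius=2 * u.arcmin)
print(results[:5])  # an astropy Table of matching sources
```

Because the search executes at the archive rather than on the user's laptop, it scales to the "whole library at once" browsing Groom describes.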

“No human can sort through that much data,” noted Andrea Donnellan of JPL, who has a similar task for the NASA-funded QuakeSim project, which aggregates massive data sets to study earthquake processes.

The QuakeSim project’s images and data plots allow scientists to study how earthquakes happen and to develop long-term preventive strategies. The data include GPS measurements from hundreds of locations in California, each recording countless readings over time. The project’s scientists build software tools to help users comb through the flood of data.
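As a toy illustration of the kind of processing such tools automate, the Python sketch below fits a linear trend to a synthetic GPS displacement series to estimate one station's long-term velocity. The numbers are invented, and the method is a simple stand-in for QuakeSim's far more sophisticated analyses.

```python
import numpy as np

# Synthetic daily east-component displacements for one GPS station (mm),
# standing in for a real time series; all values here are hypothetical.
rng = np.random.default_rng(0)
days = np.arange(365 * 5)                     # five years of daily solutions
true_rate = 0.08                              # mm/day (~29 mm/yr), illustrative
east = true_rate * days + rng.normal(0, 2.0, days.size)

# Least-squares linear fit: the slope estimates the station's secular
# velocity, the long-term motion used to study how faults accumulate strain.
rate_per_day, offset = np.polyfit(days, east, 1)
print(f"Estimated velocity: {rate_per_day * 365.25:.1f} mm/yr")
```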

As NASA’s pool of data assets continues to grow, the agency will need to develop new methods to manage the flow. As new tools develop, so will the space agency’s capacity to understand our universe and the world.


Source: Brett Smith for redOrbit.com - Your Universe Online


