
Researchers Reverse Engineer The Rules Of The Forest

July 25, 2013

Lee Rannals for redOrbit.com – Your Universe Online

Researchers writing in the Proceedings of the National Academy of Sciences have designed a new algorithm that can help uncover the hidden rules governing the forest.

Predicting possible outcomes from a set of rules that contain uncertain factors is done using stochastic prediction. Until now, however, scientists have found it far more difficult to work in the opposite direction: inferring what the rules were simply by observing the outcomes.
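The forward direction the article describes is routinely done with stochastic simulation. The sketch below is a minimal Gillespie-style simulation of a hypothetical fox-and-rabbit system; the three reactions and their rate constants are illustrative assumptions, not the model from the paper.

```python
import random

# Hypothetical predator-prey reactions (illustrative, not the paper's model):
#   rabbit        -> 2 rabbits   (prey reproduction, rate k1)
#   rabbit + fox  -> 2 foxes     (predation, rate k2)
#   fox           -> nothing     (predator death, rate k3)

def simulate(rabbits, foxes, k1=1.0, k2=0.005, k3=0.6,
             t_end=10.0, max_steps=10_000, seed=0):
    rng = random.Random(seed)
    t = 0.0
    history = [(t, rabbits, foxes)]
    while t < t_end and len(history) < max_steps:
        # Propensities: how likely each reaction is to fire next.
        a1 = k1 * rabbits          # prey birth
        a2 = k2 * rabbits * foxes  # predation
        a3 = k3 * foxes            # predator death
        total = a1 + a2 + a3
        if total == 0:
            break  # both populations extinct
        # Waiting time to the next reaction is exponentially distributed.
        t += rng.expovariate(total)
        # Pick which reaction fires, proportional to its propensity.
        r = rng.random() * total
        if r < a1:
            rabbits += 1
        elif r < a1 + a2:
            rabbits -= 1
            foxes += 1
        else:
            foxes -= 1
        history.append((t, rabbits, foxes))
    return history

history = simulate(100, 20)
```

Running many such simulations from known reactions yields a distribution of possible population trajectories; the Cornell team's contribution is the reverse problem.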

The team published new insight into automated stochastic inference that could unravel the hidden laws in fields ranging from molecular biology to population ecology to basic chemistry. With this new algorithm, they have devised a way to take intermittent samples and infer the likely reactions that produced them. For example, the algorithm could take a once-a-year count of the prey and predator species in a forest and infer the interaction rules that led to those numbers.

Essentially, the team’s method works backward from traditional stochastic modeling, which typically uses known reactions to simulate possible outcomes. With this new approach, the team takes outcomes and comes up with reactions.

“This could be very useful if you wanted to learn the driving rules for not just foxes and rabbits, but any evolving system with interacting agents,” said Hod Lipson, associate professor of mechanical and aerospace engineering and of computing and information science at Cornell University. “There is a whole lot of science that is based on this kind of modeling.”

In one experiment, the team studied the fluctuating numbers of microorganisms in a closed ecosystem. The algorithm came up with reactions that correctly identified the predators, the prey and the dynamic rules that defined their interactions. Their key insight was to look at relative changes in the concentration of the interacting agents, irrespective of the time at which such changes were observed.
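That insight can be illustrated loosely in code: treat the direction of change between successive samples as a time-free, geometric feature, and score candidate reactions by how well their net population changes align with it. This is a heavily simplified illustration under assumed data and candidate reactions, not the published algorithm.

```python
import math

# Hypothetical yearly census: (rabbits, foxes) sampled once per year.
samples = [(100, 20), (140, 24), (120, 45), (70, 60), (50, 40)]

# Candidate reactions, each described only by its net change to
# (rabbits, foxes) -- the stoichiometric "direction" it pushes the system.
candidates = {
    "prey birth":     (+1,  0),
    "predation":      (-1, +1),
    "predator death": ( 0, -1),
}

def direction(a, b):
    """Unit vector pointing from sample a toward sample b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    norm = math.hypot(dx, dy)
    return (dx / norm, dy / norm) if norm else (0.0, 0.0)

# Score each candidate by its average cosine similarity with the
# observed inter-sample directions -- no timestamps needed.
scores = {}
for name, (sx, sy) in candidates.items():
    snorm = math.hypot(sx, sy)
    total = 0.0
    for a, b in zip(samples, samples[1:]):
        dx, dy = direction(a, b)
        total += (dx * sx + dy * sy) / snorm
    scores[name] = total / (len(samples) - 1)
```

Candidate reactions whose change vectors align with the observed changes score higher, regardless of when the samples were taken.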

“We figured out that there’s what’s called an invariant geometry, a geometrical feature of the data set that you can uncover even from sparse intermittent samples, without knowing any of the underlying rules,” said study co-author Chattopadhyay. “The geometry is a function of the rules, and once you find that out, there is a way to find out what the reactions are.”

Lipson said the bigger picture of the study was to give scientists a better tool for taking massive amounts of data and coming up with insightful explanations.

“This is a tool in a suite of emerging ‘automated science’ tools researchers can use if they have data from some experiment, and they want the computer to help them understand what’s going on – but in the end, it’s the scientist who has to give meaning to these models,” Lipson said.




