Last updated on April 20, 2014 at 8:28 EDT

Solar Flare Prediction Systems Not As Reliable As Previously Thought

July 6, 2013
Image Caption: An M-class flare appears on the lower right of the sun on June 7, 2013. This image was captured by NASA's Solar Dynamics Observatory in the 131 Angstrom wavelength, a wavelength of UV light that is particularly good for seeing flares and that is typically colorized in teal. Credit: NASA/SDO

John P. Millis, Ph.D. for redOrbit.com – Your Universe Online

The residents of Earth have a vested interest in being able to accurately predict solar activity, particularly events such as solar flares and coronal mass ejections (CMEs). These eruptions spew energetic charged particles into the solar system at tremendous speeds, with the potential to damage orbiting satellites and even ground-based power grids. Our first line of defense against solar activity is knowing when to temporarily shut down systems or cover sensitive equipment.

Unsurprisingly, there are a number of systems that are used to predict solar activity using various measurements of our Sun. But how well do they work?

New research from Dr. Shaun Bloomfield at Trinity College Dublin, Ireland, indicates that many of these systems are not as good as previously thought. This is mostly because the method used to assess them was itself flawed.

“The most important aspect of any type of forecast is how it performs,” said Bloomfield in a recent statement. “If we always say, ‘flare expected today’, we will have successfully predicted all flares. However, we would be crying wolf and be completely wrong on most days, as flares can occur quite far apart in time. We need to be accurate in both our predictions of when flares will occur and when they won’t for this to be of real value to society.”

As a result, his team is proposing a new assessment tool for evaluating flare monitoring systems. Put simply, the True Skill Statistic (TSS) is calculated by taking the fraction of correct flare forecasts out of all flares observed, minus the fraction of false alarms out of all non-flares observed.
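That definition can be sketched in a few lines of code. This is a minimal illustration, not code from the study; the function name and the example counts are invented for demonstration.

```python
def true_skill_statistic(hits, misses, false_alarms, correct_rejections):
    """TSS = (correct flare forecasts / all observed flares)
           - (false alarms / all observed non-flares)."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate - false_alarm_rate

# A "cry wolf" forecaster that predicts a flare every day catches every
# flare (no misses) but also flags every quiet day (no correct rejections),
# so its TSS is 1.0 - 1.0 = 0.0 -- no skill at all.
print(true_skill_statistic(hits=20, misses=0,
                           false_alarms=345, correct_rejections=0))  # 0.0
```

This makes Bloomfield's point concrete: a forecaster who always cries wolf scores a perfect hit rate but zero skill, because the false-alarm rate cancels it out. A perfect forecaster would score 1, and random guessing scores 0.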

“The benefit of the TSS over other ratings scores is that it is not changed by the number of flares or non-flares observed. We can make a proper comparison of forecast systems, regardless of whether they have made 50 or 5000 predictions. Even so, surveys with small data sets are still prone to noise and their results must be considered less reliable,” said Bloomfield.

Applying the TSS to seven monitoring systems (there are others, but the data needed to calculate the TSS was not available for them), the team found that some of the more complicated systems, such as those employing adaptive techniques like artificial neural networks, did no better than those that simply measured the shape of sunspots.

“If we are to move forward in developing a standard ratings system for flare predictions that produces meaningful results, we need to encourage solar forecasters to be more open about publishing their results. As well as the number of flares correctly predicted, we need to know numbers of correct non-flare predictions, false alarms and missed flares. If these differences in flare statistics are not taken into account properly, some methods can appear to perform better than others when in reality they are the same or worse,” said Bloomfield.

Bloomfield presented the findings this week at the Royal Astronomical Society’s National Astronomy Meeting in St. Andrews, Scotland.
