Misinformation Can Spread Like Wildfire On Twitter
March 19, 2014

Twitter Has Pros And Cons During Crisis

redOrbit Staff & Wire Reports - Your Universe Online

Twitter can be enormously helpful during an emergency, but it can be just as detrimental when misinformation is rapidly spread during a crisis.

Researchers from the University of Washington studied last year’s Boston Marathon bombings and found that significant amounts of misinformation spread widely on Twitter despite users’ efforts to correct inaccurate rumors.

The bombings, which took place on April 15, 2013, when two explosions occurred near the finish line of the Boston Marathon, killed three people. Three days later, the FBI released photographs and surveillance video of two suspects and sought the public’s help in identifying the men, prompting considerable speculation across mainstream media outlets and social media sites, particularly Twitter. After a shooting on the Massachusetts Institute of Technology campus and a manhunt, one of the suspects was shot dead and the other arrested on the evening of April 19.

Throughout the entire series of events, a flood of tweets was published on Twitter using hashtags such as #boston, #prayforboston, #mit and #manhunt. At the same time, a series of inaccurate rumors surfaced and spread quickly before corrections began appearing. Even then, the corrective tweets were minimal compared with the volume of tweets that had spread the misinformation.

“We could see very clearly the negative impacts of misinformation in this event,” said study author Kate Starbird, an assistant professor in the University of Washington’s Department of Human Centered Design & Engineering.

“Every crisis event is very different in so many ways, but I imagine some of the dynamics we’re seeing around misinformation and organization of information apply to many different contexts. A crisis like this allows us a chance to see it all happen very quickly, with heightened emotions.”

Starbird, whose research focuses on the use of social media in crisis events, began recording the stream of tweets about 20 minutes after the finish line bombing. She and her team compiled the entire dataset – some 20 million tweets – to fill in gaps when the sheer volume of tweets coming in was too large to capture in real-time.

The researchers analyzed the text, timestamps, hashtags and metadata in 10.6 million tweets to first identify rumors, then code tweets related to the rumors as either “misinformation,” “correction” or “other.”

For instance, the team analyzed the rumor that an 8-year-old girl had died in the bombings. The researchers first identified tweets containing the words “girl” and “running,” then pared that down to roughly 92,700 that were related to the rumor. They then found that about 90,700 of these tweets were spreading misinformation, while only about 2,000 were corrections.
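The filter-then-code workflow described above can be sketched roughly as follows. This is only an illustration of the approach, not the study's actual pipeline: the field names, cue words and heuristic labeling are assumptions (the researchers' coding of tweets was a manual analysis, not an automated rule).

```python
from dataclasses import dataclass

# Hypothetical tweet record; the fields are illustrative, not the study's schema.
@dataclass
class Tweet:
    text: str
    label: str = "other"  # becomes "misinformation" or "correction" after coding

def matches_rumor(text: str) -> bool:
    """First-pass keyword filter, as in the article: keep tweets
    containing both 'girl' and 'running'."""
    lowered = text.lower()
    return "girl" in lowered and "running" in lowered

# Assumed cue words for the coding step; the study coded tweets by hand.
CORRECTION_CUES = ("not true", "false", "rumor", "debunked")

def code_tweet(tweet: Tweet) -> Tweet:
    """Label a rumor-related tweet as a correction or as misinformation."""
    if any(cue in tweet.text.lower() for cue in CORRECTION_CUES):
        tweet.label = "correction"
    else:
        tweet.label = "misinformation"
    return tweet

stream = [
    Tweet("So sad: an 8-year-old girl died while running the marathon"),
    Tweet("The 8-year-old girl running story is false, it was debunked"),
    Tweet("Thoughts with everyone in Boston today"),
]

related = [code_tweet(t) for t in stream if matches_rumor(t.text)]
counts: dict[str, int] = {}
for t in related:
    counts[t.label] = counts.get(t.label, 0) + 1
print(counts)  # {'misinformation': 1, 'correction': 1}
```

At the study's scale the same two-stage idea applies: a cheap keyword filter narrows 10.6 million tweets to the roughly 92,700 rumor-related ones, and a more careful coding pass then separates misinformation from corrections.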

Although the Twitter community offered corrections within an hour of the rumor’s appearance, the misinformation nevertheless persisted long after correction tweets had faded away, the researchers said.

“An individual tweet by itself is kind of interesting and can tell you some fascinating things about what was happening, but it becomes really interesting when you understand the larger context of many tweets and can look at patterns over time,” said Jim Maddock, a UW undergraduate student in Human Centered Design & Engineering and history, who performed most of the computational data analysis for the study.

Previous research analyzing the spread of misinformation on Twitter during the 2010 earthquake in Chile found that Twitter users actually crowd-corrected the rumors before they gained traction. However, that study excluded all retweets, which the University of Washington team found to be a substantial portion of the tweets spreading misinformation.

The researchers said they hope to develop a real-time tool that could let users know when a particular tweet is being challenged as untrue by other tweets. The tool would not attempt to determine whether a tweet was actually true or false, but would merely track instances where one tweet is contested by another.

“We can’t objectively say a tweet is true or untrue, but we can say, ‘This tweet is being challenged somewhere, why don’t you research it and then you can hit the retweet button if you still think it’s true,’” Maddock said.

“It wouldn’t necessarily affect that initial spike of misinformation, but ideally it would get rid of the persisting quality that misinformation seems to have where it keeps going after people try to correct it.”
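The article describes the proposed tool only at a high level; a minimal sketch of the idea might look like the following, where the class, cue words and threshold are all assumptions rather than anything from the study.

```python
from collections import defaultdict

# Assumed cue phrases that signal a reply is challenging the original tweet.
CHALLENGE_CUES = ("not true", "false", "unconfirmed", "fake", "debunked")

class ContestTracker:
    """Tracks how often each tweet is contested, without judging truth."""

    def __init__(self) -> None:
        self.challenges: dict[str, int] = defaultdict(int)

    def observe_reply(self, original_id: str, reply_text: str) -> None:
        # Record a challenge; no attempt is made to verify the original tweet.
        if any(cue in reply_text.lower() for cue in CHALLENGE_CUES):
            self.challenges[original_id] += 1

    def warn_before_retweet(self, tweet_id: str, threshold: int = 1) -> bool:
        """True if the user should be prompted to research before retweeting."""
        return self.challenges[tweet_id] >= threshold

tracker = ContestTracker()
tracker.observe_reply("t1", "This is not true, please stop spreading it")
tracker.observe_reply("t1", "Stay safe everyone")
print(tracker.warn_before_retweet("t1"))  # True  (challenged at least once)
print(tracker.warn_before_retweet("t2"))  # False (never challenged)
```

As Maddock notes, such a tool sidesteps the hard problem of deciding truth: it only surfaces the existence of a challenge so the user can investigate before hitting retweet.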

The researchers are now studying the relationship between the website links shared within tweets and the quality of the information spread during the Boston bombings. They are also conducting interviews with bystanders who were close to the finish line to see what effect proximity had on information sharing.

The researchers presented their findings earlier this month at iConference 2014 in Berlin, where they received a top award for their work.