New Research Article: Twitter Bots Played Disproportionate Role Spreading Misinformation During 2016 Election
From Indiana University:
An analysis of information shared on Twitter during the 2016 U.S. presidential election has found that automated accounts — or “bots” — played a disproportionate role in spreading misinformation online.
The study, conducted by Indiana University researchers and published today in the journal Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017 — a period that spans the end of the 2016 presidential primaries and the presidential inauguration on Jan. 20, 2017.
Among the findings: A mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the “low-credibility” information on the network. These accounts were also responsible for 34 percent of all articles shared from “low-credibility” sources.
The study also found that bots play a major role in promoting low-credibility content in the first few moments before a story goes viral.
The brevity of this window — 2 to 10 seconds — highlights the challenges of countering the spread of misinformation online. Similar issues arise in other complex environments, such as the stock market, where serious problems can develop in mere moments due to high-frequency trading.
“This study finds that bots significantly contribute to the spread of misinformation online — as well as shows how quickly these messages can spread,” said Filippo Menczer, a professor in the IU School of Informatics, Computing and Engineering, who led the study.
To explore election messages currently shared on Twitter, Menczer’s research group has also recently launched a tool to measure “Bot Electioneering Volume.” Created by IU Ph.D. students, the program displays the level of bot activity around specific election-related conversations, as well as the topics, user names and hashtags they’re currently pushing.
Learn More, Read the Complete Publication Announcement
Direct to Full Text Research Article Discussed Above: The Spread of Low-Credibility Content by Social Bots (via Nature Communications)
From the Abstract:
The massive spread of digital misinformation has been identified as a major threat to democracies. Communication, cognitive, social, and computer scientists are studying the complex causes for the viral diffusion of misinformation, while online platforms are beginning to deploy countermeasures. Little systematic, data-based evidence has been published to guide these efforts. Here we analyze 14 million messages spreading 400 thousand articles on Twitter during ten months in 2016 and 2017. We find evidence that social bots played a disproportionate role in spreading articles from low-credibility sources. Bots amplify such content in the early spreading moments, before an article goes viral. They also target users with many followers through replies and mentions. Humans are vulnerable to this manipulation, resharing content posted by bots. Successful low-credibility sources are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
About Gary Price
Gary Price (gprice@gmail.com) is a librarian, writer, consultant, and frequent conference speaker based in the Washington D.C. metro area. He earned his MLIS degree from Wayne State University in Detroit. Price has won several awards, including the SLA Innovations in Technology Award and Alumnus of the Year from the Wayne State University Library and Information Science Program. From 2006 to 2009 he was Director of Online Information Services at Ask.com.