
The buzz term “fake news” was popularized during the 2016 Presidential election, when lies by any other name spread across social media networks, including Twitter, quite like never before. Many blamed “bots” – computerized accounts – for the rapid proliferation.

But a new study in the journal Science concludes that it was humans, not the “bots,” who spread the lies faster.

“Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it,” write the scientists, from the Massachusetts Institute of Technology.

The researchers looked at 126,000 stories that had been retweeted on Twitter more than 4.5 million times by roughly 3 million people. The scientists then verified each story using six independent fact-checking organizations, including Snopes, PolitiFact, and FactCheck, which had to be in near-unanimous agreement on which “facts” were bogus.

The professors found “cascades” of rumors and misinformation that swamped the actual happenings in the world.

While true stories rarely diffused to more than 1,000 people, the top one percent of “fake news” cascades routinely reached between 1,000 and 100,000 sets of eyes, they add.

Other statistics from the number crunching: false news stories are 70 percent more likely to be retweeted than true ones; true stories take, on average, six times as long as lies to reach 1,500 people; and unbroken retweet chains of falsehoods reach a given “cascade” depth 10 to 20 times faster than those of facts.

“We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude,” said Sinan Aral, one of the MIT authors of the latest paper.

The professors’ interest in the “fake news” phenomenon began with the 2013 Boston Marathon terror attack, when the Boston-based academics monitored the “breaking” misinformation that emerged in the hours after the bombing. One such false report identified a missing Brown University student as a suspect in the attack, and apparently began on Reddit before spreading to other social media networks. The student’s body was eventually found a week after the attack, days after the actual terrorists were captured.

Another paper, also in Science, collects the expertise of 15 academics from various Ivy League schools and other institutions of higher learning. The group calls for a multidisciplinary counterattack against false information on the Internet – and especially for cooperation from Facebook, Twitter, and the other companies that act as gatekeepers of these platforms. Among the suggestions: more instruction in high schools about false information, and improved algorithms to curb the spread of lies.

“The challenge is there are so many vulnerabilities we don’t yet understand and so many different pieces that can break or be gamed or manipulated when it comes to fake news,” said Filippo Menczer, a professor at Indiana University – Bloomington, and one of the authors. “It’s such a complex problem that it must be attacked from every angle.”
