A new study of user behavior on Facebook around the 2020 election is likely to strengthen critics’ long-standing arguments that the company’s algorithms help to fuel the spread of misinformation over more trustworthy sources.
The forthcoming peer-reviewed study by researchers at New York University and the Université Grenoble Alpes in France found that from August 2020 to January 2021, news publishers known for putting out misinformation got six times as many likes, shares, and other interactions on the platform as trustworthy news sources did.
Ever since “fake news” on Facebook became a public concern following the 2016 presidential election, publishers who traffic in misinformation have been repeatedly shown to be able to gain major audiences on the platform. But the NYU study is one of the few comprehensive attempts to measure and isolate the misinformation effect across a wide group of publishers on Facebook, experts said, and its conclusions support the criticism that Facebook’s platform rewards publishers that put out misleading accounts.
The study “helps add to the growing body of evidence that, despite a variety of mitigation efforts, misinformation has found a comfortable home and an engaged audience on Facebook,” said Rebekah Tromble, director of the Institute for Data, Democracy and Politics at George Washington University, who reviewed the study’s findings.
Facebook pushed back, arguing that the study measured the number of people who engaged with the content, not the number of people who viewed it. “This report looks mostly at how people engage with content, which should not be confused with how many people actually see it on Facebook,” said Facebook spokesman Joe Osborne.
“When you look at the content that gets the most reach across Facebook, it is not at all like what this study suggests.” Osborne added that the company has 80 fact-checking partners covering more than 60 languages who work to label false information and reduce its distribution.