January 15, 2025
Does fact-checking work? Here’s what the science says
Communication and disinformation researchers reveal the value of fact-checking, where perceived biases come from, and what Meta’s decision might mean.

Meta plans to ditch its third-party fact-checking program in favor of X-style “community notes.”
PA Images/Alamy Stock Photo
It is said that a lie can fly halfway around the world while the truth is still putting on its boots. The fight against online falsehoods and misinformation got a little harder this week, when Facebook’s parent company Meta announced plans to end the platform’s fact-checking program, which was founded in 2016 and pays independent groups to verify selected articles and posts.
The company said the move was intended to counter political bias and censorship on the part of fact-checkers. “Experts, like everyone, have their own biases and perspectives. This has been reflected in the choices some have made about what to fact-check and how,” Joel Kaplan, Meta’s head of global affairs, wrote on January 7.
Nature spoke with communication and disinformation researchers about the value of fact-checking, where perceived biases come from, and what Meta’s decision might mean.
Positive influence
When it comes to convincing people that information is true and trustworthy, “fact-checking does work,” says Sander van der Linden, a social psychologist at the University of Cambridge, UK, who acted as an unpaid adviser to Facebook’s fact-checking program in 2022. “Research provides very consistent evidence that fact-checking at least partially reduces misperceptions about false claims.”
For example, a 2019 meta-analysis on the effectiveness of fact-checking in more than 20,000 people found a “significantly positive overall effect on political beliefs.”
“Ideally, we want to prevent people from forming misperceptions in the first place,” van der Linden adds. “But if we have to work with people who have already been exposed, then reducing those misperceptions is about as good as it’s going to get.”
Fact-checking is less effective when an issue is polarized, says Jay Van Bavel, a psychologist at New York University in New York City. “If you’re fact-checking something around Brexit in the UK or the election in the United States, fact-checks don’t work very well there,” he says. “In part, that’s because partisans don’t want to believe things that make their party look bad.”
But even when fact-checks don’t seem to change people’s minds on contentious issues, they can still help, says Alexios Mantzarlis, a former fact-checker who directs the Security, Trust, and Safety Initiative at Cornell Tech in New York City.
On Facebook, articles and posts deemed false by fact-checkers are flagged with a warning. The platform’s recommendation algorithms also show flagged content to fewer users, Mantzarlis says, and people are more likely to ignore flagged content than to read and share it.
Flagging posts as problematic can also have knock-on effects on other users that aren’t captured by studies of fact-checking’s effectiveness, says Kate Starbird, a computer scientist at the University of Washington in Seattle. “Measuring the direct effect of labels on users’ beliefs and actions is different from measuring the broader implications of having these fact checks in the information ecosystem,” she adds.
More misinformation, more red flags
As for bias among Meta’s fact-checkers, Van Bavel agrees that misinformation from the political right gets fact-checked and flagged as problematic, on Facebook and other platforms, more often than misinformation from the left. But he offers a simple explanation.
“In large part, it’s because conservative misinformation is what’s being spread more,” he says. “When one party, at least in the United States, is spreading most of the disinformation, the fact-checkers seem biased because that party gets called out a lot more.”
There are data to support this. A study published in Nature last year showed that although conservative political figures on X (formerly Twitter) were more likely than liberals to be suspended from the platform, they were also more likely to share information from news sites that a representative group of laypeople rated as low quality.
“If you wanted to know whether a person has been exposed to misinformation online, the best predictor is whether they are politically conservative,” says Gordon Pennycook, a psychologist at Cornell University in Ithaca, New York, who worked on the analysis.
Next steps
Meta CEO Mark Zuckerberg has said that instead of third-party fact-checking, Facebook could adopt a system like “community notes” used by X, in which corrections and context are collected from users and added to posts.
Research shows that such systems can also work to correct misinformation, up to a point. “The way it’s implemented on X doesn’t work very well,” says van der Linden. He points to an analysis last year which found that community notes on X were often added too late to reduce engagement with problematic posts, because they came after false claims had already spread widely. Keith Coleman, vice-president of product at X, told Reuters last year that the community notes program “sets a high bar for notes to be effective and to maintain trust.”
“Crowdsourcing is a useful solution, but it depends on how it’s implemented in practice,” van der Linden adds. “Replacing fact-checking with community notes seems like it would make things a lot worse.”
This article is reproduced with permission and was first published on January 10, 2025.