27 September 2023

Facing facts: How Facebook hopes to curb the spread of fake news


Jacob Kastrenakes* says Facebook has moved to punish groups for repeatedly spreading fake news on its platforms.


Facebook announced a handful of updates last week that are designed to reduce the reach of harmful content across its platform.

One of the most notable changes involves groups: posts from groups that “repeatedly share misinformation” will now be distributed to fewer people in the News Feed.

That’s an important change, as groups and Pages were frequently used to distribute propaganda and misinformation around the 2016 US elections.

Another change could help cut low-quality publications from the News Feed overall.

Facebook says it’s now starting to measure whether publishers are a big deal in general, instead of just popular on the platform, when determining how much News Feed promotion they should get.

It sounds a little bit like the way Google tends to rank search results: if a website is frequently linked to by other sites, the system starts to learn that it’s a trusted source.

This should help Facebook tell whether a publisher is broadly valued or has just figured out how to game the News Feed.
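To illustrate the analogy only (this is not Facebook’s or Google’s actual system), here is a minimal, hypothetical sketch in Python of link-based authority scoring in the spirit of PageRank, using a made-up set of sites: sites that are linked to by many other sites end up with higher scores.

    def pagerank(links, damping=0.85, iterations=50):
        # links maps each site to the list of sites it links to
        sites = set(links)
        for targets in links.values():
            sites.update(targets)
        n = len(sites)
        rank = {site: 1.0 / n for site in sites}

        for _ in range(iterations):
            new_rank = {site: (1.0 - damping) / n for site in sites}
            for site, targets in links.items():
                if targets:
                    # a site passes a share of its own score to each site it links to
                    share = damping * rank[site] / len(targets)
                    for target in targets:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # A made-up link graph: "outlet-a" is linked to by every other site,
    # so it ends up with the highest score.
    toy_graph = {
        "outlet-a": ["outlet-b"],
        "outlet-b": ["outlet-a"],
        "outlet-c": ["outlet-a", "outlet-b"],
        "blog-d": ["outlet-a"],
    }

    scores = pagerank(toy_graph)
    for site in sorted(scores, key=scores.get, reverse=True):
        print(site, round(scores[site], 3))

The point of the sketch is simply that a score built from who links to whom across the wider web is hard to inflate from inside a single platform, which is why it can separate broadly valued publishers from ones that have only learned to game the News Feed.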


Facebook is also making some small changes around fact-checking stories.

The Associated Press is now going to start fact-checking some videos in the US, and Facebook will start including “Trust Indicators” when users click to see context around a publication.

Those indicators come from The Trust Project, a group built by news organisations that makes those determinations.

Some features on WhatsApp designed to reduce misinformation are also coming to Messenger.

Facebook says it’s already started rolling out forward indicators, which let people know when a message has been forwarded to them, and “context buttons”, which let people look up more details about information they’ve been sent.

Facebook has been chipping away at its misinformation and propaganda problems since shortly after the 2016 US elections.

Its efforts have involved adding fact-checkers, limiting the spread of problematic stories, and trying to highlight when stories have been flagged as fake.

But Facebook’s platforms still face the same problems today, as bad actors continue to find ways to abuse the system.

In addition to making changes to limit the spread of misinformation, Facebook is also making some small changes designed to keep users safe.

Those include:

  • A more detailed blocking tool on Messenger.
  • Bringing verified profile badges to Messenger.
  • Letting people remove their posts and comments from a group, even after they’ve left it.
  • Adding more information around “Page quality,” including its “status with respect to clickbait” (in case you couldn’t tell from the headlines, I guess).
  • Reducing the spread of content that doesn’t quite warrant a ban on Instagram (Facebook says “a sexually suggestive post” might be allowed to remain on followers’ feeds, but not appear in Explore).
  • Increasing scrutiny of groups’ moderation when determining if a group is violating rules.

* Jake Kastrenakes is Reports Editor for The Verge. He tweets at @jake_k.

This article first appeared at www.theverge.com.
