27 September 2023

Why Facebook’s misinformation problem goes deeper than you think


Russell Brandom* writes that researchers have set out a prescription in a new report for fixing Facebook’s problem with the spread of misinformation.


In the face of the coronavirus outbreak, Facebook’s misinformation problem has taken on new urgency.

Last week, Facebook joined seven other platforms in announcing a hard line on virus-related misinformation, which they treated as a direct threat to public welfare.

But a report published a day later by Ranking Digital Rights made the case that Facebook’s current moderation approach may be unable to meaningfully address the problem.

According to the researchers, the problem is rooted in Facebook’s business model: data-targeted ads and algorithmically optimised content.

We spoke with one of the co-authors, senior policy analyst Nathalie Maréchal, about what she sees as Facebook’s real problem — and what it would take to fix it.

The report makes the case that the most urgent problem with Facebook isn’t privacy, moderation, or even antitrust, but the basic technology of personalised targeting.

Why is it so harmful?

“Somehow, we’ve ended up with an online media ecosystem that is designed not to educate the public or get accurate, timely, actionable information out there, but to enable advertisers … to influence as many people in as frictionless a way as possible,” Maréchal says.

She says the same ecosystem that is really optimised for influence operations is also what we use to distribute news and public health information, connect with our loved ones, and so on.

“The system works to various extents at all those different purposes,” Maréchal says.

“But we can’t forget that what it’s really optimised for is targeted advertising.”

She says the main problem is that ad targeting itself allows anyone with the motivation and the money to spend to break the audience into finely tuned pieces and send a different message to each piece.

“And it’s possible to do that because so much data has been collected about each and every one of us in service of getting us to buy more cars, buy more consumer products, sign up for different services, and so on,” Maréchal says.

“Mostly, people are using that to sell products, but there’s no mechanism whatsoever to make sure that it’s not being used to target vulnerable people to spread lies.”

“What our research has shown is that while companies have relatively well-defined content policies for advertising, their targeting policies are extremely vague.”

“You can’t use ad targeting to harass or discriminate against people, but there isn’t any kind of explanation of what that means.”

“And there’s no information at all about how it’s enforced.”

“At the same time, because all the money comes from targeted advertising, that incentivises all kinds of other design choices for the platform, targeting your interests and optimising to keep you online for longer and longer.”

“It’s really a vicious cycle where the entire platform is designed to get you to watch more ads and to keep you there, so that they can track you and see what you’re doing on the platform and use that to further refine the targeting algorithms.”
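Maréchal’s description boils down to a simple pattern: collected profile data lets a buyer carve an audience into narrow segments and show each segment a different message. The Python sketch below is a toy illustration of that pattern only; the profile fields, predicates and messages are all hypothetical and do not reflect any platform’s actual systems.

```python
# Toy illustration of data-driven audience segmentation (all names invented).
from dataclasses import dataclass


@dataclass
class Profile:
    user_id: int
    age: int
    interests: set[str]
    region: str


def segment(audience: list[Profile], predicate) -> list[Profile]:
    """Return the slice of the audience matching an arbitrary predicate."""
    return [p for p in audience if predicate(p)]


audience = [
    Profile(1, 67, {"gardening", "local news"}, "rural"),
    Profile(2, 22, {"gaming", "crypto"}, "urban"),
    Profile(3, 45, {"parenting", "wellness"}, "suburban"),
]

# One campaign, different copy for each finely tuned segment.
campaign = [
    (lambda p: p.age >= 60, "message tailored to older users"),
    (lambda p: "wellness" in p.interests, "message tailored to wellness readers"),
]

for predicate, message in campaign:
    for profile in segment(audience, predicate):
        print(f"deliver to user {profile.user_id}: {message}")
```

The point of the sketch is the one the report makes: nothing in the mechanism itself distinguishes a product pitch from a targeted lie, because the predicate can select a vulnerable slice of the audience just as easily as a likely customer.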

Maréchal says part of the goal is to have more transparency over how ads are targeted.

The report calls for greater transparency and auditability for content recommendation engines, the algorithms that determine what appears in your news feed or which video plays next on YouTube.

She says it’s about explaining what the logic is, or what the algorithm is optimised for.

“Is it optimised for quality? Is it optimised for scientific validity?” Maréchal says.

“We need to know what it is that the company is trying to do.”

“And then there needs to be a mechanism whereby researchers … maybe even an expert government agency further down the line, can verify that the companies are telling the truth about these optimisation systems.”

Maréchal says viral content shares certain characteristics that are mathematically determined by the platforms.

The algorithms check, among other things, whether a piece of content is similar to other content that has gone viral before; if it is, the content is boosted because it is likely to keep people engaged.
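As a rough way to picture that logic, here is a minimal Python sketch assuming an invented similarity rule: content whose features overlap enough with previously viral items gets its distribution amplified. It illustrates the idea Maréchal describes, not how any platform’s ranking actually works.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two sets of content features, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0


# Feature sets of items that went viral before (purely invented).
previously_viral = [
    {"outrage", "celebrity", "short_video"},
    {"health_scare", "miracle_cure", "share_prompt"},
]


def boost_factor(features: set[str], threshold: float = 0.5) -> float:
    """Amplify distribution when content resembles past viral content."""
    best = max(jaccard(features, viral) for viral in previously_viral)
    return 10.0 if best >= threshold else 1.0  # look-alikes get ten times the reach


new_post = {"health_scare", "miracle_cure", "celebrity"}
print(boost_factor(new_post))  # shares 2 of 4 features with a past viral item -> 10.0
```

Nothing in a rule like this checks whether the content is true; it only checks whether it looks like the kind of thing people have engaged with before.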

“The boosting of organic content has the same driving logic behind it as the ad targeting algorithms,” Maréchal says.

“One of them makes money by actually having the advertisers pull out the credit cards, and the other makes money because it’s optimised for keeping people online longer.”

She says if there is less algorithmic boosting optimised for the company’s profit margins, misinformation should be less widely distributed.

“People will still come up with crazy things to put on the internet,” Maréchal says.

“But there is a big difference between something that only gets seen by five people and something that gets seen by 50,000 people.”

“We’ve been asking the platforms to be transparent about these kinds of things for more than five years,” Maréchal says.

“And they’ve been making progress in disclosing a bit more every year.”

“But there’s a lot more detail that civil society groups would like to see.”

* Russell Brandom is Policy Editor at The Verge.

This article first appeared at www.theverge.com.
