27 September 2023

Fake fail: How social media platforms are failing to weed out the fakes

Kate Cox* says researchers have shown that fake social media accounts are cheap to buy and easy to create, and that platforms are bad at removing them.


It’s no secret that every major social media platform is chock-full of bad actors, fake accounts, and bots.

The big companies continually pledge to do a better job weeding out organised networks of fake accounts, but a new report confirms what many of us have long suspected: they’re pretty terrible at doing so.

The report came last week from researchers at the NATO Strategic Communications Centre of Excellence (StratCom).

Over the four-month period between May and August of this year, the research team ran an experiment to see just how easy it is to buy your way into a network of fake accounts, and how hard it is to get the social media platforms to do anything about it.

The research team spent €300 (about A$490) to purchase engagement on Facebook, Instagram, Twitter, and YouTube, the report explains.

That sum bought 3,520 comments, 25,750 likes, 20,000 views, and 5,100 followers.

They then worked backward from those interactions to identify about 19,000 inauthentic accounts being used for social media manipulation.

About a month after buying all that engagement, the research team looked at the status of all those fake accounts and found that about 80 per cent were still active.

So they reported a sample of those accounts to the platforms as fraudulent.

Then came the most damning statistic: three weeks after being reported as fake, 95 per cent of the fake accounts were still active.

“Based on this experiment and several other studies we have conducted over the last two years, we assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behaviour on their platforms,” the researchers concluded.

“Self-regulation is not working.”

Too big to govern

The social media platforms are fighting a distinctly uphill battle.

The scale of Facebook’s challenge, in particular, is enormous.

The company boasts 2.2 billion daily users of its combined platforms.

Broken down by platform, the original big blue Facebook app has about 2.45 billion monthly active users, and Instagram has more than one billion.

Facebook frequently posts status updates about “removing coordinated inauthentic behaviour” from its services.

Each of those updates, however, tends to snag between a few dozen and a few hundred accounts, pages, and groups, usually sponsored by foreign actors.

That’s barely a drop in the bucket compared to the 19,000 fake accounts that one research study uncovered from a single A$490 outlay, let alone the vast ocean of other fake accounts out there in the world.

The issue, however, is both serious and pressing.

A majority of the accounts found in this study were engaged in commercial behaviour rather than political troublemaking.

But attempted foreign interference in both a crucial national election in the UK this month and the high-stakes US federal election next year is all but guaranteed.

The US Senate Intelligence Committee’s report on social media interference in the 2016 US election is expansive and thorough.

The Committee determined Russia’s Internet Research Agency (IRA) used social media to “conduct an information warfare campaign designed to spread disinformation and societal division in the United States,” including targeted ads, fake news articles, and other tactics.

The IRA used and uses several different platforms, the Committee found, but its primary vectors are Facebook and Instagram.

Facebook has promised to crack down hard on coordinated inauthentic behaviour heading into the 2020 US election, but its challenges with content moderation are by now legendary.

Working conditions for the company’s legions of contract content moderators are terrible, as repeatedly reported, and it’s hard to imagine the number of humans you’d need to review the billions of pieces of content posted every day.

Using software tools to recognise and block inauthentic actors is obviously the only way to tackle the problem at any meaningful scale, but the development of those tools is clearly still a work in progress.
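To make that concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of per-account heuristic a detection tool might start from. The feature names, thresholds and weights below are assumptions chosen for demonstration only; they are not drawn from the StratCom report and do not describe how any platform actually scores accounts.

# Illustrative only: a toy heuristic scorer for bot-like accounts.
# Every feature, threshold and weight here is an assumption for demonstration,
# not the detection logic any real platform uses.
from dataclasses import dataclass


@dataclass
class AccountSnapshot:
    age_days: int           # days since the account was created
    followers: int
    following: int
    posts_per_day: float    # average posting rate
    has_profile_photo: bool


def suspicion_score(acct: AccountSnapshot) -> float:
    """Return a score from 0.0 to 1.0; higher means more bot-like on these crude signals."""
    score = 0.0
    if acct.age_days < 30:                      # very new account
        score += 0.25
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 0.25                           # follows far more accounts than follow it back
    if acct.posts_per_day > 50:                 # implausibly high posting rate
        score += 0.3
    if not acct.has_profile_photo:              # default avatar
        score += 0.2
    return min(score, 1.0)


if __name__ == "__main__":
    probable_bot = AccountSnapshot(age_days=5, followers=3, following=900,
                                   posts_per_day=120, has_profile_photo=False)
    print(f"suspicion score: {suspicion_score(probable_bot):.2f}")

In practice, signals this crude catch little on their own; platform-scale detection leans on coordination patterns across many accounts, network analysis and machine-learned classifiers, which is part of why the tooling remains a work in progress.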

* Kate Cox covers tech policy issues, including privacy and antitrust, for Ars Technica.

This article first appeared at arstechnica.com/tech-policy
