26 September 2023

Facing facts: How Facebook has been cleaning up its services


Tony Romm* says Facebook has taken action against tens of millions of posts for breaking rules on hate speech, harassment and child exploitation.


Facebook took action against tens of millions of posts, photos and videos over the past six months for violating its rules prohibiting hate speech, harassment and child sexual exploitation – illustrating the vast scale of the tech giant’s task in ridding its services of harm and abuse.

The company revealed the data about its policy enforcement to the world as part of its latest transparency report, which it said reflected its still-improving efforts to use artificial intelligence to spot harmful content before users ever see it and outwit those who try to evade its censors.

The report did not break down the actions by country.

During the second and third quarters of 2019, Facebook said it removed or labelled more than 54 million pieces of content it deemed violent and graphic, 18.5 million items determined to be child nudity or sexual exploitation, 11.4 million posts that broke its rules prohibiting hate speech and 5.7 million uploads that ran afoul of bullying and harassment policies.

The company also detailed for the first time its efforts to police Instagram, revealing that it took action against more than 1.2 million photos or videos involving child nudity or exploitation and 3 million that ran afoul of its policies prohibiting sales of illegal drugs over that six-month period.

In all four categories, Facebook took action against more content between 1 April and 30 September than it did in the six months prior.

Previously, the company targeted nearly 53 million pieces of content for excessive violence, 13 million for child exploitation, 7.5 million for hate speech and 5.1 million for bullying.

Facebook attributed some of the spike in violations to its efforts to tighten its rules and more actively search for and find abusive posts, photos and videos before users report them.

Speaking to reporters on 13 November, Facebook CEO Mark Zuckerberg warned against concluding that “because we’re reporting big numbers, that must mean there’s so much more harmful content happening on our service than others”.

“What it says is we’re working harder to identify this and take action on it,” he said.

Still, Facebook’s latest transparency report arrives as regulators around the world continue to call on the company – and the rest of Silicon Valley – to be more aggressive in stopping the viral spread of harmful content, such as disinformation, graphic violence and hate speech.

A series of high-profile failures over the past year have prompted some lawmakers to threaten to pass new laws holding tech giants responsible for failing to police their sites and services.

The calls for regulation intensified after the deadly shooting in Christchurch, New Zealand, in March.

Video of the gunman attacking two mosques spread rapidly on social media – including Facebook – evading tech companies’ expensive systems for stopping such content from going viral.

Last week, Facebook offered new data about that incident, reporting that it had removed 4.5 million pieces of content related to the attack between 15 March – the day it occurred – and 30 September, nearly all of which it spotted before users reported it.

Facebook also touted recent improvements in its use of artificial intelligence.

Facebook said it detected 80 per cent of the hate speech it removed before users reported it – a lower rate than in other categories, but still an improvement for the tech giant, which has struggled to take swift action against content that targets people on the basis of race, gender, ethnicity or other sensitive traits.

In presenting the data, Zuckerberg took a shot at other tech companies for their decision to publish far less data about the content they take down and the means by which they remove it.

The Facebook chief didn’t mention Google – which owns YouTube – or Twitter by name.

But his proposed solution – new regulation around transparency reporting – would affect those two competitors and the rest of Silicon Valley.

“As a society, we don’t know how much of this harmful content is out there and which companies are making progress,” he said.

* Tony Romm is a technology policy reporter at The Washington Post. He tweets at @TonyRomm.

This article first appeared at www.washingtonpost.com
