26 September 2023

Sharing the love: Why there is value in Facebook’s hate speech test


Rachel Kaser* says the test Facebook is preparing to deploy to help detect and crack down on hate speech is a better idea than many sceptics might think.


Early last week, several Facebook users noticed a new option on all their Facebook posts: a small button at the bottom asking “Does this post contain hate speech?”

It appeared on the most innocuous posts, including cat photos and invitations to boba tea parties.

If you selected the “yes” option, you’d be directed to a second set of choices.

Those options were “Hate speech”, “Test P1”, “Test P2” and so forth.

Clearly the quiz was not ready for primetime, and Facebook later confirmed it was a test that had gone live prematurely.

To say this attracted some derision and scepticism would be a bit of an understatement.

Derision because the quiz was so obviously an unfinished test, and scepticism about its purpose.

What can Facebook’s users do to define hate speech that experts and Facebook itself haven’t already done?

I say it’s worth finding out.

The quiz may have been prompted in part by questions posed to CEO Mark Zuckerberg during his Congressional testimony last month, when Senator John Thune asked him what his company was doing to improve its detection of hate speech.

Zuckerberg said he and his team were trying to develop AI familiar enough with the nuances of human speech to catch it, but that wouldn’t happen for another five to 10 years.

Until then, he said, they would have to rely on human reporting.

I’m not suggesting this is something Facebook should deploy to all users — the potential for abuse is too obvious to ignore.

But if you deployed it to a randomly selected group of users, you’d be more likely to come up with results not too tainted by bias — Facebook’s or anyone else’s.

To put it another way: this wouldn’t be a permanent fixture, with “hate speech” hovering under every innocuous food image on the site.

But if it were deployed to some users for a while, and they reported everything they considered hate speech, then you would, with some margin of error, have a pool of user responses to the important question of what hate speech is.

If properly reviewed by human eyes, the feedback of Facebook’s users on a sensitive issue which directly affects them could be invaluable.

For a better example of how this could look, check out Facebook’s infamous two-question survey from earlier this year, which was created so Facebook users could help the site identify trustworthy news sources.

According to a Facebook spokesperson, the test wasn’t for everyone, and you couldn’t opt into it.

It would be run with different sets of people who represented a cross-section of users.

This ensured the site would get a variety of opinions, but not the overwhelming opinion of every last one of its billions of users.

A simple question like this doesn’t necessarily solve anything.

But when the question of what constitutes unlawful and harmful speech affects Facebook’s users, it stands to reason you’d at least ask some of them the question.

* Rachel Kaser is a reporter for The Next Web. She tweets at @rachelkaser.

This article first appeared at thenextweb.com.
