27 September 2023

Moderate success: How an automatic moderator is cleaning up the net


Tanya Basu* says that, although flawed, Reddit’s automoderator is a sign of the future for dealing with violent and offensive material on the internet.


Photo: tommaso79

For the past four years, Shagun Jhaver has moderated several subreddits, diligently scrolling through pages and blocking posts that violate community rules or are outright offensive.

A PhD student at Georgia Tech whose research focuses on content moderation, Jhaver wondered if an automatic moderator could save him time and spare him the mental toll of sifting through psychologically draining content.

So along with three colleagues, he set out to figure out if an automatic moderator – in this case, AutoMod – actually worked.

The team personally moderated several pages on Reddit and then conducted interviews with 16 other moderators of some of the most popular subreddits on the site – including r/photoshopbattles, r/space, r/explainlikeimfive, r/oddlysatisfying, and r/politics, each of which has millions of subscribers.

All rely on AutoMod to help them moderate.

Social-media platforms like Facebook, Instagram, and YouTube have long relied on human moderators to manually comb through content and remove violent and offensive material that ranges from racist and sexist hate speech to graphic video of mass shootings.

Often working on contract, at minimum wage with few benefits, moderators can find themselves pulling long hours while being pummelled with content that takes a serious toll on their mental health.

Automoderators are an attempt to mitigate the tedium and negative effects of such work.

Developed by Redditor Chad Birch as a way to augment his ability to moderate the r/gaming subreddit, AutoMod is a rule-based tool for identifying words that violate a particular page’s posting policies.

It’s since gone into wide use – Reddit adopted it sitewide in 2015, and the hugely popular game-streaming platform Twitch and chat service Discord followed suit soon after.

Whether AutoMod is actually a time-saver is questionable, though.

On the one hand, automoderators are very good at what they do – if they’re programmed to find swear words, they will find and block posts that contain them without fail.
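The kind of rule-based matching described here can be sketched in a few lines. This is an illustrative example only – it is not Reddit’s actual AutoMod code, and the word list is hypothetical:

```python
import re

# Hypothetical banned-word list; a real community would supply its own.
BANNED_TERMS = ["badword1", "badword2"]

# One case-insensitive pattern with word boundaries, so that a banned
# term is only matched as a whole word, not inside another word.
PATTERN = re.compile(
    r"\b(" + "|".join(re.escape(t) for t in BANNED_TERMS) + r")\b",
    re.IGNORECASE,
)

def should_remove(post_text: str) -> bool:
    """Return True if the post contains any banned term."""
    return PATTERN.search(post_text) is not None
```

Exact keyword rules like this are what make automoderators both reliable (they never miss a listed word) and brittle (they know nothing about context).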

They can send notifications to posters about problematic content, which Jhaver says is “educational,” in that authors can learn what was wrong with whatever they posted.

That’s not a small feat.

As Jhaver and his colleagues note, about 22 per cent of all submissions on Reddit between March and October 2018 were removed.

That comes out to about 17.4 million posts in that period.

But let’s say the word is important for context in the post – a discussion in 2016 of soon-to-be US President Donald Trump’s infamous comment about grabbing a woman’s genitals, for example.

Such posts would get flagged because of the offensive language, even though discussing that language is the point of the post in the first place.

Jhaver says this frustrates users, who have to go back and ask moderators to reinstate the post.

And in a social media world where troubling content increasingly consists of offensive memes, live streams of shootings, or other visual, textless content, AutoMod’s reliance on finding keywords is a big liability.

Robert Peck, a moderator for the large subreddits r/pics and r/aww, knows this all too well.

Each of those pages is image driven, and each has millions of followers posting far more content than anyone could be reasonably asked to sift through.

Still, he says that even though it cannot analyse images, AutoMod has made his work easier.

“Users add descriptors to images directly, and we can check those titles,” he says. “We look for account fattening, or spam accounts that automate their posts – they often use parentheses. We can tell AutoMod to look for those patterns.”
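Peck’s parentheses example can be sketched as a simple title rule. The pattern below is hypothetical – the real AutoMod configurations for r/pics and r/aww aren’t public – but it shows the idea of flagging bot-like parenthesised tags in titles:

```python
import re

# Hypothetical rule: flag titles where a poster appends a parenthesised
# promotional tag, e.g. "Cute puppy (follow my page)".
SPAM_TAG = re.compile(
    r"\((?:follow|check out|link in bio)[^)]*\)",
    re.IGNORECASE,
)

def title_looks_spammy(title: str) -> bool:
    """Return True if the title carries a parenthesised promo tag."""
    return SPAM_TAG.search(title) is not None
```

Because titles are plain text, rules like this let a keyword-based tool police image-driven subreddits it could never inspect directly.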

Like it or not, AutoMod and its ilk are the future of social platform moderating.

It will probably always be imperfect, because machines are still a long way from truly understanding human language.

But this is what automation is supposed to be all about: saving people time on tedious or objectionable tasks.

Being able to concentrate on posts that require a human touch makes a moderator’s job that much more valuable and allows both moderators and posters to focus on having better conversations.

It won’t solve the problem of people posting nasty, malicious, or otherwise deleterious content – that will still be one of the thorniest problems afflicting the modern internet.

But it is making a difference.

Peck says he’s grateful for AutoMod’s ability to help him “batch process” posts.

“It’s a powerful piece of technology and quite user friendly – nowhere near the difficulty of programming an equivalent bot,” he says.

“[AutoMod] is my most powerful tool, and I’d be lost without it.”

* Tanya Basu is a senior reporter on humans and technology for Technology Review. She tweets at @tanyabasu.

This article first appeared at www.technologyreview.com
