26 September 2023

Tweet sorrow: How Twitter hopes its users will play by the rules


Jonathan Vanian reveals how Twitter hopes that showing users its rules about proper behaviour will lead to less harassment and fewer vile comments.


By Jonathan Vanian*


Twitter is testing a decidedly simple strategy to stop widespread bullying and abusive comments on its service — showing users its rules.

Twitter CEO Jack Dorsey said earlier this month that Twitter was trying the tactic to “diminish abuse,” which has plagued Twitter’s users for years despite the company’s repeated promises to clean up the problem.

The new study will be led by outside researchers including the Dangerous Speech Project, a research group that studies how public speech can incite harmful and even violent behaviour.

In a post announcing the new test, Susan Benesch and J. Nathan Matias, academic researchers who will help conduct the study, explained that it will likely involve showing Twitter users as-yet-unspecified rules about proper behaviour, in the hope of reducing harassment and the number of vile comments.

“Social norms, which are people’s beliefs about what institutions and other people consider acceptable behaviour, powerfully influence what people do and don’t do,” the pair wrote.

As an example, they cited outside research and “early evidence” from a previous study Matias conducted on the online message board Reddit, which involved showing readers of Reddit’s “r/science” forum rules for commenting.

Some of those included warnings like “no abusive, offensive, or spam comments,” and “no personal anecdotes.”

Matias’s Reddit study, conducted over 29 days during which he and Reddit moderators screened 2,214 discussions on the science forum, found that when the rules were made easy to see (via “sticky comments,” as they are called on Reddit), users were 7.3 per cent “more likely to follow the rules.”

Still, Matias concedes in his Reddit study that many unknowns remain.

For example, it’s unclear whether posting rules to other Reddit forums, such as those devoted to politics or video games, which can be breeding grounds for controversial comments, would produce similarly positive results.

Nevertheless, it seems that Twitter will try something similar to the Reddit study.

The researchers said they wanted to announce the study before it begins because they want “to be as transparent as possible.”

However, they declined to share the details about the study because doing so may jeopardise “the integrity of the results.”

Twitter did not say when the study will conclude, but the researchers said they would publish their findings and post their methodology on a public research website so that others can try to replicate it to confirm their results.

As for the belief that Twitter should simply delete the accounts of people who continually post racist, abusive, or otherwise hostile comments, the researchers said doing so wouldn’t stop others from posting similar hateful comments.

“It’s like recalling unsafe foods without preventing new food from being contaminated and sold — or towing away crashed cars without trying to make new cars safer,” the researchers wrote.

“Abuse can’t be solved by any single method, since it’s posted in so many forms, by a wide variety of people, and for many reasons.”

* Jonathan Vanian is an editor and writer. He tweets at @JonathanVanian.

This article first appeared at fortune.com.
