27 September 2023

Toxic chatter: Why the internet trolls have won and there’s little we can do


Brian X. Chen* says that when it comes to limiting the toxicity of online comments and discourse, the average internet user holds only limited power.



Over the last decade, commenting has expanded beyond a box under web articles and videos and into social networking sites like Facebook and Twitter.

That has opened the door to more aggressive bullying, harassment and the ability to spread misinformation — often with difficult real-life consequences.

Case in point: the right-wing conspiracy site Infowars.

For years, the site distributed false information that inspired internet trolls to harass people who were close to victims of the Sandy Hook school shooting.

Last week, after much hemming and hawing about whether to get involved, some giant tech firms banned content from Infowars.

(Twitter did not, after determining Infowars had not violated its policies.)

What does that show us?

That you as an internet user have little power over content you find offensive or harmful online.

It’s the tech companies that hold the cards.

Given the way things are going, our faith in the internet may erode until we distrust it as much as we do TV news, said Zizi Papacharissi, a professor of communication at the University of Illinois at Chicago.

“I think we are the ones who are breaking it, because we never completely learned how to use it,” she said of the internet.

“We break it and we fix it again every day. At some point it will crack.”

Why are internet comments so hopelessly bad, and how do we protect ourselves?

Even though there is no simple fix, there are some steps we can take.

Why are people so toxic online?

There are many theories about why the internet seems to bring out the worst in people.

Ms Papacharissi said that in her 20 years of researching online behaviour, one conclusion has remained consistent: people use the internet to get more of what they do not get enough of in everyday life.

So, while people have been socialised to resist acting on impulse in the real world, on the internet they give in to the temptation to lash out.

“The internet becomes an easy outlet for us to shout something and feel for a moment fulfilled even though we’re really just shouting out into the air,” she said.

This is nothing new, of course.

The internet is simply a more accessible, less moderated space.

Daniel Ha, a founder of Disqus, a popular internet comment tool used by many websites, said the quality of comments varies widely depending on the content and the audience.

For example, there are videos about niche topics, like home improvement, that invite constructive commentary from enthusiasts.

But others, such as a music video from a popular artist or a general news article, draw comments from people all around the world.

That’s when things can get especially unruly.

Comments can be terrible simply because many people are flawed.

It’s up to the content providers and tech platforms to vet their communities and set rules and standards for civilised discussion.

That is an area where many resource-strained news publications fall short: they often leave their comments sections unmoderated, so they become cesspools of toxic behaviour.

It is also an area where tech companies like Facebook and Twitter struggle, because they have long portrayed themselves as neutral platforms that do not wish to take on the editorial roles of traditional publishers.

What about fake comments?

Tech companies have long employed various methods to detect fake comments from bots and spammers.

Yet security researchers have shown there are workarounds to all these methods.

Some hackers are now getting extremely clever about their methods.

When the US Federal Communications Commission was preparing to repeal net neutrality last year, 22 million comments were posted on its site, many of which expressed support for the move.

Jeff Kao, a data scientist, used a machine-learning algorithm to discover that 1.3 million comments were likely fakes posted by bots.

Many comments appeared convincing, with coherent, natural-sounding sentences, but a large share turned out to be near-duplicates of one another, with a few words swapped for synonyms.

“If you read through different comments one by one, it’s hard to tell that some are from the same template,” he said.

“But if you use these machine learning algorithms, you can pick out some of these clusters.”
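For the technically curious, here is a minimal sketch of that clustering idea in Python. It is an illustration of the general approach, not Mr Kao’s actual pipeline, and the sample comments, library choice and threshold are assumptions made for this example: comments built from one template with a few synonym swaps stay close together as TF-IDF vectors, so pairs with high cosine similarity can be grouped into clusters.

# A minimal sketch of template detection, assuming scikit-learn is
# installed. Texts generated from one template with synonym swaps
# remain close in TF-IDF space, so high pairwise cosine similarity
# flags likely clusters. Sample data and threshold are illustrative,
# not Mr Kao's actual data or method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "I strongly support repealing the net neutrality rules.",
    "I firmly support rescinding the net neutrality rules.",
    "I strongly back repealing the net neutrality regulations.",
    "Please keep the open internet protections in place.",
]

# Represent each comment as TF-IDF-weighted word unigrams and bigrams.
vectors = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(comments)
similarity = cosine_similarity(vectors)

# Greedily group comments whose pairwise similarity clears a threshold.
THRESHOLD = 0.5  # illustrative; a real analysis would tune this value
clusters, assigned = [], set()
for i in range(len(comments)):
    if i in assigned:
        continue
    cluster = [i] + [j for j in range(i + 1, len(comments))
                     if j not in assigned and similarity[i, j] > THRESHOLD]
    assigned.update(cluster)
    clusters.append(cluster)

for cluster in clusters:
    if len(cluster) > 1:
        print("Likely same template:", [comments[i] for i in cluster])

Run over a real comment dump, the interesting output is the clusters with many members: near-identical wording across thousands of supposedly independent commenters is the tell.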

What can I do?

For the issue of spoofed comments, there is a fairly simple solution: You can report them to the site’s owner, which will likely analyse and remove the fakes.

Other than that, don’t take web comments at face value.

Mr Kao said he always tries to view comments in a wider context.

Look at a commenter’s history of past posts, or fact-check any dubious claims or endorsements elsewhere on the web.

But for truly offensive comments, the reality is that consumers have very little power to fight them.

Tech companies like YouTube, Facebook and Twitter have published guidelines for what types of comments and material are allowed on their sites, and they provide tools for people to flag and report inappropriate content.

Yet once you report an offensive comment, it is typically up to tech companies to decide whether it threatens your safety or violates a law — and often harassers know exactly how offensive they can be without clearly breaking rules.

Historically, tech companies have been conservative and fickle about removing inappropriate comments, largely to maintain their positions as neutral platforms where people can freely express themselves.

In the case of Infowars, Apple, Google and Facebook banned some content from the conspiracy site after determining it violated their policies.

Twitter’s chief executive, Jack Dorsey, said last week that the company did not suspend the accounts belonging to Infowars because its owner, Alex Jones, did not violate any rules.

“If we succumb and simply react to outside pressure, rather than straightforward principles … we become a service that’s constructed by our personal views that can swing in any direction,” he said in a tweet.

Beyond reporting comments individually, you could also use an online petition tool like Change.org to demand that tech companies remove offensive content.

When publishers and tech companies fail to address inappropriate comments, Ms Papacharissi recommended an exercise in self-discipline.

“Think before you read,” she said.

“Think before you speak.”

“And you don’t always have to respond.”

“A lot of things do not deserve a response.”

“Sometimes not responding is more effective than lashing out.”

* Brian X. Chen is a consumer technology writer and author of the Tech Fix column for The New York Times. He tweets at @bxchen.

This article first appeared at www.nytimes.com.
