27 September 2023

Mind control: How social media is manipulating us


Bryan Menegus* says a US Senate hearing has been told just how dangerous the addictive qualities of social media platforms are becoming.



At a preliminary US Senate hearing last week on whether to place legislative limits on the persuasiveness of technology (a diplomatic way of describing the addiction model the internet uses to keep people engaged and clicking), Tristan Harris, Executive Director of the Center for Humane Technology, told lawmakers that while rules are important, public awareness has to come first.

Algorithms and machine learning are terrifying, confusing, and somehow also boring to think about.

However, “one thing I have learned is that if you tell people ‘this is bad for you’, they won’t listen,” Harris stated.

“If you tell people ‘this is how you’re being manipulated,’ no one wants to feel manipulated.”

According to Harris: “It starts with techniques like ‘pull to refresh’, so you pull to refresh your newsfeed.”

“That operates like a slot machine.”

“It has the same kind of addictive qualities that keep people in Las Vegas hooked.”

“Other examples are removing stopping cues.”

“So if I take the bottom out of this glass and I keep refilling the water or the wine, you won’t know when to stop drinking.”

“That’s what happens with infinitely scrolling feeds.”
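For readers wondering what "removing stopping cues" looks like under the hood, here is a minimal, purely illustrative sketch (not any platform's actual code, and the helper names are invented): a feed that simply fetches another page of recommendations every time it is asked, so there is never a natural place to stop.

```python
import itertools

def fetch_more(page, page_size=10):
    """Stand-in for a recommendation service: it always has another page ready."""
    start = page * page_size
    return [f"recommended_post_{i}" for i in range(start, start + page_size)]

def infinite_feed():
    """A feed with no stopping cue: every request for more items is satisfied."""
    for page in itertools.count():      # counts upward forever
        for item in fetch_more(page):
            yield item

# Simulate a reader scrolling 25 items deep; the feed itself never ends.
for post in itertools.islice(infinite_feed(), 25):
    print(post)
```

The only end point is the one the reader imposes on themselves, which is exactly the point of Harris's bottomless glass.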

Of course, the addictive qualities of platforms are also a result of their so-called network effects, where they grow in power exponentially based on how many people are already on them.

With the introduction of likes and followers, Harris said, “it was much cheaper, instead of getting your attention, to get you addicted to getting attention from other people”.

He said that’s why “there’s a follow button on each profile”, to make you “come back every day because you want to see ‘do I have more followers than I did yesterday?’”.

The longer you spend in these ecosystems, the more machine learning systems can optimise themselves against user preferences.

There are quite a few negative externalities associated with solving for engagement alone.
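To make "solving for engagement alone" concrete, here is a toy sketch: a ranker that orders candidate posts purely by a predicted engagement score, with no other objective in sight. The post titles and probabilities below are invented for illustration; real systems learn these scores from behavioural data at enormous scale.

```python
# Hypothetical candidate posts with a model's predicted engagement probability.
# In a real system these scores would come from a learned model; here they are
# hard-coded to show how a single-metric objective shapes what users see.
candidates = [
    {"title": "Measured policy explainer",        "p_engage": 0.04},
    {"title": "Outraged hot take on same policy", "p_engage": 0.19},
    {"title": "Conspiracy-tinged reaction video", "p_engage": 0.23},
    {"title": "Local news summary",               "p_engage": 0.03},
]

def rank_for_engagement(posts):
    """Order posts purely by predicted engagement, the only metric optimised."""
    return sorted(posts, key=lambda p: p["p_engage"], reverse=True)

for post in rank_for_engagement(candidates):
    print(f'{post["p_engage"]:.2f}  {post["title"]}')
```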

“It’s calculating what is the thing it can show you that gets the most engagement, and it turns out that outrage, moral outrage, gets the most engagement,” Harris said.

“It was found in a study that for every word of moral outrage you add to a tweet it increases your retweet rate by 17 per cent.”

“In other words, the polarisation of our society is actually part of the business model.”

“As recently as just a month ago on YouTube, if you did a map of the top 15 most frequently mentioned verbs or keywords on the recommended videos, they were: hates, debunks, obliterates, destroys.”

“That kind of thing is the background radiation that we’re dosing two billion people with.”

“If you imagine a spectrum on YouTube,” Harris told the Committee.

“On my left side there’s the calm, Walter Cronkite section of YouTube.”

“On my right side there’s crazy town: UFOs, conspiracy theories, Bigfoot, whatever.”

“If I’m YouTube and I want you to watch more, which direction am I going to send you?”

“I’m never going to send you to the calm section; I’m always going to send you towards crazy town.”

“So now you imagine two billion people, like an ant colony of humanity, and it’s tilting the playing field towards the crazy stuff.”

Worse yet, it works staggeringly well.

YouTube, in particular, gets around 70 per cent of its traffic from recommendations, which are powered by these sorts of algorithms.

And skewing the information people receive is catastrophically bad in and of itself.

The implementation of algorithms isn’t limited to tweets and YouTube videos.

As AI Now Institute policy research director Rashida Richardson told the Committee, the problem with these systems is they’re based on datasets that reflect our current conditions, which means they also reflect any current imbalances.

“Amazon’s hiring algorithm was found to have gender-disparate outcomes, and that’s because it was learning from prior hiring practices,” Richardson said.

“They are primarily developed and deployed by a few power[ful] companies and therefore shaped by these companies’ values, incentives, and interests.”

“While most technology companies promise that their products will lead to broad societal benefits, there’s little evidence to support these claims and in fact mounting evidence points to the contrary.”
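Richardson's point about datasets reflecting existing imbalances is easy to demonstrate with a toy model. The sketch below is entirely synthetic and is not Amazon's system: a naive scorer that "learns" word weights from past hiring outcomes simply reproduces whatever skew those outcomes contained.

```python
from collections import Counter

# Entirely synthetic "historical" hiring data in which past decisions skewed
# against resumes mentioning a women's organisation.
history = [
    ("captain chess club",            "hired"),
    ("captain debate team",           "hired"),
    ("captain women's chess club",    "rejected"),
    ("member women's coding society", "rejected"),
    ("member coding society",         "hired"),
]

def learn_weights(examples):
    """Naive 'learning': each word scores +1 per past hire and -1 per rejection."""
    weights = Counter()
    for text, outcome in examples:
        for word in text.split():
            weights[word] += 1 if outcome == "hired" else -1
    return weights

def score(resume, weights):
    return sum(weights[w] for w in resume.split())

weights = learn_weights(history)

# Two new resumes that differ only by the word "women's": the learned weights
# reproduce the historical skew rather than judging the candidates equally.
print(score("captain coding society", weights))          # higher score
print(score("captain women's coding society", weights))  # lower score
```

Nothing in the code mentions gender explicitly, yet the historical skew passes straight through into the new decisions, which is the dynamic Richardson described.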

Abandoning these systems is made as difficult as possible, too.

Harris gave the example of Facebook: “If you say ‘I want to delete my Facebook account’ it puts up a screen that says ‘are you sure you want to delete your Facebook account, the following friends will miss you’ and it puts up faces of certain friends.”

“Now, am I asking to know which friends will miss me? No.”

“Does Facebook ask those friends are they going to miss me if I leave? No.”

“They’re calculating which are the five faces that are most likely to get you to hit ‘cancel’.”
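What Harris is describing is, in effect, a selection over predicted persuasion. A minimal sketch (with invented names and probabilities, not Facebook's actual model) might look like this:

```python
import heapq

# Hypothetical per-friend predictions of "probability the user hits 'cancel'
# if shown this friend's face"; in a real system these would come from a model.
p_cancel_if_shown = {
    "Alice": 0.31, "Bob": 0.12, "Chen": 0.44, "Dana": 0.27,
    "Eli": 0.08, "Fatima": 0.39, "Greg": 0.22, "Hana": 0.35,
}

# The screen is not asking which friends would miss you; it keeps the five
# faces predicted to be most persuasive.
top_five = heapq.nlargest(5, p_cancel_if_shown, key=p_cancel_if_shown.get)
print(top_five)  # ['Chen', 'Fatima', 'Hana', 'Alice', 'Dana']
```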

And according to Harris, much of what powers these systems does not even require the arguably uninformed consent we give when our data is collected; it can simply be inferred.

“Without any of your data I can predict increasing features about you using AI,” Harris said.

“All I have to do is look at your mouse movements and click patterns.”

“Based on tweet text alone we can know your political affiliation with about 80 per cent accuracy.”

“[A] computer can calculate that you’re homosexual before you might know you’re homosexual.”
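The kind of inference Harris describes, predicting traits from text alone, can be sketched in a few lines. The example below is deliberately crude and entirely invented: real studies train classifiers on large labelled corpora, but the principle of scoring text against learned signals is the same.

```python
# Crude trait inference from text alone: count how many words in a post appear
# in small vocabularies associated with each side. The word lists and example
# posts are invented purely for illustration.
vocab = {
    "side_a": {"healthcare", "climate", "unions", "equality"},
    "side_b": {"taxes", "border", "liberty", "deregulation"},
}

def predict_affiliation(text):
    words = set(text.lower().split())
    scores = {side: len(words & terms) for side, terms in vocab.items()}
    return max(scores, key=scores.get), scores

print(predict_affiliation("Universal healthcare and climate action now"))
print(predict_affiliation("Lower taxes and secure the border"))
```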

Summarising how bad things have become in stark terms, Harris said: “Imagine a world in which priests only make their money by selling access to the confession booth to someone else, except in this case Facebook listens to two billion peoples’ confessions, has a supercomputer next to them, and is calculating and predicting confessions you’re going to make before you know you’re going to make them.”

There was one other quote that was particularly striking from last week’s hearing, but it didn’t come from any doomsaying tech expert.

It came from Montana Senator Jon Tester, addressing the witnesses: “I’ll probably be dead and gone and probably be thankful for it when all this shit comes to fruition.”

While Harris, Richardson and others may be able to rattle off dozens of examples of massive abuse of user trust and negative impacts of these technologies, it speaks volumes that a sitting Senator would prefer death to the future we’re currently building.

* Bryan Menegus is a senior writer for Gizmodo. He tweets at @BryanDisagrees.

This article first appeared at www.gizmodo.com.au.
