27 September 2023

Cryptic enigma: Balancing the ups and downs of encrypted messaging

Casey Newton* says a recent conference highlighted what’s at stake as tech platforms move towards private messaging.


Image: Pete Linforth

Around the world, countries and corporations are rethinking their relationship with encryption.

In the wake of terrorist attacks, legislation in India and Australia has sought to give law enforcement access to encrypted communications, in moves that could threaten the security of encryption around the world.

In the United States, Apple has staked its reputation on protecting encrypted communications even when they belong to terrorists — while Facebook pledged this year to shift the company to private messaging.

The moves have exposed obvious tensions between free speech and safety.

In an effort to move the discussion forward, the Stanford Internet Observatory held a conference earlier this month at which tech platforms, government agencies, nongovernmental organisations, civil rights activists, and academics met to hash it out.

I was among a handful of journalists who attended the event, and I came away mostly encouraged that all sides are determined to find a workable balance — even though it seemed clear that each group would strike that balance somewhat differently.

Government agencies want to maintain what they call “lawful access” to communications when needed for investigations, even if it means hacking into devices.

Civil rights groups (represented at the event by the Electronic Frontier Foundation) warned that law enforcement is building a powerful surveillance operation and is increasingly arguing in court that it shouldn’t need a warrant to snoop on our communications.

Tech platforms want to promote democratic free speech of the variety that produced Black Lives Matter and the #MeToo movement while also helping to catch terrorists and child predators.

And nongovernmental organisations, such as those working to protect exploited children, worry that efforts to protect speech with encryption will make catching predators much harder.

An example from the US National Center for Missing and Exploited Children (NCMEC) drove the point home.

The organisation has long operated a tip-line through which people can report child pornography and other incidents of abuse they come across.

In the late 1990s, the tip-line received 200 to 300 reports per week, said Michelle DeLaune, NCMEC’s Chief Operating Officer.

But as internet adoption grew, and platforms began collaborating with the organisation, reports to the tip-line exploded.

In 2018, NCMEC received more than 18 million reports of exploitative imagery.

Strikingly, 99 per cent of those reports came directly from the tech platforms.

Through the use of artificial intelligence, hashed images, and partnerships between companies, we’re now arguably much better informed about the scope and spread of these images — and are better equipped to catch abusers.
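
To make the hash-matching idea concrete, here is a minimal sketch of how a platform might screen an upload against a database of known image hashes. The function names and the KNOWN_HASHES set are hypothetical, and production systems rely on perceptual hashing (Microsoft’s PhotoDNA is the best-known example) rather than the exact cryptographic hash used here, so that resized or re-encoded copies still match.

    import hashlib
    from pathlib import Path

    # Hypothetical database of hashes of known abusive images, of the
    # kind NCMEC and the platforms share with one another. Real
    # deployments use perceptual hashes that survive resizing and
    # re-compression; an exact cryptographic hash keeps this sketch simple.
    KNOWN_HASHES: set[str] = {
        # Placeholder entry; a real set would come from industry sharing.
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def image_hash(path: Path) -> str:
        """Return the SHA-256 hex digest of the file's raw bytes."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def should_report(path: Path) -> bool:
        """True if an upload matches a known hash and should be routed
        to the tip-line instead of being published."""
        return image_hash(path) in KNOWN_HASHES

The trade-off the conference wrestled with follows directly from this design: matching only works where the service can see the image itself, which end-to-end encryption by definition prevents.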

A Facebook executive said the company bans a whopping 250,000 WhatsApp accounts a month for sharing child exploitation imagery.

And a representative of GCHQ, the UK intelligence agency, said that 2,500 people were arrested in the UK last year as a result of NCMEC reports.

“That’s what we lose if we get this wrong,” said Crispin Robinson, GCHQ’s Technical Director for Cryptanalysis.

Meanwhile, the flip side of this discussion — the potential for government abuse of these tools — is on full display in Hong Kong.

Maciej Ceglowski, the brilliantly acerbic writer-thinker-entrepreneur, recently returned from a month in the city reporting on the protests.

He described how young pro-democracy protesters organise on Telegram, with the largely leaderless movement coordinating via in-app polls.

Curiously, he said, the app has become popular even though its messages are not end-to-end encrypted by default.

But it allows users to find nearby protesters, to speak to thousands of them at once, and to send disappearing messages that make prosecuting them harder if they are arrested — and that has been enough to make it an anchor of the pro-democracy movement.

Ceglowski’s talk underscored a point made throughout the day’s talks: that something can be secure even if it’s not encrypted, and something can be unsafe even if it is.

As with so much in our conversations about technology, security, and democracy, encryption debates can be emotional in a way that undercuts nuance.

Alex Stamos, who came to the Internet Observatory from Facebook and who organised the event, reminded the audience that encryption solves another, growing problem for platforms.

As countries demand that platforms remove more speech from their servers, it becomes desirable for those platforms to take their products out of speech debates entirely.

A company can’t moderate what it can’t see — and so it may increasingly have an incentive not to see it.

There are lots of public-minded reasons a company like Facebook wants to promote encryption — but there are nakedly self-interested ones, too.

No one in the room offered a simple solution for squaring all these circles.

But it was heartening that they came into the room and were willing to have at least some of these discussions in public.

* Casey Newton writes The Interface for The Verge. He tweets at @CaseyNewton.

This article first appeared at www.theverge.com.
