27 September 2023

Beyond the breach: Why Google’s failing wasn’t the crime, but the cover-up


Russell Brandom* says the backlash against the bug that has killed off Google+ is less about a breach of information than a breach of trust with users.

For months, Google has been trying to stay out of the way of the growing tech backlash, but last week, the dam finally broke with news of a bug in the rarely used Google+ network that exposed the private information of as many as 500,000 users.

Google found and fixed the bug back in March, around the same time the Cambridge Analytica story was heating up in earnest.

But with the news only breaking now, the damage is already spreading.

The consumer version of Google+ is shutting down, privacy regulators in Germany and the US are already looking into possible legal action, and former US Securities and Exchange Commission (SEC) officials are publicly speculating about what Google may have done wrong.

The vulnerability itself seems to have been relatively small in scope.

The heart of the problem was a specific developer API (Application Programming Interface) that could be used to see non-public profile information.

But crucially, there’s no evidence that it actually was used to see private data, and given the thin user base, it’s not clear how much non-public data there really was to see.

The API was theoretically accessible to anyone who asked, but only 432 people actually applied for access (again, it’s Google+), so it’s plausible that none of them ever thought of using it this way.
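For readers curious about the mechanics, the sketch below (in Python, against a made-up REST endpoint) shows roughly what this kind of developer access looks like. The URL, field names, and token here are illustrative assumptions, not Google’s actual Google+ API; the comments mark where the reported filtering bug would have come in.

import requests

# Hypothetical endpoint and OAuth token, for illustration only.
API_URL = "https://example.googleapis.com/v1/people/{user_id}"
ACCESS_TOKEN = "developer-oauth-token"

def fetch_profile(user_id: str) -> dict:
    # Request a user's profile the way a third-party app with API
    # access would: an authenticated GET returning profile fields.
    response = requests.get(
        API_URL.format(user_id=user_id),
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

profile = fetch_profile("some-user-id")

# Intended behaviour: the response includes only fields the user made
# public. The reported bug: optional fields such as these could come
# back even when the user had marked them non-public.
for field in ("displayName", "emails", "occupation", "gender", "ageRange"):
    print(field, profile.get(field))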

The bigger problem for Google isn’t the crime, but the cover-up.

The vulnerability was fixed in March, but Google didn’t come clean until seven months later when The Wall Street Journal got hold of some of the memos discussing the bug.

The company seems to know it messed up — why else nuke an entire social network off the map? — but there’s real confusion about exactly what went wrong and when, a confusion that plays into deeper issues in how tech deals with this kind of privacy slip.

Part of the disconnect comes from the fact that, legally, Google is in the clear.

There are lots of laws about reporting breaches — primarily the EU’s General Data Protection Regulation (GDPR) and various laws in different countries — but by that standard, what happened to Google+ wasn’t technically a breach.

Those laws are concerned with unauthorised access to user information, codifying the basic idea that if someone steals your credit card or phone number, you have a right to know about it.

But Google just found that data was available to developers, not that any data was actually taken.

With no clear data stolen, Google had no legal reporting requirements.

As far as the lawyers were concerned, it wasn’t a breach, and quietly fixing the problem was good enough.

There is a real case against disclosing this kind of bug, although it’s not quite as convincing in retrospect.

All systems have vulnerabilities, so the only good security strategy is to be constantly finding and fixing them.

As a result, the most secure software will be the one that’s discovering and patching the most bugs, even if that might seem counterintuitive from the outside.

Requiring companies to publicly report each bug could be a perverse incentive, punishing the products that do the most to protect their users.

(Of course, Google has been aggressively disclosing other companies’ bugs for years under Project Zero, which is part of why critics are so eager to jump on the apparent hypocrisy. But the Project Zero crew would tell you that third-party reporting is a completely different dance, with disclosure typically used as an incentive for patching and as a reward for white-hat bug-hunters looking to build their reputation.)

That logic makes more sense for software bugs than for social networks and privacy issues, but it’s accepted wisdom in the cybersecurity world, and it’s not a stretch to say it guided Google’s thinking in keeping this bug under wraps.

But after Facebook’s painful fall from grace, the legal and the cybersecurity arguments seem almost beside the point.

The contract between tech companies and their users feels more fragile than ever, and stories like this one stretch it even thinner.

The concern is less about a breach of information than a breach of trust.

Something went wrong, and Google didn’t tell anyone.

Absent the Journal’s reporting, it’s not clear it ever would have.

It’s hard to avoid the uncomfortable, unanswerable question: what else isn’t it telling us?

It’s too early to say whether Google will face a real backlash for this.

If anything, the small number of affected users and relative unimportance of Google+ suggest it won’t.

But even if this vulnerability was minor, failures like this pose a real threat to users and a real danger to the companies they trust.

The confusion about what to call it — a bug, a breach, a vulnerability — covers up a deeper confusion about what companies actually owe their users, when a privacy failure is meaningful, and how much control we really have.

These are crucial questions for this era of tech, and if the past week is any indication, they’re questions the industry is still figuring out.

* Russell Brandom is Policy Editor at The Verge. He tweets at @russellbrandom.

This article first appeared at www.theverge.com.
