Jeff Bercovici* says the power to influence users makes Facebook a tool for mass population control, and CEO Mark Zuckerberg’s utopian goals only increase the danger.
Mark Zuckerberg’s marathon date with the US Congress is over.
It might as well never have happened.
For two days, Facebook’s CEO stuck to his bland talking points, claimed a degree of ignorance about his company’s operations that strained credulity, and made sure to preface every answer with “Senator” or “Congresswoman.”
And for two days, no one landed a punch on him, for all the predictable reasons.
Politicians eager to appear tough cut off Zuckerberg’s answers, gifting him the opportunity to keep his mouth shut.
They asked questions that felt like gotchas but weren’t, like Senator Dick Durbin’s grandstanding demand for the name of the hotel Zuckerberg was staying in.
They betrayed a deep ignorance of how Facebook, and the internet in general, works, inviting Zuckerberg to deploy set-piece responses from deep within his comfort zone.
(“Senator, actually, Facebook doesn’t sell anyone’s data …”)
And that’s when they weren’t breaking character to thank Facebook for making so much money off its users and to deplore their own jobs as regulators.
But there’s a deeper reason no one on Capitol Hill was able to call out Facebook in a way that stuck.
No one there truly understands what makes Facebook, among all the big tech companies, uniquely dangerous.
Almost no one anywhere does.
Understandably, in the wake of revelations that a shady firm working for Donald Trump’s presidential campaign got backdoor access to as many as 87 million Facebook profiles, much of the questioning focused on Facebook’s privacy policies.
But Facebook isn’t alone in vacuuming up all the data on consumers it can legally get its hands on.
Google amasses intel about its users at least as sensitive as anything Facebook knows, and considerably more of it.
(When someone wants to know if that weird sore might be from an STD, that’s a query for a search window, not a status update.)
Apple can guess where you’re going before you even get in your car by tracking your iPhone’s location.
Amazon can literally hear what’s going on in your living room — or your bedroom, if you have an Echo smart speaker in there.
What sets Facebook apart isn’t the data it collects.
It’s that it collects all this data on its users while simultaneously seeking to influence their behaviour.
No other company is doing both of these things on a scale remotely close to Facebook’s, and mixing the two is like combining nitric acid and glycerol: the result, nitroglycerine, is hard to handle safely.
Every product changes its users’ behaviour in some way; that’s practically what it means to be a product.
But while Facebook is very good at engineering behaviours that are good for its business — adding more friends, sharing more information with them, spending more time interacting with their content — it doesn’t stop there.
The company has also induced users who otherwise wouldn’t have voted to do so.
It got people to become organ donors.
Now it’s trying to get people to become more active in their local organisations and support their local newspapers.
None of these things sounds terribly sinister.
Rather, they’re expressions of a vaguely utopian worldview that infects much of Zuckerberg’s thinking.
Because he thinks “human nature is basically positive,” if more people express their ideas or vote or volunteer, the results will ipso facto be basically positive.
But Zuckerberg himself has said being too “focused on the positive” for the first 10 years of Facebook’s existence blinded the company to much of the abuse it was enabling as well as to emergent effects of social media like hyper-polarisation.
And Facebook doesn’t even stop at modifying behaviour.
It tinkers with users’ thoughts and emotions as well.
A notorious “emotional manipulation” study showed that the company could make users feel better or worse by altering the contents of their News Feeds.
Recently, Facebook announced it would tweak its algorithms to encourage more “meaningful interactions” between friends, because those interactions leave users feeling better, whereas passive content consumption leaves them feeling worse afterward.
Francois Chollet, a computer scientist who works on deep learning at Google, believes Facebook’s ability to both measure and alter its users’ behaviour is dangerous, raising the spectre of “mass population control.”
That’s because machine learning, an area in which both Google and Facebook have invested heavily and which Facebook uses to fine-tune the content of each user’s News Feed, is highly effective at connecting inputs and outputs in a recursive optimisation loop.
“The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume,” Chollet tweeted.
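To see the shape of the loop Chollet is describing, consider a deliberately crude sketch: a hypothetical feed that predicts which topic will engage a user most, shows it, measures the reaction, and folds that measurement back into its next prediction. The topics, engagement rates and update rule below are all invented for illustration; this is a toy epsilon-greedy bandit, nothing resembling Facebook’s actual systems.

```python
import random

TOPICS = ["sports", "politics", "pets", "outrage"]

# Simulated user: engages most reliably with provocative content.
# These rates are invented purely for illustration.
ENGAGEMENT_RATE = {"sports": 0.3, "politics": 0.4, "pets": 0.5, "outrage": 0.7}

def user_reaction(topic):
    """Measure the 'input': did the user engage with what was shown?"""
    return 1.0 if random.random() < ENGAGEMENT_RATE[topic] else 0.0

def run_feed(rounds=2000, learning_rate=0.05, explore=0.1):
    # The model starts indifferent: every topic looks equally engaging.
    weights = {t: 0.5 for t in TOPICS}
    for _ in range(rounds):
        # Output: usually show whatever the model predicts will engage most,
        # occasionally trying something else (epsilon-greedy exploration).
        if random.random() < explore:
            shown = random.choice(TOPICS)
        else:
            shown = max(TOPICS, key=weights.get)
        # Input: measure the user's reaction to what was shown.
        reward = user_reaction(shown)
        # Recursion: fold the measurement back into the model, which in turn
        # decides what the user sees on the next pass through the loop.
        weights[shown] += learning_rate * (reward - weights[shown])
    return weights

print(run_feed())  # the weights drift towards the most provocative topic
```

Run it a few times and the loop reliably converges on “outrage”, not because anyone asked it to, but because that is what the measurements reward. Scale the same dynamic up to two billion users and state-of-the-art models, and Chollet’s worry becomes easier to picture.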
Given Zuckerberg’s fuzzy good intentions, there’s no reason to think he’d ever dream of using power like this for anything other than good.
That’s scary enough.
We’ve already seen how a technology intended to make the world “more open and connected” instead divided it up into hostile tribes.
Whatever laws Congress does or doesn’t pass in the wake of Zuckerberg’s testimony, he’ll remain subject to the law of unintended consequences.
The only way to diminish the danger is to diminish the power.
* Jeff Bercovici is San Francisco Bureau Chief for Inc. He tweets at @jeffbercovici.
This article first appeared at www.inc.com.