Chris Stokel-Walker* says keeping the internet running smoothly during the coronavirus crisis is not just business as usual.
Jamie Goate first started setting up phone connections and broadband links 20 years ago, when he troubleshot dial-up internet connections and routed copper phone wires back to junction boxes.
But he didn’t get his biggest job until about a month ago.
The UK National Health Service (NHS) had called his current employer, Openreach, to help it set up field hospitals for treating COVID-19 patients inside existing conference centres.
“They didn’t really understand what they needed,” says Goate.
“It was just a call to arms.”
Goate arrived on the site within an hour of receiving the call.
Normally, Openreach promises to deliver connectivity within 30 days.
Goate’s team identified and delivered a solution within 15 hours, working around a fast-moving construction site.
It wasn’t a simple job: The team installed more than 1,000 voice over internet protocol (VoIP) telephones and an ultrafast Wi-Fi connection needed to control the ventilators keeping patients alive.
It also created direct-routed connections to two other city-centre hospitals, so that patient data could be accessed at the field hospital over a private network the NHS runs, and installed two separate Ethernet lines to provide high capacity for other services.
“Without the connectivity, the hospital wouldn’t be up and running,” says Goate.
Broadband engineers, installers, and other people who keep the internet working, like Goate, are considered essential workers in many of the stay-at-home orders issued by governments around the world.
They’re a key but often overlooked piece of the team that keeps hospitals online, emergency services connected, supermarket distribution centres viable, and locked-down populations connected.
And all of that requires monumental changes in the infrastructure of our internet.
Akamai powers content delivery networks (CDNs), the technology that lets people watch videos, download software, and visit websites more quickly, for more than 400 banks worldwide and 50 per cent of Fortune 500 companies. It has had to adapt quickly to keep its clients online during the crisis.
The company has 275,000 servers across 136 countries, and in March 2019, it handled an average of 82 terabits of traffic per second across its CDNs.
Twelve months later, traffic had more than doubled to 167 terabits per second.
“We’re trying to stay ahead of the curve and deploy more servers and equipment,” says Christian Kaufmann, Akamai’s head of network technology.
But increasing internet capacity requires shipping hardware from one part of the world to another — and with many countries locked down, it has proven difficult to future-proof data centres with extra servers.
“In some countries, we have the standard of critical infrastructure because banks and retail go over our platforms,” says Kaufmann.
“There, it’s easier for us to deploy. Other countries don’t see it the same way, and it’s harder to deploy.”
The company is confident, though, that the immediate danger of its infrastructure failing has passed.
Things looked grim for a while when China, where many of the components for its server hardware are made, was in lockdown, but that lockdown has since been lifted.
“Thank God, for most parts we had a replacement or spare stock,” says Kaufmann.
Individual services are also feeling the strain of keeping things online.
Netflix said last month that it had quadrupled its streaming capacity with internet service providers, while also reducing the bitrate of its videos after the European Union asked streaming companies to help keep telecommunications networks running efficiently.
YouTube and Amazon Prime have also dropped their streams from the highest definition to cope with increased demand as people spend more time at home in search of entertainment.
Perhaps no company has had to handle a greater strain on its infrastructure than video conferencing software Zoom, whose user base ballooned from 10 million people at the end of December to more than 200 million at the end of March.
The platform is also built in such a way that if there’s a capacity bottleneck at the data centre nearest a user, the traffic will be routed to other nearby centres that are less busy.
Imagine data as a car moving through rush-hour traffic: If there’s an alternative, quicker route, the data will take it.
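In rough terms, that decision amounts to picking the least-busy data centre still within an acceptable latency of the user. The sketch below is only an illustration of the idea, not Zoom’s actual routing logic; the centre names, load figures, and latency budget are invented for the example.

```python
# Hypothetical sketch of capacity-aware routing: pick the least-loaded data
# centre that is still close enough to the user. Not Zoom's real logic; the
# names, loads, and latency budget are invented for illustration.

def pick_data_centre(centres, max_latency_ms=120):
    """Return the nearby centre with the most spare capacity, or None."""
    nearby = [c for c in centres if c["latency_ms"] <= max_latency_ms]
    usable = [c for c in nearby if c["load"] < 1.0]    # skip saturated centres
    if not usable:
        return None                                    # no spare capacity nearby
    return min(usable, key=lambda c: c["load"])        # least-busy centre wins

centres = [
    {"name": "eu-west",    "latency_ms": 25,  "load": 0.97},   # nearest, but nearly full
    {"name": "eu-central", "latency_ms": 40,  "load": 0.62},   # a little farther, spare room
    {"name": "us-east",    "latency_ms": 180, "load": 0.30},   # too far for this call
]

print(pick_data_centre(centres)["name"])  # -> eu-central
```

In the rush-hour analogy, the nearest centre is the congested motorway, and the call takes the slightly longer but clearer route instead.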
Zoom has servers running inside 17 different data centres around the globe.
When demand outstrips that capacity, Zoom can call on tens of thousands of servers provided by Amazon Web Services (AWS) within hours.
Yet the system doesn’t always work perfectly.
In early April, Zoom admitted it had mistakenly driven traffic “under extremely limited circumstances” through data centres in China.
Critics carped at this for much the same reason a user sued TikTok for allegedly transferring her data to China: once data sits within that country’s borders, people worry the government will monitor its contents.
(Users can now specify where they want their data to be routed and where they don’t.)
For years, broadband connections have been seen as a privilege rather than a right, an optional luxury.
Yet with all of us confined to our homes and life carrying on in some form, we are seeing them for what they are: a vital necessity, just like heat, power, and water.
The companies behind the infrastructure that underpins all of these services have been working overtime to keep it running.
“We recognise it’s a testing time for people being alone indoors,” says Goate, the Openreach engineer, “and when you’re vulnerable, you need a connection to the outside world.”
* Chris Stokel-Walker is a freelancer for The Guardian, The Economist, the BBC and others. He tweets at @stokel.
This article first appeared at onezero.medium.com.