So, the big news this week is that Apple and Google have launched their watered-down “contact-tracing” platforms into the world. Only “contact-tracing” has now become “exposure notification,” and judging by all the evidence to date, they’re not going to be effective enough to make a difference. Worse, they introduce a set of dangerous security risks that have not been addressed. False hopes, false starts.
Let’s clear something up from the start. Contact-tracing is not an opt-in Bluetooth app that might register your proximity to an infected person, under a very limited set of circumstances, including that you’re both fully running the app. No, contact tracing is a manually-intensive, surveillance-heavy, privacy-intrusive process where a combination of brute-force measures and meticulous attention to detail, under the purview of well-trained operatives, roots out the spread of infection.
Ever since the suggestion that the world would follow Singapore’s TraceTogether example and launch Bluetooth proximity apps, experts have been warning about their limited likely efficacy. Singapore’s take-up was little more than 20% of smartphone users, despite an estimated 80% being needed. And, let’s face it, if Singapore can’t deliver the required installs, what hope is there for the U.S., Germany, France and the U.K.?
I have covered the take-up issue in detail before. This was a completely predictable problem, but it seems governments didn’t want to listen as plans were being drawn up. The so-called contact-tracing apps became a life raft, an escape route out of lockdown. If only hundreds of millions of people would just download and install an app and then obey its every command, all would be well.
With the best will in the world, the devil really is in the detail when it comes to this painful efficacy issue. The stark truth is that older and poorer people are unlikely to have the newish smartphones capable of running these apps—exactly the most vulnerable groups we most need to protect. As badly as politicians want this to work, it won’t, it really is as simple as that. Tens of thousands of contact-tracers need to be hired and trained. And we all need to get used to the fact that manual contact-tracing relies on prying into personal information and on intrusive monitoring.
Just look at the U.K. As summed up by the Times, the U.K.’s own app, which isn’t as watered down as the Google and Apple framework, has triggered “nervousness in government about whether people will comply with instructions as lockdown is lifted. Trials of a contact-tracing app have suggested they will only do so if persuaded by a human being rather than an automated message.”
And it gets worse. As ill-conceived as it is to rely on Bluetooth apps to fight a global pandemic, there are detailed security and scope-creep risks that need to be in the open, to be addressed and debated, to be understood. Absent that level of debate, we all risk an out-of-control “one step at a time” erosion of our security and privacy without having the chance to accept and agree to the terms of the Faustian trade-off.
First, let’s deal with the obvious. Data is a highly addictive drug. We have already seen the U.K. and France push back against the decentralized platform being pushed by the Apple and Google tie-up. The opportunity to tinker around the edges of these apps, mining data for patterns and collecting additional information will be hard to resist. The concept of a nation-wide app which collects data in return for citizen benefits—the right to travel and work, perhaps—will catch on.
Second, the deployment of a hyper-scale app at record pace and volumes is an invitation for hackers and cyber attackers to ply their malicious trades. Unfamiliar users combined with a new app that will inevitably ship with vulnerabilities are too good an opportunity for those threat actors to pass up. And the patchwork quilt of phones and firmware versions opens up huge risk.
Just look at the reports this week about so-called Bluetooth impersonation (BIAS) attacks, where devices are tricked into accepting a new connection that has copied the pairing signature of a previous one. With new Bluetooth apps imminent, the timing is awkward for a disclosure that Bluetooth “contains vulnerabilities enabling impersonation attacks during secure connection establishment.” The researchers warn that “any standard-compliant Bluetooth device can be expected to be vulnerable… all devices that we tested were vulnerable to the BIAS attack.”
A Bluetooth attack is complex and sophisticated—that isn’t what millions of you should now worry about. Worry instead about an imminent surge in simplistic, socially engineered text messages and emails that warn that you have been near an infected patient, or that provide links to contact-tracing apps to install, or that provide information on local infections. These are inevitable and will be designed to trick you into installing malware or giving up usernames and passwords.
We also have the reality check as to what is likely to happen next. Governments need to make these digital platforms work, and in their nascent form they won’t. That will leave two options for a second phase. The first is to mandate recording the times specific individuals visit specific locations. Forget privacy, this is powerful and ensures that when a person is infected it is easy to cross-reference their last 14 days with other citizens, warning as necessary. Singapore has already gone this route with its SafeEntry program.
“Deployment will be made mandatory for places where individuals are likely to be in close proximity for prolonged periods or in enclosed spaces,” the government has said, checking-in and checking-out of workplaces and other venues “to help our contact tracers establish cluster links and transmission chains.”
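The cross-referencing that makes a check-in scheme like this so powerful is straightforward to picture. As a minimal sketch, assuming hypothetical record fields and names (this is illustrative, not SafeEntry’s actual schema), an infected person’s venue visits over the last 14 days are simply intersected in time with everyone else’s:

```python
from datetime import datetime, timedelta

# Hypothetical check-in records: (person_id, venue_id, check_in, check_out).
checkins = [
    ("alice", "cafe-12", datetime(2020, 5, 20, 9, 0),  datetime(2020, 5, 20, 10, 0)),
    ("bob",   "cafe-12", datetime(2020, 5, 20, 9, 30), datetime(2020, 5, 20, 11, 0)),
    ("carol", "gym-3",   datetime(2020, 5, 20, 9, 0),  datetime(2020, 5, 20, 10, 0)),
]

def overlapping_contacts(records, infected_id, window_days=14):
    """People whose venue visits overlapped in time with the infected person's."""
    cutoff = max(r[2] for r in records) - timedelta(days=window_days)
    infected_visits = [r for r in records if r[0] == infected_id and r[2] >= cutoff]
    contacts = set()
    for _, venue, start, end in infected_visits:
        for pid, v, s, e in records:
            # Same venue, different person, time ranges intersect.
            if pid != infected_id and v == venue and s < end and e > start:
                contacts.add(pid)
    return contacts

print(overlapping_contacts(checkins, "alice"))  # {'bob'}
```

The privacy cost is baked into the design: this only works because the database holds identified people, identified venues and timestamps in one central place.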
The second option is to join everything together. A digital contact-tracing app becomes a QR-code pass to work and to travel, to visit stores and restaurants. The pass can be automatically revoked if a citizen fails to have a proximity app installed and running at all times, or until they have isolated or been tested in the event a proximity alert is issued. Intrusive and dystopian, yes, but effective.
This leads us to the thorny subject of location tracking. Apple and Google prohibit apps built on their framework from capturing location, and require that the matching data stay on the phones. Meanwhile, health agencies argue that without central data analysis and location fields, that data is seriously diminished—quite apart from issues of take-up and accuracy. It’s one thing to know two people came close, quite another to know it was in a location where dozens of others were at the same time. That’s a potential outbreak, one that can be registered and traced.
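To see why the platforms can insist the data stays on the phone, it helps to sketch how the decentralized model works. This is a simplified illustration, not the actual Apple/Google Exposure Notification cryptography (which uses HKDF and AES rather than the HMAC shortcut below): each phone derives short-lived pseudonymous identifiers from a daily key, broadcasts them over Bluetooth, and records what it overhears; if someone later tests positive, only their daily keys are published, and every other phone re-derives and matches identifiers locally.

```python
import hashlib
import hmac
import os

def rolling_ids(daily_key: bytes, intervals: int = 144) -> set:
    """Derive one pseudonymous identifier per 10-minute interval of the day.
    (Illustrative HMAC derivation, not the real Exposure Notification scheme.)"""
    return {
        hmac.new(daily_key, i.to_bytes(2, "big"), hashlib.sha256).digest()[:16]
        for i in range(intervals)
    }

# Phone A generates a daily key and broadcasts identifiers derived from it.
key_a = os.urandom(16)
broadcast = rolling_ids(key_a)

# Phone B overhears a handful of those identifiers during the day.
overheard = set(list(broadcast)[:5])

# Later, A tests positive and uploads key_a as a "diagnosis key".
# B re-derives A's identifiers locally and checks for an intersection;
# no location, identity or contact graph ever leaves B's phone.
matched = overheard & rolling_ids(key_a)
print(len(matched))  # 5
```

The health agencies’ complaint follows directly from this design: the server only ever sees anonymous daily keys, so there is nothing central to analyze for clusters or locations.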
As I’ve said before, the issue with relying on digital contact-tracing as a life-raft is that it only works if done properly. You can’t be half-pregnant. Governments, and by proxy their citizens, will soon have to decide whether these technologies should be designed to preserve privacy or to fight the virus. They can’t do both. Inevitably, the greatest risks won’t come from the scope of well-planned deployments, but from the gaps in security and effectiveness and accountability, where there is no joined-up planning, where systems evolve without a fixed view on approach and outcomes.
So, should you be worried? Yes, absolutely you should. Not because there are inherent risks to your security and privacy in the theory of what’s being done, but because the execution of deploying these systems is currently confused as it tries to achieve multiple contradictory objectives all at the same time. This will fail, and what we end up with will miss its objectives. That will have consequences for infection rates and will leave us vulnerable and compromised along the way.