The government’s #NoPlaceToHide campaign (I’ve intentionally not included a URL) got me thinking:
if end to end encryption on popular messaging services was prohibited, how many users would be left vulnerable, with #NoPlaceToHide?
The answer is pretty much everyone, from a range of different threats.
(Here’s a primer on encryption, if you’re not sure where to start.)
Reasons why a lack of end to end encryption could leave you with #NoPlaceToHide
Here’s a non-exhaustive list of threats you might face if you cannot use end to end encryption, leaving you with #NoPlaceToHide.
You want to protect your messages from the communications provider, which has access to message content
Without end to end encryption, there is likely to be a time when your communications provider can, or could, access the content of those messages.
You are reliant on the organisation’s security controls to prevent unauthorised access.
But what about staff who have legitimate access to message content, perhaps to check systems are working?
What if that member of staff is your abuser, or your stalker? Or if your abuser or stalker befriends staff with access, and persuades them to take a peek on their behalf?
What if it were someone from a misbehaving government, with a (literal or figurative) gun to the head of the operator’s staff?
You have #NoPlaceToHide.
End to end encryption mitigates that risk, and also protects the safety of the staff of providers from threats of imprisonment or risks to their safety.
You are worried about the communications provider, or a government, restricting what messages you can send / what photos you can share.
Perhaps you’re a sex worker, and you suffer routine discrimination by platforms which are all too worried about offending the mighty power of payment processors.
If your content is not end to end encrypted, a platform could filter out messages it does not like (e.g. keyword filtering), or attempt to detect and suppress images containing nudity.
If you rely on that platform for your business, or just for your entertainment, that’s a real risk (PDF).
What if you’re not a sex worker? Is it that hard to imagine that some governments might want messages relating to particular events to be unsendable? Or want content forwarded to a government agency for its appraisal?
That’s only possible if the provider has access to the content.
Without end to end encryption, you have #NoPlaceToHide.
End to end encryption could force a censoring provider / government to interfere with your device - for example, by putting detection rules in the app you run on your phone - if it wanted to suppress content, or share it with a government review bureau.
This is known as “client-side scanning” and there’s an excellent (scholarly) paper explaining why this is a bad idea. It is also much easier to detect than network-level content suppression.
You are worried about the communications provider, or a government body, modifying the content of your communications.
End to end encryption preserves the integrity of communications, as well as their confidentiality.
Imagine being the target of a hostile government, which is willing to modify the content of messages sent to you - perhaps to persuade you to go to the wrong place, or to link to malware which compromises your device.
You have #NoPlaceToHide.
Because of the integrity protection of end to end encryption, modifying message content on the fly is either not possible, or much, much harder. (Again, it could be done on the app on your device; again, this is easier to detect.)
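To illustrate the point in a highly simplified way: authenticated encryption schemes attach an integrity tag to each message, and any on-path modification causes verification to fail. Here’s a toy sketch using Python’s standard library HMAC, standing in for the authenticated encryption a real end to end encrypted protocol would use - the key handling and message names are illustrative only, not any real protocol.

```python
import hashlib
import hmac

# Illustrative only: real protocols derive fresh per-message keys.
SECRET_KEY = b"shared-session-key"

def seal(message: bytes, key: bytes = SECRET_KEY) -> tuple[bytes, bytes]:
    """Attach an integrity tag (MAC) to a message before sending."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message, tag

def verify(message: bytes, tag: bytes, key: bytes = SECRET_KEY) -> bool:
    """Recompute the tag on receipt; any modification in transit makes it fail."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = seal(b"Meet me at the north entrance")
assert verify(msg, tag)                       # untouched message verifies
assert not verify(b"Meet me at the south entrance", tag)  # tampering is detected
```

An interfering party without the key cannot produce a valid tag for a modified message, which is why on-the-fly modification has to move to the endpoint (the app) instead.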
The impact of a data breach involving end to end encrypted data is much less than a breach involving unencrypted data.
If you are messaging using a platform which does not offer end to end encryption, the platform could store the content of communications (e.g. for delivery when the recipient’s device comes back online) with encryption which could be removed, or indeed without encryption (an unencrypted messaging store).
You’re reliant on the provider’s own platform security, and some adversaries can be very determined.
If the content is not end to end encrypted, and the platform suffers a data breach, the unencrypted messaging store could be rich pickings for a wide range of criminals, voyeurs, and abusers.
You have #NoPlaceToHide.
If the content is end to end encrypted, the same data breach could have a markedly different impact.
Put another way, why would a platform want the additional risk, cost, and compliance challenges of securing unencrypted content if it can mitigate the risk effectively by introducing end to end encryption?
Things end to end encryption does not prohibit
The government’s campaign website is rather light on detail.
In fact, there’s virtually none.
It doesn’t list any of the risks above.
Wouldn’t it be preferable if a government-funded campaign presented an accurate and complete picture?
User-led reporting to their provider
If a provider incorporates user-led reporting into their software, then any user who receives something problematic can, at the press of a button, report that content.
Because the user has decrypted the content, they are able to report it. End to end encryption does not get in the way of this.
User-led reporting to parents, law enforcement etc
Along the same lines as the previous option, because a user has decrypted the content, they can share it however they wish. If they want to talk to their parents about what they’ve been sent, they can, and they can show their parents the messages / photos.
If they want to share it with law enforcement, they can.
Gathering evidence from devices to support prosecutions
If law enforcement has seized a suspect’s device, or has the device of a victim, they can extract content from it.
Since end to end encryption only protects content in transit, once a message has been received and decrypted, agencies can obtain it from the recipient’s device.
There’s established capability for doing exactly this.
There might be other problems doing this - for example, encrypted storage on the device - but this is an existing problem, not linked to end to end encryption. And there are long standing laws to attempt to deal with this.
Further, under English law, the product of interception cannot be used as evidence. Law enforcement agencies need to obtain evidence in some other way. The position is the same here whether the content is exchanged in plaintext, with end-to-middle-to-end encryption, or end to end encryption.
User-led client-side scanning
If a user wants to run client-side scanning software, they can.
Perhaps they don’t want to receive unsolicited dick pics. If there was a filter for that, they could apply it.
If they wanted to automatically apply a watermark to an image, perhaps containing the recipient’s name/details (e.g. in the context of sharing nudes with someone, to attempt to mitigate the risk of non-consensual onwards sharing), to photographs that they send, they can still do that.
A tool which detects language or messaging patterns commonly used by paedophiles, and attempts (and it could only be an attempt, I imagine) to flag when that might be the case? End to end encryption does not stand in the way.
Since it happens on their device, the filtering is not impacted by end to end encryption, and since the user is choosing this filtering, it is consistent with the principle of end to end encryption too.
(What makes this different to the client-side scanning discussed above? Consent. The user is choosing to install this software. It’s one of the distinctions between security software and malware.)
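As a toy illustration of how user-chosen filtering can sit entirely on the recipient’s device (the keyword list and function names here are hypothetical, not taken from any real product):

```python
# A user-chosen filter running on the recipient's own device, applied
# after decryption - so end to end encryption is unaffected.
# The blocklist is illustrative only.
BLOCKED_TERMS = {"spamword", "scamlink"}

def user_filter(decrypted_message: str) -> bool:
    """Return True if the user's own filter would hide this message."""
    words = {w.strip(".,!?").lower() for w in decrypted_message.split()}
    return bool(words & BLOCKED_TERMS)

assert user_filter("Click this scamlink now!")
assert not user_filter("See you at lunch?")
```

The same structure works for any on-device check the user opts into - image classification, watermarking outgoing photos, or pattern flagging - because it all happens after the content has been decrypted for the user anyway.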
Let’s say a charity is worried about a paedophile attempting to make contact with children they do not already know, to engage in messaging with them.
If the platform has access to communications metadata - data about who contacted who, and when, but not the content of what they said or sent - they can still perform analysis based on that metadata.
For example, if they saw a new sign-up to a platform engage in high volume messaging of people with whom they had not exchanged messages before, and had no responses to most messages, that might be a sign that they are a paedophile.
Or perhaps a spammer. No automated detection is a perfect science. But end to end encryption does not inhibit this type of analysis.
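A crude sketch of what that metadata-only analysis might look like - the record format, thresholds, and names are entirely illustrative, and a real system would tune them and combine many more signals:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical metadata records: who messaged whom, and whether the
# recipient ever replied. No message content is needed.
@dataclass
class MetaRecord:
    sender: str
    recipient: str
    got_reply: bool

def flag_suspicious(records, min_contacts=50, max_reply_rate=0.1):
    """Flag accounts that message many new contacts and rarely get replies."""
    contacts = Counter()
    replies = Counter()
    for r in records:
        contacts[r.sender] += 1
        replies[r.sender] += r.got_reply
    return [
        sender for sender, n in contacts.items()
        if n >= min_contacts and replies[sender] / n <= max_reply_rate
    ]

# A high-volume sender with almost no replies gets flagged; a normal user doesn't.
records = [MetaRecord("bulk", f"user{i}", i < 2) for i in range(60)]
records += [MetaRecord("normal", "friend", True) for _ in range(5)]
assert flag_suspicious(records) == ["bulk"]
```

Whether the flagged account is a predator or a spammer is for a human to investigate - the point is only that this signal survives end to end encryption intact.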
What if the platform had information about the ages of users (as some insist is required)? That would seemingly make this type of analysis even easier, although a “papers, please” Internet brings with it its own problems.
Collecting evidence via equipment interference
Interception - accessing people’s communications in the course of transmission - is one of a range of tools possessed by law enforcement.
The United Kingdom has avowed (disclosed publicly the existence of) equipment interference. It’s on the face of the law.
How might it be used?
To covertly download data from a target device, by remotely installing software which enables material to be extracted.
To install key logging software on a device, making it possible to track every keystroke entered by users. The authority uses the key logger to track the keystrokes used when the target logs into a relevant website. This could reveal usernames and passwords, as well as the content of messages sent.
(These are not super-secret capabilities - they are set out in paragraph 3.3 of the UK government’s public Equipment Interference Code of Practice.)
A lack of interception may make the use of equipment interference harder, or riskier (i.e. a higher risk of detection or failure). I don’t think that that is in dispute, but it is worth recognising it anyway.
Whether a state can justify exposing everyone’s communications to additional risk in order to lessen the risk to equipment interference operations is a different conversation (and perhaps one for a future blogpost).