End to end encryption, and services which let you meet and message people you do not already know

In the context of prohibitions or limits on the deployment of end to end encryption, I’ve heard the argument that one should distinguish between two categories of service: those which enable contact between “known” participants, and those which let you meet and message people you do not already know.

The argument posits that this latter category of services poses a higher risk from a child safety point of view, since strangers can initiate contact with children without needing any prior connection or credential.

What’s the difference?

I’m not entirely sure how the distinction between the two categories of service is drawn (and I can’t find a good online example of the argument being deployed, to use as a reference source).

Services which enable contact between “known” participants

For example, if A discovers B’s email address or phone number, they can communicate with them via numerous different platforms, whether or not they have a pre-existing relationship. A and B do not know each other, but A must have some kind of credential of B’s to enable them to make the contact.

I think that services like WhatsApp, Signal, and email fall within this first group.

Services which let you meet and message people you do not already “know”

I believe the second category is intended to capture services where no such credential is needed, where it is enough that both parties are using the same platform. This facet is sometimes known as the “discovery” element of a service.
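
Purely to make the two models concrete, here is a minimal, hypothetical sketch in Python. The class and method names are invented for illustration and do not correspond to any real platform’s API: in the first model, sending a message requires already holding an identifier for the recipient; in the second, the platform itself supplies a way of finding people you do not already know.

```python
from dataclasses import dataclass, field


@dataclass
class User:
    name: str
    profile: str = ""
    inbox: list = field(default_factory=list)


class CredentialBasedService:
    """First category: A can only message B if A already holds one of B's
    identifiers (a phone number, an email address, and so on)."""

    def __init__(self):
        self._users_by_identifier = {}

    def register(self, identifier: str, user: User) -> None:
        self._users_by_identifier[identifier] = user

    def message(self, sender: User, recipient_identifier: str, text: str) -> None:
        # Without an identifier for B, A has no way in: the platform
        # offers no directory to browse or search.
        recipient = self._users_by_identifier[recipient_identifier]
        recipient.inbox.append((sender.name, text))


class DiscoveryBasedService(CredentialBasedService):
    """Second category: the platform itself lets A find B, so no prior
    credential is needed - being on the same service is enough."""

    def discover(self, keyword: str) -> list[User]:
        # The "discovery" element: search profiles, interests, groups, etc.
        return [u for u in self._users_by_identifier.values()
                if keyword.lower() in u.profile.lower()]
```

On the second kind of service, a stranger can go from discover(...) to message(...) without ever being given anything by, or about, the other person in advance; on the first, the message call cannot even be made without one of B’s identifiers.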

The common example is social networking sites with private / direct messaging.

However, applying the logic of “two people, unknown to each other, being able to communicate by virtue of being in the same place”, I suspect it also encompasses chat systems such as IRC, and indeed any other sites or services where users can gather and one user can message another.

In other words, potentially a pretty expansive list of services, and not just the “big social media sites” which grab the headlines.

So I guess my first point is that, even if a debate does talk about prohibitions on end to end encryption being “restricted” or “limited” to services which enable a certain type of interaction, that could still be a massive number of sites and services.

Has that come through in the debate so far? Have proponents of restricting end to end encryption made the sheer scale of the impact of their proposals transparent?

Is it a meaningful distinction?

My gut reaction is that there is merit to discussing the distinctions here, from a threat modelling point of view.

We need nuance, and detail, not broad-sweeping statements.

At the same time, one should not - must not - start with the notion that users of these services are any less deserving of privacy, or of the sanctity of their correspondence, than anyone else.

Everyone has the right to privacy and the right to freedom of expression, not just those who use a particular category of messaging application.

I’ve written before about the way in which some arguments in favour of prohibiting or limiting end to end encryption seem to disapply the right to privacy when it suits them, and the dangers of that. Privacy is not an inconvenience, which can be tossed aside.

But nor is online safety and, personally, I can see merit in discussing whether the risks posed by some types of service justify a different approach. In other words, whether there is a meaningful distinction, and what the impact of that might be.

A discussion, not duelling monologues

What does that discussion look like? The discussion around this issue, and related issues, is often emotive rather than factual. It plays on people’s fears, and demonises the other side. All too often it is duelling monologues: two groups talking at each other, trying to land points, and not listening.

If there is to be a discussion, I would prefer it to be factual. To bring in numbers, and detail.

If there is an argument that the risk is higher in some contexts, what does that mean in practice? What are those risks, how likely are they, and what is their impact? What mitigations are available?

What are the risks if end to end encryption is prohibited?

Is there a difference in risk between IRC and a social media site? What is it, in factual, statistical terms?

If end to end encryption were not permitted, what would the consequence be in practice?

Would there be more reports of inappropriate contact?

Who fields those reports? Who acts on them? Will there be more prosecutions - if not, why not? Is there sufficient resource available for policing, and for prosecuting? And so on.

Ironically, what is needed is consideration of the end to end effects of a measure.

Simply making surveillance easier, or cheaper, or more pervasive, does not necessarily mean better outcomes. There’s way more to it than that.

Without this type of detail, it’s not possible to consider properly the fundamental tests of necessity and proportionality.

Is prohibiting / limiting end to end encryption for these services the end goal, or the first step?

Frankly, I’m worried about the “slippery slope”.

If it becomes acceptable, or accepted, to mandate (whether explicitly or otherwise) the prohibition of end to end encryption for some services, I am fearful that it will be used as a precedent in other scenarios too.

Is a prohibition of end to end encryption on platforms which offer “user discovery” the end goal, or just the first step?

Even if it is the end goal today, what stops it being just the first step tomorrow?

I’m not sure that there’s any way of guarding against that risk, short of not doing it in the first place.

Should that stop an informed, nuanced debate? I don’t think so. But it might be a hurdle which is very tricky to overcome.