Detecting child sex abuse imagery in end-to-end encrypted communications in a privacy-respectful manner


I was pleased to take part in a meeting co-hosted by DCMS and the Home Office today, as part of the government's "Safety Tech Challenge Fund".

The challenge question

The key bit of the challenge is:

to prototype and evaluate innovative ways in which sexually explicit images or videos of children can be detected and addressed within end-to-end encrypted environments, while ensuring user privacy is respected

Today’s discussion was a roundtable with a focus on the “privacy is respected” aspect of the challenge.

Who was there

There was a reasonably broad range of people:

The roundtable was held under the Chatham House rule. Although, strictly, this only prohibits attendees from attributing contributions to particular individuals, I've decided not to identify individuals or their organisations, for two reasons:

Key themes

After introductions and opening thoughts from the governmental representatives, we had about 50 minutes of discussion.

I’m not going to attempt to reproduce the discussion. For me, the key themes coming out of it were:

(I am confident that there are other things I missed, especially as my Teams client crashed at one point, and then I had audio problems. If any other participants have blogged about it, I’ll include links here.)

The thoughts I put forward

Thoughts I left with

  1. It was a useful session, with many thoughtful, eloquent contributions. But I was unsure what actionable output arose, although perhaps that's an unrealistic expectation for ~50 minutes of discussion. I hope that follow-up sessions could look at what "respectful of privacy" means in practice, and how those responsible for the challenge are going to assess this. Do we even have a common understanding of what "privacy" covers, or how to assess the degree of intrusion, even before we get into more typical assessments of necessity and proportionality?

  2. What changes in the law, if any, might be needed to ensure that any intrusion is Article 8 compliant, for example in respect of transparency? This is perhaps not strictly relevant if there is merely an expectation that private companies will adopt these technologies, but there was also discussion of legislative encouragement / imposition, which would be more traditional Article 8 territory.

  3. There was a lot of technical discussion. This is really important, because a solution which does not work in practice is not a viable solution. However, in examining privacy intrusion, it is not the only thing which needs discussing, and focus on Merkle trees, or homomorphic encryption, may need to be tempered with more traditional privacy considerations.

  4. I hope that there are follow-up sessions, and I hope I can continue to contribute.

  5. A late addition, which I forgot when I initially wrote this: the focus was on what tech companies could, or should, do, rather than what police could, or should, do. I think this needs some unpicking, as there's an implicit assertion that the prevention or detection of crime is something which private companies should be doing. Is there a school of thought which says that this should be left to the police / the state? Might one draw a distinction between stopping the sharing of content and the identification and reporting of users, for example?
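For readers unfamiliar with the Merkle trees mentioned in point 3, the sketch below may help ground that part of the technical discussion. It is purely illustrative and is not a description of any deployed or proposed detection system: it shows the basic primitive of committing to a set of values via a single root hash, then proving that one value is in the set without revealing the others. All names and the toy data are my own.

```python
# Minimal, illustrative Merkle tree: commit to a set of items with one
# root hash, then prove membership of a single item via sibling hashes.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # levels[0] holds the hashed leaves; each level pairs and hashes
    # nodes until a single root remains.
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2 == 1:          # duplicate last node on odd levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels, index):
    # Collect the sibling hash at each level, recording whether our
    # node sits on the left of its pair.
    proof = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        sibling = index ^ 1             # sibling index flips the low bit
        proof.append((level[sibling], index % 2 == 0))
        index //= 2
    return proof

def verify(root, leaf, proof):
    # Recompute the path from leaf to root; a match proves membership.
    node = h(leaf)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

items = [b"hash-1", b"hash-2", b"hash-3", b"hash-4", b"hash-5"]
levels = build_tree(items)
root = levels[-1][0]
proof = inclusion_proof(levels, 2)
print(verify(root, b"hash-3", proof))      # True: item is in the set
print(verify(root, b"not-listed", proof))  # False
```

The point of the structure, for the privacy discussion, is that the verifier learns only the root and one path: the proof is logarithmic in the size of the set, and the other committed items are never disclosed.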