Detecting child sex abuse imagery in end-to-end encrypted communications in a privacy-respectful manner
I was pleased to take part in a meeting co-hosted by DCMS and the Home Office today, as part of the government’s “Safety Tech challenge fund”.
The problem question
The key bit of the challenge is:
to prototype and evaluate innovative ways in which sexually explicit images or videos of children can be detected and addressed within end-to-end encrypted environments, while ensuring user privacy is respected
Today’s discussion was a roundtable with a focus on the “privacy is respected” aspect of the challenge.
Who was there
There was a reasonably broad range of people:
- government representatives, from the Home Office, DCMS, the NCSC, GCHQ, and the ICO
- representatives of UK and USA (although I guess some might say they are global) civil / digital rights organisations
- academics, from a range of UK and USA institutions, and from different disciplines (computer science, and law, I think)
- me, an Internet / telecoms / tech lawyer, who has spent a lot of time on online data protection and privacy issues in practice
The roundtable was held under the Chatham House rule. Although, strictly, this only prohibits attendees from attributing contributions to particular individuals, I’ve decided not to identify individuals or their organisations, for two reasons:
- I’m not sure who was there as a representative of an organisation, and who was just there for their own, individual, expertise
- it’s not up to me to say who was there anyway! Let others blog if they wish…
Key themes
After introductions and opening thoughts from the governmental representatives, we had about 50 minutes of discussion.
I’m not going to attempt to reproduce the discussion. For me, the key themes coming out of it were:
- The challenge has, by design, a relatively narrow objective, in focussing on known abuse imagery. That’s not to say that this is the only issue, or necessarily the most pressing, but it is the one picked for this challenge.
- This is important, as what might be a proportionate interference with privacy in one context may not be proportionate in another.
- Context - which might permit, or demand, greater metadata analysis - may be less relevant to tackling known images than to tackling (for example) grooming. Not all harms will demand the same degree of intrusion to be tackled successfully.
- The demonisation of encryption is unhelpful and inappropriate. Encryption is essential in preventing online harms.
- How might technology differentiate between willing underage sexting (between children) - which may, depending on jurisdiction, fall within legal frameworks for CSAM - and abuse imagery? And what might the impact be of prohibiting willing underage sexting on common platforms - might it push children onto overall less safe platforms, exposing them to greater harm?
- Proposals like Apple’s much-criticised on-device scanning are very much in scope of the challenge.
- Some of the government representatives appear to recognise the concerns around “function creep”, and the possibility of someone, or some government, demanding that a company re-use a system designed for identifying or blocking CSAM content for other purposes.
- There was a fair amount of discussion about how this might be avoided.
- We did not get into the expectation of privacy in one’s device, and the privacy impact of adopting (or requiring) surveillance technology in a user’s private device. This would, IMHO, need considerable thought.
- There was discussion about a “victim focussed” approach.
- This sounds good, but I was not sure what it means in practice. If nothing else, it would require identification of the victims / categories of victim (is it just those who are being abused? Or is it broader?).
(I am confident that there are other things I missed, especially as my Teams client crashed at one point, and then I had audio problems. If any other participants have blogged about it, I’ll include links here.)
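(As technical context for the “known abuse imagery” framing: detection of known images is typically done by comparing an image against a list of hashes of previously-identified material, supplied by a body such as the IWF or NCMEC. Below is a minimal, purely illustrative sketch of that idea; the hash value, function name, and use of an exact SHA-256 digest are all my assumptions for the example, not a description of any real system.)

```python
import hashlib

# Hypothetical list of hashes of known abuse images, of the kind
# distributed to providers by bodies such as the IWF or NCMEC.
# The entry below is an illustrative placeholder, not a real value.
KNOWN_HASHES = {
    "0" * 64,  # placeholder SHA-256 digest
}

def is_known_image(image_bytes: bytes) -> bool:
    """Return True if this image's digest appears in the known-image list.

    Deployed systems use perceptual hashes (e.g. PhotoDNA) rather than
    exact cryptographic digests, so that resized or re-encoded copies
    still match; an exact SHA-256 comparison is defeated by changing
    a single byte of the file.
    """
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```

The difficulty the challenge grapples with is where this check runs: in an end-to-end encrypted service the server never sees the plaintext image, which is why proposals such as Apple’s involve scanning on the user’s own device.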
The thoughts I put forward
- There appears to be an assumption that this can be achieved in a privacy-preserving manner, or a manner consistent with fundamental rights.
- That may not be possible. Perhaps that’s an assumption which needs to be challenged.
- The outcome may be just a “least worst” solution, and that may not be consistent with the fundamental right of privacy.
- We talked a lot - exclusively? - about platforms and companies. The discussion appeared to assume centralised, or closed, protocols. If that’s the intended scope, then fine.
- Otherwise, what about those running a Matrix/Element server, or using over-the-top encryption rather than platform-designated encryption, or open source messaging systems, or peer-to-peer systems?
- Perhaps it comes down to identifying who, or what harm, we are trying to stop: the low-hanging fruit - people using common, centralised, likely-to-be-willing-to-assist platforms - or the most serious, sophisticated, closed CSAM sharing / generation groups?
- (A government representative pointed out afterwards that the scope of the challenge is supposed to encompass the whole spectrum of providers, not just centralised services, but the conversation appeared to be focussed on major online social networks / centralised services with significant consumer adoption.)
- Participants in the challenge should build in a demonstration that they can comply with the legal framework as it is today.
- They should show how they have baked at least the bare minimum of legal compliance - with data protection, privacy, and other relevant laws - into their design. Or are they designing without even thinking about this?
- Or, worse, are they going into it with an expectation of exceptionalism, expecting the government to create amendments / deviations to enable them to operate lawfully?
- (I have written about some of the key legal issues for safety tech.)
- Those assessing the challenge solutions should prepare and publish a methodology for assessing the proportionality of the intrusion versus the harm which is prevented or detected.
- (This needs a lot more work, as we did not get into the practicalities of this kind of assessment.)
- To [another contributor]’s point about focusing on the victims, it would be sensible to determine if the focus is on protecting the victims, or identifying and prosecuting criminal offences, or both (or, indeed, something else).
- e.g. suppressing the distribution of CSAM v. identifying and prosecuting those who distributed it.
- Both of these things could fall under the banner of “victim focus”, but could demand / justify very different approaches and technical solutions.
- Work would also be needed on scoping who was a “victim”.
Thoughts I left with
- It was a useful session, with many thoughtful, eloquent contributions. But I was unsure what actionable output arose, although perhaps that’s an unrealistic expectation for ~50 minutes of discussion. I hope that follow-up sessions could look at what “respectful of privacy” means in practice, and how those responsible for the challenge are going to assess this. Do we even have a common understanding of what “privacy” covers, or how to assess the degree of intrusion, even before we get into more typical assessments of necessity and proportionality?
- What, if any, changes in the law might be needed to ensure that any intrusion is Article 8 compliant - for example, in respect of transparency? This is perhaps not strictly relevant if there is merely an expectation that private companies will adopt these technologies, but there was also discussion of legislative encouragement / imposition, which would be more traditional Article 8 territory.
- There was a lot of technical discussion. This is really important, because a solution which does not work in practice is not a viable solution. However, in examining privacy intrusion, it is not the only thing which needs discussing, and focus on Merkle trees, or homomorphic encryption, may need to be tempered with more traditional privacy considerations.
- I hope that there are follow-up sessions, and I hope I can continue to contribute.
- A late addition, which I forgot when I initially wrote this: the focus was on what tech companies could, or should, do, rather than what police could, or should, do. I think this needs some unpicking, as there’s an implicit assertion that the prevention or detection of crime is something which private companies should be doing. Is there a school of thought which says that this should be left to the police / the state? Might one draw a distinction between stopping the sharing of content v. the identification and reporting of users, for example?