Two important new papers on end-to-end encryption/security, child safety online, and client-side scanning

In this post, I introduce two lengthy papers which are likely to be of interest if you are interested in end-to-end encryption/security, child safety, and client-side scanning.

It really shouldn't need to be said, but anyway: I don't agree with everything in either of these papers, yet I think each makes a useful contribution.

Alec Muffett's "A Civil Society Glossary and Primer for End-to-End Encryption Policy in 2022"

Alec, who I regard as an expert in this area, has written a detailed primer on end-to-end encryption. It is well worth a read.

Be warned, it is long.

Disclaimer: I have submitted feedback to Alec on this (post-publication), so I am not wholly neutral.

Things I like about it

  • For a technical paper, it is accessible (aside from its sheer length). It does not baffle with maths. It's actually pretty darned interesting, even for (especially for?) someone (me!) who is pretty well versed in this stuff, and I suspect it is even more so for someone who is not, and who is unfamiliar with the history of public authorities seeking lawful access to encrypted communications.
  • Alec's exploration of whether we should really be talking about "end-to-end security" rather than "end-to-end encryption" is particularly valuable and noteworthy.
  • Alec's approach of taking a consistent theme - in this case, what he describes as "the field model" - and assessing potential challenges and solutions in the light of that model, works well.
    • But it does require one to buy in to "the field model" as being the goal (see below).
  • The cui laboras section, which distinguishes between software working for the benefit of the user/device owner and software which is not, is spot on. This is often missed in discussions about "client-side scanning".
  • The section on interoperability touches on an ongoing EU-level debate about whether major communications platforms must interoperate with others. I wrote about this years ago, before it became popular, for my LLM examining the regulation of over-the-top communications services, and I concluded then that it was just not necessary. My view has not changed. Alec's argument is made from a technical perspective, and I am inclined to agree with his conclusion. But whether you agree or not, this is worth a look.

Things I am less keen on

  • I am not sure I agree with Alec about metadata being out of scope because the "field model" would not protect it or because it is used to finance the service.
    • I am not sure I would stick so closely to the field model, as it appears to be fundamentally limited by what was available, in terms of protection, at the time.
  • I don't think that Alec's proposal for "legitimate surveillance" (i.e. lawful surveillance by public authorities) - physical access, or access via targeted equipment interference to target devices, having used a narrowly targeted search warrant to obtain that access - is as well developed as the remainder of the paper. It is consistent with Alec's overall theme of the field model, and the conclusion is not surprising, but this section was rather less persuasive (to me, at least) than much of his other writing.
  • It is not always easy to separate fact from Alec's opinion (although he tries). For a reader well-versed in the area, this is not a particular problem, but for someone coming fresh to the area, this is less ideal.

Dr Ian Levy and Crispin Robinson's paper "Thoughts on Child Safety on Commodity Platforms"

Dr Ian Levy (who I regard as an expert in this area) and his colleague at the NCSC, Crispin Robinson (who might be an expert; I don't know them), have produced and published a detailed paper on what they describe as child safety on commodity platforms. Again, it is well worth a read.

Be warned, it is long.

Disclaimer: I have submitted feedback to Ian on this (pre-publication) and I have had several discussions involving him about this topic over the past year or so, so I am not wholly neutral.

Things I like about it

  • It is a well-structured paper, starting with the investigative aspects of tackling CSAM, stepping through the characteristics of communications services which the authors think are relevant to a CSAM-related discussion, and current and future mitigations for CSAM, before digging into client-side scanning, AI, and machine learning, and then presenting the authors' view as to what further work is needed.
  • The taxonomy of CSAM (which the authors call "Offence Archetypes", in section 2), and the different circumstances which lead to its creation, is very valuable. I can see it leading to much-needed, nuanced debate about how different harms might be mitigated. (All CSAM is bad; some scenarios are worse than others.)
  • Sections 4 and 5, on current mitigations and what could be done in the future, are broadly useful in terms of assisting technical debates, albeit subject to two chunky caveats below.
  • As the authors note early on, the debates in this area have been (in public, at least) rather polarised. This paper (while clearly pushing a technology-based approach to intervention, and, perhaps, rather optimistic about the real-world ability to implement some of what is discussed) makes an attempt at balance and, to the extent that it is factual, will be a useful tool in that debate. But, in no small part because of those two limitations, it is not an answer in itself.

Things I am less keen on

  • I struggle on a couple of fronts in particular (aside from the quite hard-to-read formatting):
    • the authors explicitly recognise that CSA/CSAM is a societal problem, but the paper discusses only technical solutions. Technical solutions have a role to play, but technology alone is not - cannot be - the answer. So this paper is a contribution to a broader debate, but I fear it will be held up by some as "the answer" in isolation, without that vital broader context.
    • the authors say that the solutions they propose "may be of use in end-to-end encrypted scenarios", without noting that they are (IMHO, at least) incompatible with end-to-end encryption. In my view, they all rely on breaking end-to-end encryption, even if done on the client rather than in the network (see the sketch after this list). This is a stance shared by the materials surrounding DCMS's Safety Tech Challenge Fund, and it essentially amounts to redefining what "end-to-end encryption" means, to enable government to say that their solution does not break it. I'm still struggling with this approach.
  • There is no meaningful data protection / privacy / human rights analysis.
    • I appreciate that this is not the authors' forte, but it is a noteworthy omission, given that one of the criticisms levelled at technology companies is that they focus on what tech can do to the exclusion of thinking through its implications, in particular for fundamental rights. This paper does exactly that. What are the implications of the imposition of the type of technologies described here? Is that exercise to be left, again, to the Screeching Voices of the Minority?
    • Aside: I note that, in a change to their original plan, REPHRAIN, the group which DCMS is using to assess the Safety Tech Challenge Fund entries, is now going to include a "human rights assessment" in its review (see the updated evaluation criteria in this post). This expansion of scope is welcome, although it remains a real shame that there is no detailed privacy / data protection / legality analysis as part of its assessment - solutions which are not lawful are not solutions.
  • The technical solutions are (as the authors note) primarily theoretical, but what we need are real-world examples of this technology working accurately, reliably, and at scale, in practice.
    • The authors acknowledge this, when they say: "However, [the] practicality [of the techniques they go on to describe] in the context of an app running on a constrained device, integrated with a low-latency scaled messaging service, can only be proven through practical demonstration". These caveats are significant.
    • This is an area where the DCMS Safety Tech Challenge Fund might make a valuable contribution. But we really need this evidence before further discussion about the Online Safety Bill and other legislation which might be used to impose client-side scanning, since it is problematic to legislate in favour of the adoption of tech which is untested.
    • This does not negate the value of Ian and Crispin's paper - theoretical discussions are still useful - but one must be mindful of the authors' own caveats and disclaimers.
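To illustrate the point I make above about client-side scanning and end-to-end encryption, here is a minimal sketch, entirely my own and not from either paper: every name in it, the use of SHA-256 as a stand-in for a perceptual hash (real systems use perceptual hashing, such as PhotoDNA or NeuralHash, which match near-duplicates), and the placeholder "encryption" are illustrative assumptions. The point it shows is structural: the scan runs on the device, over the plaintext, before encryption, so the strength of the encryption itself is irrelevant to whether content can be disclosed to a third party.

```python
# Hypothetical sketch (mine, not from either paper): why client-side
# scanning sits outside the end-to-end encryption guarantee.

import hashlib

# Assumption: a hash blocklist supplied by a third party. SHA-256 is a
# stand-in used purely to keep this sketch self-contained; real systems
# use perceptual hashes, which also match near-duplicate images.
BLOCKLIST = {hashlib.sha256(b"known-bad-bytes").hexdigest()}

def report_to_authority(plaintext: bytes) -> None:
    # Out-of-band disclosure: the content leaves the E2EE boundary here.
    print("match reported: plaintext disclosed outside the E2EE channel")

def encrypt_for_recipient(plaintext: bytes) -> bytes:
    # Placeholder for a real E2EE scheme; NOT real encryption.
    return plaintext[::-1]

def send_message(plaintext: bytes) -> bytes:
    # The client-side scan runs on the device, over the plaintext,
    # BEFORE any encryption takes place.
    if hashlib.sha256(plaintext).hexdigest() in BLOCKLIST:
        report_to_authority(plaintext)
    # Encryption protects the message in transit, but the scan above
    # has already seen (and potentially disclosed) the content.
    return encrypt_for_recipient(plaintext)

if __name__ == "__main__":
    send_message(b"known-bad-bytes")
```

Whether one calls this "breaking" end-to-end encryption or "preserving" it is, in effect, the definitional dispute I describe above: the ciphertext is untouched, but the confidentiality guarantee offered to the user is no longer what it was.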

So there you go. Some light reading for the weekend, perhaps.

