An independent assessment of the UK's Safety Tech Challenge Fund, without assessing legal or data protection compliance

Regular readers may recall the UK government’s “Safety Tech Challenge Fund”.

Background to the Safety Tech Challenge Fund

(You can ignore this bit if you are familiar with the Safety Tech Challenge Fund.)

The stated goal of the Safety Tech Challenge Fund is to:

drive the development of innovative technologies that help keep children safe in end-to-end encrypted environments, whilst upholding user privacy.

It is doing this by offering an initial (I believe) £85,000 to five organisations, to help them develop services.

There’s plenty of money to be made from safetytech, and getting the taxpayer to partly fund your business is a useful head start. (Yes, I’m surprised that there’s no requirement to make the results of the fund available under a Free / open source licence.)

But anyway.

I’ve been involved in a number of workshop sessions (including this one, and one I’ve yet to blog about because I keep getting distracted by other things to blog about), and a “Supplier Showcase” which was (IMHO) poorly handled.

There’s another “showcase” event tomorrow, which (I understand) will be under the Chatham House rule - a very welcome move - but sadly I’d need more notice than was given to be able to attend. Oh well.

Assessing the Safety Tech Challenge Fund entries … but not for legality

One of my recommendations to DCMS was the need for an independent, external, authoritative assessment of the outcomes of the Safety Tech Challenge Fund.

A robust, rigorous assessment as to whether the entries deliver on the brief, in a manner consistent with fundamental rights and in accordance with the law.

The REPHRAIN project

When I heard that DCMS had appointed the REPHRAIN project to do this, I was quite excited.

I don’t know anything about the REPHRAIN project - it was not something I’d heard of before its involvement in the Safety Tech Challenge Fund - but I was pleased to hear that an independent group was going to be doing this work.

The REPHRAIN project said last week:

We are pleased to announce the release of the Scoping the evaluation of CSAM prevention and detection tools in the context of End-to-End encryption environment document for public consultation (document can be found here).

They have also asked for feedback (well, “constructive comments”):

The evaluation criteria document will be open for review and community consultation for a 2 week period until Friday 08 April 2022.

A CSAM focus

The proposal has a strong - perhaps even sole - focus on CSAM (child sexual abuse material).

The focus of the Safety Tech Challenge Fund, though, is much broader, and one of the entrants is promising more, so the limited scope is odd.

For example:

SafeToNet’s technology can prevent the filming of nudity, violence, pornography and child sexual abuse material in real-time, as it is being produced.

But all the example questions are focussed on CSAM.

Where’s the assessment of whether a tool can correctly identify pornography, for example?

Can it differentiate a human body from the representation of a human body (such as a well-sculpted piece of art or a garden statue)?

The implications for fundamental rights if it gets these things wrong - in particular, over-blocking and unnecessary reporting / intervention - are significant, IMHO, and to limit the evaluation purely to CSAM misses these important nuances.
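
To illustrate the sort of measurement I have in mind - and this is purely my own sketch, not anything drawn from the evaluation criteria or from any fund entry - an assessment of over-blocking could report, per benign category (artworks, garden statues, and so on), how often a tool wrongly flags lawful content. Everything below (the categories, the classifier, the sample data) is hypothetical:

```python
# Illustrative only: a minimal sketch of an over-blocking measurement.
# The categories, labels, and classifier are hypothetical, not taken from
# any Safety Tech Challenge Fund entry or from the REPHRAIN criteria.

from collections import defaultdict

def false_positive_rates(samples, classify):
    """For each category of benign content (art, statues, medical imagery,
    etc.), report the proportion wrongly flagged by the classifier."""
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for category, content, is_actually_harmful in samples:
        totals[category] += 1
        if classify(content) and not is_actually_harmful:
            flagged[category] += 1
    return {category: flagged[category] / totals[category] for category in totals}

# Hypothetical usage: run a labelled test set of lawful content
# (sculpture photographs, garden statues, artworks) against some
# classifier under evaluation, and report per-category error rates.
# rates = false_positive_rates(test_samples, my_classifier)
# print(rates)  # e.g. {"sculpture": ..., "garden_statue": ..., ...}
```
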

A surprising hole in the evaluation: legality is not under scrutiny

But then I spotted what is, IMHO, an even more surprising hole:

The evaluation does not include … an assessment of the tools for their compliance with legal frameworks on interception of communication, specific AI authorative rules regarding jurisdiction, etc.

That’s right, folks. It is a technical assessment. This may still have a value, but it stops far short of what is needed, IMHO.

Although the blurb specifically calls out “interception of communication”, as I read through the evaluation criteria, it struck me that the gap is broader than that: there is no assessment of the tools for their compliance with any legal framework.

If it doesn’t comply with the law and fundamental rights, it’s not a viable solution

Perhaps - hammer and nail analogy time - if you're a lawyer, then legal issues seem the most prominent.

That said, I struggle to see how there can be a robust, independent review of the Safety Tech Challenge Fund participants without an assessment of whether the entries comply with the law.

It’s all well and good asking questions like “Do solutions have a data diligence process?” (whatever that means), but the omission of “does the proposed solution comply with the law in the United Kingdom?” is a major error in scope, IMHO.

I’m sure that “Is there a trade-off between performance and power consumption of the proposed methods?” is a useful thing to know, but where’s the fundamental rights analysis?

There is one section on “compliance”, and what does it cover?

Did the developing organisations map out common AI requirements that can be assessed as needed (e.g. regarding jurisdiction)?

If it is going to be as limited as “map out common … requirements”, why is it limited to “AI” requirements? Surely - surely - the right question is “map out legal requirements that can be assessed as needed”?

Of course, this does not go nearly far enough. It’s not enough to know that there are legal requirements - you need to bake compliance with them in from the beginning.

Otherwise, you end up trying to retrofit the law to your product, not vice versa, and then, if you find that things don’t fit properly, you probably end up lobbying the government to change the law so that it fits your product, rather than ensuring your product fits the law.

Similarly, there is a section on “Human-centred” requirements, which references “human and children’s rights”, but the example questions indicate a very narrow focus, not a rigorous, detailed examination:

How do the proposed tools avoid re-victimisation of victims in both existing CSAM databases used by the developed systems and newly detected CSAM? Who are the users of the system and how have they been involved in its design? Are CSAM reporting mechanisms (1) included, (2) to whom, (3) likely to be effective? How effective will a system be in preventing CSAM?

Again, the focus on CSAM dominates here. What about every other fundamental right? The right to receive and impart information? The right to privacy? If the example questions are reflective of the focus, it is not obvious how the evaluation will cover a broader fundamental rights assessment.

There’s no robust data protection analysis

Perhaps tied in with the point above, the scope lacks a robust assessment from a data protection point of view, even though, presumably, all the solutions process personal data.

There’s reference to privacy, but not to data protection.

There’s not even the bare bones: the kind of absolute minimum you’d expect of this kind of technology.

I know that the Information Commissioner’s Office has been involved in this project, and I’ve asked them for information about what that has looked like.

What was the project brief from DCMS?

DCMS has not - as of writing this, at least - published the brief which it gave to the REPHRAIN project.

DCMS has not - as of writing this, at least - explained the process for selecting the REPHRAIN project. There might have been some kind of public tender but, if there was, I have missed it. (Feel free to correct me if you can cite it - I’ll happily link to it.)

It’s not the REPHRAIN project’s fault if they are delivering to the brief given to them by DCMS.

But these questions go to the heart of whether it is worth me submitting feedback or not. If the scope of the project is already defined, and they are just consulting on the evaluation criteria, then saying “but what about this very thing that you’ve said you’re not covering” is unlikely to be worthwhile.

I would like to see more information on whether the REPHRAIN project was free to set evaluation criteria independently, without a steer from DCMS.

To steer further comments (in particular, whether it is worth submitting feedback), I would also like to understand from REPHRAIN whether the omission of robust legality / fundamental rights / data protection assessments was intentional.

Update 2022-04-06: I’ve submitted this as feedback, via email. Let’s see what, if anything, happens.