I’ve heard the term “public safety by design” come up a few times recently, and I think we are going to hear it a lot more in 2022.
“Public safety by design”
For example, in Wired’s “The Wired World in 2022”, Dr Ian Levy - technical director of the UK’s National Cyber Security Centre, and someone for whom I have a great deal of time - says:
… we are now seeing a worrying trend of companies designing out public safety on the premise of designing in some form of privacy
In other words, he suggests that companies have a duty to engage in “public safety by design”, which they are failing to meet.
He reinforces his point by means of an analogy about schools and fire doors, which I thought was clever: it is both highly emotive (i.e. it triggers visceral reactions) and also irrelevant to online safety discussions since “people trapped in burning buildings” has no direct online equivalent. That’s not what “flame wars” are.
What is “public safety”? Who decides?
“Public safety by design” or “designing in public safety” has an attractive ring to it. It’s easy to say, and is likely to get people nodding along.
Of course everyone wants public safety.
But what does it mean?
Specifically, in the context of, say, a messaging service, or a social media site, what is “public safety”?
If we are going to have a meaningful debate about a principle of “public safety by design”, it will need to happen hand-in-hand with - or, ideally, be prefaced with - a consensus about what “public safety” means.
It is all very well to be nebulous and general when talking in principle, but this is of very limited value when it comes to practice.
What tangible, concrete steps should a start-up messaging service provider be taking? What code should they be writing (or not writing)? What probing points must they build into their architecture?
I don’t want to pick on Dr Levy, but I note a further sentence in his Wired article:
One argument against designing systems to allow for proper public safety is that…
Not “public safety”. Proper public safety.
If we are going to go into a debate with a barely-concealed “no true Scotsman” fallacy lurking under the surface, we are not going to get far. “Yes, but that’s not proper public safety.”
To my mind, the correct vehicle for imposing an obligation of this nature is clear, specific, legal requirements. Not woolly or nebulous “duties” to have regard to things or to carry out assessments, but clear and specific obligations.
(While I have concerns about some parts of the Telecommunications (Security) Act 2021 and the code of practice which sits under it, I do admire the joint effort by DCMS, the NCSC, and industry representatives to be clear and specific about what is required.)
“Public safety by design” and the Investigatory Powers Act 2016
I spend a fair amount of my professional time in the world of the Investigatory Powers Act 2016 and related legislation.
Over the course of many years now (I first started working on these issues not long after RIPA 2000 came in, and then (mostly) went out again), I’ve engaged with numerous public authorities about the threats they are trying to tackle, and the challenges that they face.
Although I suspect most of the debate around “public safety by design” will occur outside the confines of the Investigatory Powers Act 2016, arguments around “public safety by design” already crop up in connection with the IPA, and some of my thoughts in respect of that may bleed over into those broader debates.
The key bits for me, in the context of this post, are:
a) the Investigatory Powers Act 2016, and other surveillance-related powers, are exceptions to fundamental rights.
b) the Investigatory Powers Act 2016 does not, and cannot, impose a general duty to maximise surveillance potential, or to maintain outcomes which existed in the past.
The Investigatory Powers Act 2016, and other surveillance-related powers, are exceptions to fundamental rights.
Tapping someone’s phone, or accessing a record of their Internet traffic, is an interference with their right to respect for their communications.
Suppressing someone’s ability to communicate (e.g. through a telecommunications restriction order, which sits under the Serious Crime Act rather than the IPA) interferes with the right to freedom of expression.
Prohibiting / inhibiting the use of a specific form of encryption? More arguable, in my opinion, but I can see good arguments that doing so engages both the fundamental right to privacy / respect for private life and correspondence, and the right to freedom of expression.
These interferences may be capable of justification (if they are in accordance with the law, are necessary in a democratic society, and are for one of a number of stated purposes), but they remain interferences.
Authorised, lawful interferences are exceptions, not the norm. And, as exceptions, they must be construed narrowly.
This is not something specific or unique to the Investigatory Powers Act 2016. Wherever an obligation of “public safety by design” sits in law, if it amounts to an interference with fundamental rights, the same considerations and safeguards are not just important but are legally essential.
Where it gets trickier / more interesting is that failing to ensure “public safety by design” may also be an interference with fundamental rights. That depends, to a large degree, on what is meant by “public safety”. As I have commented before, one cannot simply override the fundamental rights of privacy and freedom of expression by pointing to other interferences. Any solution needs to respect all fundamental rights.
The Investigatory Powers Act 2016 does not, and cannot, impose a duty of “public safety by design”.
The Investigatory Powers Act 2016 is a chunky piece of legislation, covering a wide range of obligations, but it is far from limitless. Each obligation is clearly delineated and bounded, providing for limited powers to authorise (in the case of public authorities) or compel (in the case of telecommunications operators) certain, specific things.
Critically, there is no power within the Investigatory Powers Act 2016 for a Secretary of State, or any public authority, to compel a telecommunications operator to engage in “public safety by design”, or to design their networks and services in a manner which maximises, generally, surveillance potential.
A data retention notice, for example, cannot be used to compel an operator to redesign their networks and services to ensure that certain data are created for the purpose of retention. The limited powers relating to generation of communications data do not stretch that far.
A technical capability notice - a vehicle for ensuring that a telecommunications operator has the means of satisfying a warrant served on it - can have design impacts, but they are limited to imposing specific obligations, related to the warrantry and notices available to public authorities under the Investigatory Powers Act 2016.
Can a technical capability notice be used to compel a telecommunications operator to redesign its service, or fundamentally change what it is doing, to give effect to a warrant or notice? I have my views, but there’s no judicial determination on this point. Yet.
National security notices can be used to fill some of the gaps left by the other parts of the Investigatory Powers Act 2016, but their scope is limited: they are national security notices, not general “public safety” notices.
“Public safety by design” as an ethical or moral obligation?
Aha, Neil, but you are coming at this from the perspective of a lawyer. What about ethical and moral obligations?
I am not an expert in ethics, and one of the reasons for joining the Independent Digital Ethics Panel for Policing (may it rest in peace) was precisely to learn more about ethics in the kind of contexts I am discussing here.
That said, not everything which is lawful is “right”, nor is everything which is “right” also lawful. Immoral laws are depressingly easy to find.
One way of short-circuiting challenges in squaring a policy proposal with interferences with fundamental rights might be to say that “public safety by design” is not a legal obligation, but is just a moral or ethical thing.
In other words, the state is not doing anything to interfere with fundamental rights, it merely has expectations as to outcome.
Law is, of course, not the only vehicle which drives behaviour. Perception also plays a large role, whether in the sense of “brand damage” or “attracting customers” or “what my friends and family (or even strangers) think”.
Is it plausible that we will see increasing use of the phrase “public safety by design” as an attempt to influence customer / user behaviour, and thus market-led(ish) solutions, rather than a legislative solution? Perhaps, but probably not.