Policy debates and online services
The framework for the regulation of online services has never been static. For as long as I have been thinking about these topics - since the early 2000s - there has been a continuous debate about roles, responsibilities, and risks.
I get involved in many of those debates.
Sometimes publicly, sometimes not.
Sometimes I wade in, sometimes I pick my battles.
Sometimes all I can do is alternate between holding my head in my hands, and banging my head on the desk.
But, more often than not, I engage, and I engage with what I hope is the same objective as everyone else: to improve the online environment.
Purported solutions to insufficiently-defined problems
What is becoming increasingly apparent is that - perhaps now more than ever? - lots of people have suggestions for solutions, but without a clear idea of what they are trying to solve.
They throw out “big tech must be made to do [x]” pronouncements and claims that a change in the law to impose a new obligation on tech companies will fix decades-old systemic, societal problems, without a clear, documented problem statement against which their solution is to be assessed. Without sufficiently-defined parameters for what “good” looks like, or what pitfalls they need to avoid.
They proffer bills laden with duties and obligations, onerous enough to crush smaller organisations out of existence.
I bang my head on my desk when someone comes up with a novel, ground-breaking (yeah, right) idea about curtailing anonymous speech online, which is, in fact, as old as the hills and which anyone giving it more than a moment’s thought would see is both massively harmful and utterly ineffective. But I engage.
I bang my head on my desk when someone pretends that a change in law “will end anonymous abuse, because it will end abuse, full stop”, presumably in the same way that our statute book of criminal offences has ended crime, full stop, and seemingly overlooks the fact that “abuse” pre-dates the Internet and is not a problem to which there is a simple technological solution. But I engage.
In those circumstances, while I may share a common objective of improving the online environment, I may disagree quite significantly with what that might look like, or - perhaps especially - with their proposal for getting there.
In fact, I might be content with simply not making things worse through poorly-considered solutions to insufficiently defined problems.
Enter the Internet policy red team.
Red team, blue team
In the world of information security (and, I believe, military training exercises), a “red team” is a group of people tasked with performing an offensive role: attacking an organisation as if it were a malicious adversary.
In contrast, the “blue team” are the defenders, tasked with safeguarding an organisation.
The idea is that these two roles are complementary, giving an organisation a greater understanding of its weaknesses than relying on defensive security planning alone.
Importantly, both teams have the same objective: secure the best outcome for the organisation.
“Red teaming” Internet policy proposals: the underappreciated scrutineers of online regulatory discourse
Like the cybersecurity red team, the Internet policy red team delivers immense value by pointing out the holes and problems with what someone has identified as a potential solution, especially if the problem statement is vague or missing.
It might be pointing out that they appear to have forgotten about the interests of vulnerable, marginalised people in their rush to be seen to be doing something.
It might be pointing out that what they are proposing is technically impossible, or that the solution they seek is not a technical one but requires major changes to society, and that no amount of telling tech companies to make their people “nerd harder” is going to solve that.
It might be pointing out that their “solution” is so readily circumvented by anyone with a modicum of technical knowledge - knowledge which will spread around a playground as fast as, well, covid spreading around a playground - that the policy is a clear case of “something must be done, and this is something”, expending time and effort for marginal gain and potentially causing significant harm.
Each one of these, and the many other criticisms which can be levied at poorly thought-out policy proposals, has a value. They give an opportunity for improvement - increasing its effectiveness, reducing its harm, or simply making it workable at all - to the people holding the policy pen.
It’s not the role of a red team to help fix the problems they identify. Their job is adversarial, penetrating, even destructive. Their job is to identify the weaknesses, the failings, the bits which just won’t work, or which will have unintended consequences. To highlight the inadequate, the illogical, and the impossible.
They’re not being uncooperative or non-collaborative. They are cooperating and collaborating by giving their critical feedback. By lending the benefit of their experience and expertise.
In some cases risking financial insecurity to do so. (P.S. Hire Heather!).
If they were silent, self-censoring their critical thoughts because some will brand them paedophiles or haters of children, or will attempt to discredit them as the screeching voices of the minority, then they would be uncooperative and non-collaborative. And, frankly, who could blame them? But presumably the only people who would levy such accusations are those who don’t want their policy proposals scrutinised, or who are so convinced that they are right that they don’t want to hear where they’ve gone wrong, or merely could do better.
For everyone else, and for the betterment of everyone’s use and enjoyment of the Internet and online services, go Internet policy red teams!