Online safety, doing good, and inconvenient fundamental rights

One of the joys of being an Internet lawyer is the opportunity to get involved in the shaping of legislation which impacts the everyday lives of so many people.

This is not a new experience for me and, when I am paid to do it, it’s part of my job that I greatly enjoy.

But, much of the time, it’s not part of my job. Such is the case with the current draft Online Safety Bill. No-one is paying me to spend the time I spend commenting on the numerous issues, often making myself unpopular in doing so.

I engage because I want a better Internet for everyone

I get involved in policy debates about things I care about precisely because I care about them.

Because I want an Internet which lets people - children, adults, everyone - connect and collaborate.

Because I want children, and vulnerable and marginalised people, to be safe online, and to enjoy the kind of experience I typically get online.

Such is the tragedy of the Internet commons: organised groups of experienced lobbyists will ensure their voices are heard, while the voices of the many millions of normal Internet users, each of whom will be affected by what happens but who have no idea what is going on in Westminster, let alone engage in policy debates, will not be heard.

Inconvenient fundamental rights

It’s so critically important because fundamental rights are at stake.

The right to receive and impart information.

The right to privacy.

It seems - to me, at least - that some see abrogating these rights, or derogating from them, as acceptable collateral damage if they get in the way of achieving the good that their plan would, or might, deliver.

And their plans would, or might, deliver good things. It’s important to recognise that. I don’t think anyone I’ve interacted with is engaging in bad faith, or pushing a plan which they don’t believe has merit. They, like me, engage because they care.

I’d like to think we’re all pushing for the same thing, although, at times, I struggle to remember that.

I struggle the most when those fundamental rights are positioned as expendable, not fundamental. As rights which are no longer relevant - “analogue thinking” in a digital age - rather than recognising them for what they are: fundamental, especially in a digital age.

A solution which is inconsistent with fundamental rights is not a solution

My view is that, if you’ve only got a solution which is inconsistent with fundamental rights, you don’t have a viable solution.

You can’t simply treat fundamental rights as a political inconvenience or red tape, to be brushed aside to ensure that your plan can deliver.

What makes this a difficult pill for some to swallow is that measures which can do good things - and there’s no denying that some of the proposals put forward could, or should, do good things - might, or will, also do bad things, things which are inconsistent with fundamental rights.

I appreciate that, if you think you have a plan which will genuinely do good, it can be difficult to critically examine the harm it will cause.

And yet you must.

You can’t simply assert that, because your plan does good, anything which stops it from happening is an unworthy blocker, which must be struck down.

That’s not how fundamental rights work.

You need a plan which is consistent with all fundamental rights, not just some.

You can’t assert that “the ends justify the means”, and trample onwards.

And you certainly can’t decide that the best way of mitigating infringement of fundamental rights is to rewrite those fundamental rights to suit your needs, to give you a technical argument that your measure no longer infringes them.

That’s like saying you’ll eradicate theft by removing the law against theft. No law, no crime, no harm.

Except, of course, the actual harm - the infringement of privacy, or the curtailment of the freedom of expression - remains, even if your solution would, or might, also do good.

This works both ways

I don’t think I’ve said this before in writing, and it’s time for me to remedy that: while we cannot - must not - trample over the rights of privacy and of freedom of expression, I recognise that, today, not everyone has a good experience online.

The web is not a safe place for everyone.

The web is not a fun place for everyone.

While I don’t think that, in general, the benchmark should be to make the web, as a whole, a safe playground, any more than a motorway should be made a safe playground, I do agree that sites and services which are held out as safe for children must be so.

I also want online environments where everyone can engage with the same freedoms, and the same outcomes, as I have when I go online, because frankly my online experience is, on the whole, nothing short of amazing. My life would not be what it is without it, and it should not be a quirk of my privilege that I benefit from that.

I also recognise that technology is not neutral. That some technologies can foster or enable harm, even if they do not cause it directly.

I’m not going to stop thinking that encryption is a good thing, but I am also not blind to the ways in which it can be used by malicious actors, just as a hammer can be a handy tool or a murderous weapon.

I think that anonymity online is vital for many vulnerable, marginalised groups. I also recognise that some take advantage of it for other reasons - although many post vitriol and abuse under their own names anyway.

I think Tor is valuable, and an essential tool for some to be able to engage safely online, because of the threats they face. I’ve also helped hunt people using Tor for some of the most heinous crimes I can imagine.

I’m pro-porn and pro-(sex)work, and vehemently against abuse imagery, and abuse, of all types - they’re completely different things. I’m sex positive, and I don’t think porn is a substitute for a proper sex education. I agree we need to think about how children can access porn, and also how to deal with an almost ubiquitous male gaze, and laws proscribing content (and, in some cases, real world consensual behaviours) which almost universally affect minorities.

Much of my working life is spent trying to prevent or detect people who do bad things online, or who leave footprints on communications systems which aid in tracking them down. Terrorists. Paedophiles. Abusers. Kidnappers. Scammers. Drug dealers. Enslavers. Murderers. I’m far from shying away from it.

Is there a way forwards?

Honestly, I don’t know.

The debate, especially in public, seems, at times, so polarised.

We’re positioned as geeks. As child-haters. As out-of-touch technonerds who care only about our Internet connections and our precious web.

It’s hard to engage against that type of backdrop.

It’s hard to stand up again and again and again and suggest that a plan needs a rethink.

Twitter is not the best place for nuanced conversations. It often seems like it’s a place where talking is more important than listening, where point-scoring trumps reaching consensus, and where snark and sarcasm run strong.

So I just don’t know. I’m tired. But I’m also mindful that, once we start chipping away at fundamental rights, once we start re-writing them for convenience, or to allow a particular solution through, we’ve lost. We’re not getting them back.

Worse, we’re weakening the rights of the very people we are trying to protect. Whose Internet experience we’re trying to safeguard and enhance.

And that seems like a cause worthy of attention.