New plans to protect people from anonymous trolls online: new UK government proposals

(This started life as a Twitter thread but, since I delete my tweets automatically after a few days, here’s a more permanent version of my thoughts.)

Continuing its obsession with online anonymity, the UK government is proposing to force social media sites to treat unverified users as second-class citizens.

1) Consistency with fundamental rights (or lack thereof)

I am sceptical that this proposal is consistent with the fundamental right “to receive and impart information and ideas without interference by public authority”, as enshrined in Article 10, Schedule 1, Human Rights Act 1998:

Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.

Nowhere does it say that one’s right to impart information applies only if one has verified one’s identity to a government-mandated standard.

The measure would treat unverified users less favourably.

While it would be lawful for a platform to choose to implement such an approach, compelling platforms to implement these measures seems to me to be of questionable legality.

Article 10 is a qualified, not absolute, right. The government can, lawfully, derogate from it, in certain, limited, conditions. I doubt that what is being proposed here satisfies those conditions.

2) It may not be “mandatory”, but if you want to maximise your visibility / reach, you will need to hand over ID or prove your identity

Although the proposals stop short of requiring all users to hand over more personal details to social media sites, the outcome is that anyone who is unwilling, or unable, to verify themselves will become a second-class user.

It appears that sites will be encouraged, or required, to let users block unverified people en masse.

Users who are unwilling, or unable, to verify their identity appear to be lumped in with “anonymous trolls”, and subject to a one-click invisibility button.

Despite the government’s reference to online abuse and football, 99% of accounts Twitter removed for Euro 2020 abuse were not anonymous, according to Twitter.

Twitter’s own data says that anonymity is not the issue here:

our data suggests that ID verification would have been unlikely to prevent the abuse from happening - as the accounts we suspended themselves were not anonymous. Of the permanently suspended accounts from the Tournament, 99% of account owners were identifiable.

Those who are willing to spread bile or misinformation, or to harass, under their own names are unlikely to be affected, as the additional step of showing ID is unlikely to be a barrier to them.

3) The proposal appears to permit pseudonymous posting, as long as you have identified yourself to the provider

It seems that people will still be permitted to have a public-facing pseudonym.

In other words, they must show their identity documents to the provider, or otherwise prove their identity, but they can remain pseudonymous towards other users.

This is marginally better than a “real names” policy - where your verified name is made public - but only marginally so, because you still need to hand over “real” identity documents to a website.

I suspect that people who remain pseudonymous for their own protection will be rightly wary of the creation of these new, massive, datasets, which are likely to be attractive to hackers and rogue employees alike.

Whistleblowers?

Sex workers?

Victims of crime?

I note that the proposal does not demand handing over government-issued ID, although that is listed as an option. The SMS-code-to-mobile option may be less intrusive / risky.

I am already identifiable online. My photo is readily available. My name is visible, as is what I do for a living.

I just self-censor what I post about, by choice, for personal reasons (not professional ones; my professional obligations of secrecy are sacrosanct).

The burdens imposed by this requirement will hit others far more heavily than they would hit me.

And yet they would not, according to Twitter’s own research, address the problem of online abuse.

4) The content filter system

I welcome the idea of the content filter system, so that people can have a degree of control over what they see when they access a social media site.

This brings with it its own problems - the filter bubble effect, for example, if all you see are views supporting your own position.

But, on balance, I think that giving users control is to be welcomed.

This is not broadcasting, with a duty of balance, but communications between private individuals.

The focus is on lawful content, which a user doesn’t want to see.

I don’t think anyone should be forced to see things they don’t want to see (even if, in some cases, broader reading / awareness may also have beneficial effects).

But how will this work in practice?

To my mind, this only works if users can choose what goes on their own personal blocking lists. And I am unsure how that would work in practice, as I doubt that automated content classification is sufficiently sophisticated.
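
To make the point concrete, here is a minimal, entirely hypothetical sketch (in Python - no platform works like this, so far as I know) of a per-user blocklist built from keywords the user has chosen themselves. Its crudeness is rather the point: a post that is plainly about football, but never actually says "football", sails straight through.

```python
import re

class UserFilter:
    """A toy, per-user filter: each user chooses their own blocked terms."""

    def __init__(self, blocked_terms):
        # Whole-word, case-insensitive matching. Purely illustrative.
        self.patterns = [
            re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
            for term in blocked_terms
        ]

    def allows(self, post_text: str) -> bool:
        """True if the post contains none of this user's blocked terms."""
        return not any(p.search(post_text) for p in self.patterns)


my_filter = UserFilter(["football", "politics"])
print(my_filter.allows("More football punditry"))           # False - blocked, as the user asked
print(my_filter.allows("What a goal in the 90th minute!"))  # True  - clearly football, but it slips through
```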

When the government refers to “any legal but harmful content”, could I choose to block, for example, content with a particular political leaning, or content expounding an ideology which I consider harmful? Or is that anti-democratic (even though it is my choice to do so)?

Could I demand to block all content which was in favour of covid vaccinations, if I consider that to be harmful? (I do not.)

What about abusive or offensive comments from a politician?

Or is it going to be a far more basic system, essentially letting users choose to block nudity, profanity, and racist content?

Take profanity filters. You’d have thought that would be easy. But what about innocent words and place names which just happen to contain rude strings - the classic “Scunthorpe problem”?

Is it really that easy?
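
Here is a deliberately naive sketch of the sort of substring filter people actually write (the word list is illustrative only - Ofcom’s real list is linked below). It cheerfully flags “rehearse”, and the moment you tighten it to whole words only, a deliberate misspelling walks straight past it.

```python
PROFANITIES = {"arse", "flaps"}  # illustrative only; not Ofcom's list

def looks_profane(text: str) -> bool:
    # Naive substring matching - the "Scunthorpe problem" in miniature.
    lowered = text.lower()
    return any(word in lowered for word in PROFANITIES)

print(looks_profane("Time to rehearse my lines"))  # True  - "rehEARSE" is a false positive
print(looks_profane("what an ar5e"))               # False - deliberate misspelling slips through
```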

(If this is all a bit heavy, and you just want to see a list of words which Ofcom regards as potentially offensive, here you go. And don’t say “flaps”, as Ofcom reckons it is “Strong language, generally unacceptable pre-watershed”. So there.)

Don’t want to see sex-related content?

Can a platform distinguish sex from nudity?

What about posts mentioning seggs, or s3x? Or the myriad other terms and innuendos?

🍑🍆?

Can a computer really decide if “I want to put my carrot in your catflap” is sex-related or not? Or just the kind of thing you’d hear in a pantomime?
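
To show what I mean, here is another toy sketch (again entirely hypothetical), this time with a basic “leetspeak” normalisation pass bolted on. It catches “s3x”, but “seggs”, the emoji, and the pantomime line all sail through - and a filter aggressive enough to catch them would sweep up plenty of innocent gardening chat too.

```python
# Map a few common character substitutions back to letters before matching.
LEET_MAP = str.maketrans({"3": "e", "1": "i", "0": "o", "$": "s"})
SEX_TERMS = {"sex"}

def mentions_sex(text: str) -> bool:
    normalised = text.lower().translate(LEET_MAP)
    return any(term in normalised for term in SEX_TERMS)

print(mentions_sex("let's talk about s3x"))    # True  - caught by the normalisation pass
print(mentions_sex("let's talk about seggs"))  # False - euphemism slips through
print(mentions_sex("🍑🍆?"))                    # False - emoji innuendo slips through
print(mentions_sex("I want to put my carrot in your catflap"))  # False - pantomime, or not?
```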

If it is to be left to platforms to define what the “certain topics” are - or, worse, the government - it might be easier to achieve, technically.

Are platforms the right people to determine what is “lawful but harmful”? I don’t think so. They’re technical service providers, not legal experts, let alone moral arbiters.

However, I wonder if providers will resort to overblocking, in an attempt to ensure that people do not see things which they have asked to be suppressed.