Three problems with the Age Assurance (Minimum Standards) Bill for keeping children safe online

There’s plenty of discussion on Twitter about the Age Assurance (Minimum Standards) Bill currently going through Parliament.

I want to keep children safe online, and so here are three problems with the bill which, IMHO, jeopardise that.

The Bill foreshadows the increased collection of personal data merely to browse the web

This is not a problem with the bill itself, as the bill does not require anyone to use age assurance systems, but rather with the broader activity of which this bill is part.

I am unconvinced that conditioning people - of all ages, but especially children - to hand over information before visiting websites is the right way to go.

I am particularly perturbed by approaches which use camera footage/stills for age assurance purposes.

If you want to keep children safe online, why condition children - or adults - to hand over personal data to strangers, or demand that they give camera access to third parties?

The requirements around privacy and data breaches add nothing

One of my criticisms of Part 3 of the Digital Economy Act 2017 is the lack of privacy safeguards.

This bill suffers from the same problem.

It references privacy and data protection law:

The minimum standards must ensure that any age assurance system … protects the privacy of users in accordance with applicable laws, including the data protection laws and obligations under the treaties set out at paragraph (k) … [and] is compatible with data protection legislation;

but that’s it.

It says that Ofcom must produce “minimum standards”, but these are merely required to say that operators of age assurance systems must comply with law which already binds them, if it binds them at all. Does the bill oblige them to comply with data protection laws if they are not already obliged to do so? Not on my reading of it.

What would the impact be if the references to privacy and data protection were not in the bill? As far as I can tell, there would be no impact at all.

Am I missing something here? Is it just “privacy washing”?

This seems to be a missed opportunity* to impose additional standards on providers of age assurance services: standards which go beyond the requirements of general purpose data protection law, reflect the risk those services pose, and carry more stringent sanctions for providers which fail to meet them.

I feel the same about the reference to the Equality Act 2010. Providers already have to comply with this.

If you wanted to keep children safe online, why not impose heightened standards, and greater responsibilities, on age assurance providers?

*Of course, imposing more responsibility, or higher standards, might not be consistent with a state position of encouraging a burgeoning “safety tech” sector in the UK. If the state wants to promote that, safest not to impose stringent liability on them for their failures.

It would permit age assurance services to overblock

Paragraph 2(2)(g) is a worrying one:

The minimum standards must ensure that any age assurance system … does not unduly restrict access of children to services to which they should reasonably have access, for example, news, health and education services

There are two bits which jump out at me: “does not unduly restrict” and “reasonably have access”.

“does not unduly restrict”

Based on the wording of the bill, an age assurance system can restrict children’s access to services to which they should have access, as long as it does not do so “unduly”.

What does that mean? Why is any form of restriction of access to health services appropriate? How can that be justified, from a fundamental rights point of view?

One would have hoped the bill would reflect the child’s freedom to impart and receive information, rather than deviate from it by permitting overblocking so long as it is not “undue”.

Where is the positive obligation on providers of age assurance systems to prevent this kind of infringement?

“services to which they should reasonably have access”

bows for the entry of the Lord Chamberlain

This bit of the bill hands Ofcom - as the body with enforcement powers under the bill - the power to decide what knowledge is suitable for children.

If Ofcom determines that content is not the sort of content to which children should reasonably have access, then blocking it is, by definition, not overblocking.

Appointing Ofcom in loco parentis hands it a considerable power over our children’s lives, and over their access to information.

I am doubtful that a state-appointed body should be able to decide what is, and is not, suitable for any given child.

Moreover, the effect is to treat children as a homogeneous group. In reality, even within a narrow age range, different children have different sensibilities and sensitivities. They are affected by different things. Their capability and desire for learning will be different.

The bill makes no provision for children to access material suitable for their situation, and instead lumps them all together: the test is whether “children” should reasonably have access, not any given child.

Were a parent, or someone else who knew the child well, making the decision, they may well determine that Site A is fine (even though it might not be suitable for other children of the same age), or is fine if accessed under supervision, but that Site B is not fine (even if other children may be fine with it).

Welcome to a web in which a parent cannot guide their child’s learning, development, and exploration, and in which the child’s experience is instead enforced by contextless age assurance robots attempting to comply with the communications regulator’s edicts.

If you wanted to keep children safe online, why not protect their fundamental right to impart and receive information by requiring age assurance providers to prevent overblocking?

(I don’t have a solution for the “Ofcom as the parent” problem - I think it’s unfixable.)