Supervision is not the same as total surveillance

[Image: red binoculars1]

I wrote a post on decoded.legal’s blog earlier this week, about “Unpicking the ‘making children as safe as they are offline’ fallacy”.

The gist of the piece, in case you are reading this first, is that, despite some policymakers’ assertions, the offline world is not some child-safe utopia, in which humans have done all they possibly could to ensure that children are safe.

Even spaces designed for children — I use the example of a local playground — hardly merit the description “safe”.

In particular, the offline world is not “safe” for unsupervised children. Even that playground carries a sign that it is designed only for use under supervision.

Steve Hill wrote a thoughtful post in response. I think it’s a kind of rebuttal, or possibly a reflection. I’m not sure.

But I found it interesting, insofar as I think I disagreed with almost all of it, so I thought it worth spending a few words digging into why I disagree with so much of it.

“Supervision” is not the same as “surveillance”

The element of Steve’s post which stood out to me the most was that, in removing support for what he calls “inspection certificates”:

the main corporations that control online platforms have unilaterally decided that parents and schools shouldn’t be allowed to supervise their children.

(The “main corporations” are Google, Facebook, and Twitter, based on comments later in the post.)

“Inspection certificates”

I am not 100% sure, but I have a feeling that “inspection certificate” is a seemingly benign term for infrastructure designed to circumvent the encryption offered by a site, allowing a third party not only to surveil traffic to and from the site, but also to manipulate it.

A typical man-in-the-middle attack using an “inspection certificate” relies on forcing your target’s traffic through a server which you control.

While they think they are connecting to, say, Facebook, they are actually connecting to your server: your server falsely claims to be Facebook, terminating the encryption from your target’s phone and taking a look at the request your target is trying to make to Facebook. Your server then pretends to Facebook that it is your target’s phone, relaying the request on to it.

It does the same in reverse: Facebook thinks it is talking to your target’s phone, but actually you, sitting in the middle, can see what Facebook is sending back, and you even have the ability to modify it.

In most implementations, your target will never know that they are not talking directly to Facebook.

It’s not just Facebook, of course. If someone has built the infrastructure to intercept and inspect your communications in this way, they can look at your communications with your bank, the content of your email (and modify it!), and so on.
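To make the shape of the attack concrete, here is a deliberately simplified sketch in Python: plain TCP with no TLS, and with placeholder addresses, showing only the “sit in the middle and relay both directions” pattern described above. A real inspection deployment does the same thing one layer up, terminating TLS with a certificate the target’s device has been configured to trust.

```python
import socket
import threading

# A minimal, illustrative relay: the "middle" box described above, stripped of TLS.
# Both addresses are placeholders; in a real deployment the target's traffic is
# forced here by network or device configuration.
LISTEN_ADDR = ("127.0.0.1", 8080)      # where the target's traffic arrives
UPSTREAM_ADDR = ("example.com", 80)    # the site the target thinks it is talking to

def pump(src, dst, label):
    """Copy bytes from src to dst, inspecting (here: printing) everything seen."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(f"[{label}] {data!r}")   # the middle box can read, or modify, this
        dst.sendall(data)

def handle(client):
    """Connect to the real site and relay traffic in both directions."""
    upstream = socket.create_connection(UPSTREAM_ADDR)
    threading.Thread(target=pump, args=(client, upstream, "target->site"), daemon=True).start()
    pump(upstream, client, "site->target")

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN_ADDR)
server.listen()
while True:
    conn, _addr = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

Everything passing through pump() can be read, logged, or rewritten, which is the whole point, and the whole problem.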

So it’s hardly a surprise that an organisation might want to inhibit this, by changing the way in which it encrypts its traffic, or that an operating system manufacturer might want to close off the ability for a third party to surveil all communications to and from a device.
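One well-known way of doing that, and I am assuming here that this is the sort of change being objected to, is certificate pinning: the app knows in advance which certificate (or public key) the genuine service presents, and refuses to talk if it sees anything else, even if the substitute certificate chains up to an authority the device has been told to trust. A minimal sketch, with a placeholder fingerprint value:

```python
import hashlib
import socket
import ssl

# Placeholder: the SHA-256 fingerprint of the certificate the app expects to see.
PINNED_FINGERPRINT = "replace-with-known-good-sha256-hex"

def leaf_fingerprint(host: str, port: int = 443) -> str:
    """Return the SHA-256 fingerprint of the certificate presented by host."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

if leaf_fingerprint("example.com") != PINNED_FINGERPRINT:
    # The chain may well validate (the "inspection" CA is in the trust store),
    # but the certificate is not the one we pinned: treat it as interception.
    raise ssl.SSLError("certificate does not match pin; possible interception")
```

Once an app behaves like this, an “inspection certificate” installed on the device no longer yields a readable copy of the traffic: the app notices the substitution and refuses to continue.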

The conflation of “supervision” and “surveillance”

The purpose of an “inspection certificate” is to allow unfettered access to the target’s communications: to defeat the encryption used to inhibit surveillance (or interference), and to do exactly that.

But to argue that Google et al “decided that parents and schools shouldn’t be allowed to supervise their children” when they attempted to plug this security gap is to conflate supervision with constant surveillance.

“Supervision” is not the same as “constant surveillance”. Nor even “capable of constant surveillance”.

When we talk about “supervising” a child offline, we don’t mean subjecting their every move to automated inspection and oversight.

It is, I suspect — I hope — uncommon to fit a child with a GPS tracker and monitor their precise location. (Although, with smartphones, that’s a distinct possibility.)

Nor do we insist on children wearing always-on cameras and microphones, capable of recording everything they do for subsequent parental review, or even live access.

Bluntly, we walk to school with younger children, and teach them how to cross a road, to stand away from the kerb, and to be mindful that cars may not stop for red lights or crossings. We teach them that the man who says hello and is friendly is a stranger and, while they may not be dangerous, you’ve no way of knowing, so treat them as if they are. We don’t fit them with a camera and microphone and let them go off.

“Supervision” as a collection of supportive behaviours

What is appropriate will, of course, vary from child to child, and probably from responsible adult to responsible adult, but I venture that “supervising” a child entails more than merely spying on them.

Let’s take the example of the playground again. (The content in parentheses after each stage is an approximation of what you might do online.)

For the youngest children, you may not let them go at all. (No devices.)

When they are a bit older, you may take them, but only put them on the swings suitable for them — the ones which are built for them, with the safety measures in mind. And you probably don’t shove them as hard as you can — you play gently, building it up and seeing what they enjoy and what they dislike.

(DNS-based whitelisting of sites you’re comfortable for them to access, letting them use a device while sitting with you. If you’re not comfortable with what a site offers, or how it offers it, it does not make it onto the whitelist.)
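Purely by way of illustration (the domains are invented examples, not recommendations), the decision an allowlist-based resolver or filtering proxy makes boils down to something like this:

```python
# Toy allowlist check: resolve a name only if it, or a parent domain,
# is on the list of sites you have approved. Example domains only.
ALLOWED = {"bbc.co.uk", "wikipedia.org", "scratch.mit.edu"}

def is_allowed(queried_name: str) -> bool:
    """True if the queried name is an allowed domain or a subdomain of one."""
    name = queried_name.rstrip(".").lower()
    return any(name == ok or name.endswith("." + ok) for ok in ALLOWED)

for name in ("www.bbc.co.uk", "somewhere-else.example"):
    print(name, "->", "resolve" if is_allowed(name) else "refuse")
```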

When they’re a bit older still, perhaps you start to help them climb on the climbing frame, teaching them how to climb safely, with you standing behind them, ready to catch them if they fall, or if they think they are going to fall. Perhaps you climb up behind them as they go higher, so that they can experience the climb without the risk associated with a fall.

(A relaxation of the whitelisting, to introduce a broader range of sites, but still with you using the device with them, coupled with education about how to operate safely. You talk to them about the possibility of bad things happening, and how to lessen the risk, and what to do if something frightening happens, but in a controlled environment since you are there with them anyway. You start introducing the idea of “playing nicely with others”, and respecting boundaries, and teaching consent — online safety is not just protecting your child from others but teaching your child to be a good online citizen themselves.)

And when they reach the top, you may slide down with them, with them on your lap. Once their confidence and their aptitude are sufficient — when they have the skills to climb safely themselves — you may let them climb to the top of the slide and slide down on their own, with you at the bottom to stop them shooting off the end, or to reassure them if they get to the top but are too scared to let go.

(Adding more sites to the whitelist, in line with their sensitivities and preferences. They might get to use messaging services while sitting next to you. Perhaps you pick apps together with them.)

As they get older still, you might let them play on all the toys in a defined area while you sit on a bench, and keep an eye on them from a distance.

(Still whitelisted-only access, but perhaps they can use their device in the lounge but not necessarily sitting next to you, or they can use it in the kitchen while you cook dinner. They might get to ask you to install a new app, which you do when you are comfortable with what it does / the access it affords. There’s a balancing act between supporting and looking after them, and letting them explore: if you are too restrictive, children may be further incentivised to work around the constraints you have imposed.)

In time, you might let them walk to the playground themselves, or go with their friends. But you set boundaries, and if you find them not in the playground, but hanging around outside an off licence, or playing “chicken” with cars, you decide how to better safeguard them — perhaps through education, perhaps through reducing what they are able to do without you going along with them, and so on, until you feel that balance has been restored.

(You might switch from a whitelisting to a blacklisting approach, giving them broader access but attempting to restrict access to categories of sites which you consider unsuitable for their age / sensitivities. When you first do this, you spend time with them browsing around, as they get used to operating in a different environment. You continue to talk with them about online safety, and about talking to you if they find things which are disturbing. You talk about the breadth of information and media available online. You might need to tighten up a blacklist. If you’re not comfortable with what a site offers, or how it offers it, it is blacklisted until you are comfortable that your child can deal with it.)
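At this stage the logic simply inverts: resolve everything unless a name falls into a category you have blocked. A toy sketch, with invented category data (real deployments rely on third-party categorisation feeds):

```python
from typing import Optional

# Invented, illustrative category data; a real blocklist would come from a
# categorisation feed and be far larger.
BLOCKED = {
    "gambling": {"casino.example"},
    "adult": {"adult-site.example"},
}

def blocked_category(queried_name: str) -> Optional[str]:
    """Return the category blocking this name, or None if it may be resolved."""
    name = queried_name.rstrip(".").lower()
    for category, domains in BLOCKED.items():
        if any(name == d or name.endswith("." + d) for d in domains):
            return category
    return None

for name in ("news.example.org", "poker.casino.example"):
    category = blocked_category(name)
    print(name, "->", f"blocked ({category})" if category else "allowed")
```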

What you are unlikely to do is sit at home in front of your monitoring station, tracking their every move because you’ve devised a means of doing so, rather than supervising and educating them.

Stopping man-in-the-middle attacks “unilaterally” and “without consultation”

The second notion I found particularly interesting was that the private spaces on these companies’ platforms (including the fixing of weaknesses in their own apps), and the operating systems they develop, should not be theirs to control, and that decisions as to how they develop their services and products should not be theirs to make:

[those corporations have] imposed new privacy technologies and policies upon the public without consultation. The problem is not the technologies themselves, but that they are unilaterally imposed on users rather than giving them the choice.

In other words, in inhibiting man-in-the-middle attacks unilaterally, Google et al somehow misbehaved.

Private companies, public spaces?

Dr Carolina Are has posited the notion of corpo-civic spaces.

The gist of Dr Are’s thesis is:

the concentration of ownership of the main social networking platforms in too few hands is raising the above stated concerns about social media moderation … and that such concerns need to be addressed from the position society currently finds itself in — namely, using a space with civic characteristics largely run by private companies at a time where the entire social space has moved online.

What makes her position especially interesting is that some of the content which Dr Are thinks should not be subject to moderation is likely to be the kind of content which others would want moderating / restricting from view. (Spoiler: Dr Are is a talented pole dancer.)

I highly recommend her thoughtful paper, even though I struggle to agree with its central tenet. My private server may be popular, but it remains my private server: even if all my friends are there too, popularity alone does not turn my private venue into community property. In my view.

I think the same applies to the software layer, such as Android, although I’ve seen less argument around this (in terms of online safety / suppression of surveillance measures).

(A discussion for another day is that, if a company is to be subjected to what amounts to forced nationalisation, presumably the company’s owners are entitled to a compensatory payment for the property which is being wrested from them?)

An obligation of consultation which does not exist offline

Second, Steve’s argument appears to be that these companies should have consulted parents of some users of the services:

Google, Facebook, Twitter, et-al have imposed new privacy technologies and policies upon the public without consultation

Presumably, the expectation is that they should be required to pay heed to the outcome of those consultations, since otherwise an obligation to consult is essentially robbed of any value.

Leaving aside the usurpation of a private organisation’s autonomy, this suggestion demands something of online companies which is not demanded of companies acting offline.

Do I get a say when my favourite radio station changes its programming, or a restaurant changes its menu? No.

Or if a software developer decides to drop certain functionality from its next release, can I force them to listen to me and retain it? No.

What if a nursery had a camera system installed, which allowed parents to log in and watch their children at play, but decided that it was insufficiently secure, and so moved to disable it – is the nursery obliged to consult the affected parents, and react in line with their wishes? No.

And when it comes to governance, I certainly don’t get the right to decide exactly what laws I want offline, so it’s unclear why we should get that power online.

(If what Steve means is that it should have been left to responsible adults to decide whether or not they want encryption which is MitM’able or not, they do, of course, have that choice: they are not required to let their children send traffic to Facebook or Twitter, if they don’t agree with the way they operate, nor are they required to adopt the Android operating system. There may be consequences of this, but there are consequences with every single choice in life. To breastfeed or not to breastfeed. To inoculate or not to inoculate. To take a child swimming or not. To give them a mobile phone or not, or when to do it. To let them hang out at their friend’s house. And so on.)

Surveillance companies, unilateral decisions, and consultations

Third, I wonder if there is a degree of double standards at play here: I cannot help but question whether the vendors of child surveillance systems operate with this degree of transparency and co-operation.

Can these vendors show that those most affected by their software — the children they surveil — were consulted?

Were they asked what features should be in the monitoring software? What controls should be in place?

Even leaving children out of it, were all relevant responsible adults consulted? Were they informed about the data which would be collected, how it would be used, with whom it would be shared or to whom it would be accessible, and under what conditions?

Do children have a consequence-free option of not being subjected to these surveillance measures?

If not, it seems a bit rich to demand these things of others.

An interoperability framework, or backdoors-by-design and -by-default?

Lastly, I was struck by Steve’s comment that

the ICO should be setting out a framework for websites and apps to interoperate with centralised systems operated by parents and schools.

This is an interesting point, and I can see that, phrased in this way, it could be attractive to policy makers. After all, what is there not to love about interoperability?

Four thoughts came to mind.

(I’m leaving aside the fact that the ICO is not a child safety regulator. It’s the UK’s information rights regulator. It is not the ICO’s role to set out rules for coordinated, interoperable surveillance systems, or to take a policy stance on online safety more generally, and I suspect it would be acting ultra vires if it did so.)

It renders service providers beholden to surveillance vendors

If companies are required to interoperate with third-party surveillance systems, they are permitted to develop their services only in a manner approved by, or at least compatible with, someone who sells monitoring systems.

They are forced to design their services with the capabilities, limitations, and goals of other commercial organisations in mind.

Want to add a new feature? Only if they can find a way to ensure that someone else’s software will work with it.

Backdoors-by-design and -by-default

It enshrines the notion that, by default, services should come with backdoors. That three massive online services — and presumably all, or many, others — should not permit private communications. That the digital equivalent of whispering in the playground, or passing a surreptitious note to a friend, should be banned.

It is left unclear what happens to children who need to have private communications. What about that call to Childline, or the message to a friend about an abusive responsible adult — communications in respect of which surveillance could be life-threatening?

Can you imagine someone saying “the phone system must ensure that any parent can listen in to their child’s phone calls at any time”?

Well, okay, yes, I can imagine someone saying it. But I’m very sceptical that anyone of sound mind would think it a good idea, let alone attempt to pass legislation to make it mandatory.

Backdoors only for the good guys

If there’s a dedicated toolkit for enabling surveillance, how do we get comfort that only the authorised responsible adult can intercept their child’s messages and surveil their every move online?

How do we know that there are not myriad others doing exactly the same, exploiting the capability to observe other children?

In demanding a backdoor for some people, we create a vulnerability that could be exploited by anyone, and so we impose — silently or otherwise — an additional burden, of attempting to secure the very thing which, by design, weakens security.

Lack of consistency with other policy objectives

I’m thinking particularly of the telecoms security review, and the likely imposition of more stringent measures on Internet and telecoms providers to increase the security and resilience of networks and services in the UK.

Demanding increased security with one hand, while compelling device manufacturers and online service providers to weaken security with the other, seems like an inconsistent approach to policy in this area.


  1. This image is made available through the CC0 Public Domain Dedication