Former UK Supreme Court Judge Calls Out Online Safety Bill As Harmful By Itself
from the speak-out-while-you-can dept
We have discussed at great length the many problems of the UK’s Online Safety Bill, in particular how it will be a disaster for the open internet. Unfortunately, prominent politicians seem to think that the Online Safety Bill will be a sort of magic wand that will make the “bad stuff” online disappear automatically (it won’t).
It appears that more people — and prominent ones at that — are now speaking out against the bill. Former UK Supreme Court judge Jonathan Sumption has published a piece in the Spectator, the old-school UK political commentary magazine that is generally seen as quite conservative. Sumption warns that the Online Safety Bill will, itself, be quite harmful.
The real vice of the bill is that its provisions are not limited to material capable of being defined and identified. It creates a new category of speech which is legal but ‘harmful’. The range of material covered is almost infinite, the only limitation being that it must be liable to cause ‘harm’ to some people. Unfortunately, that is not much of a limitation. Harm is defined in the bill in circular language of stratospheric vagueness. It means any ‘physical or psychological harm’. As if that were not general enough, ‘harm’ also extends to anything that may increase the likelihood of someone acting in a way that is harmful to themselves, either because they have encountered it on the internet or because someone has told them about it.
This test is almost entirely subjective. Many things which are harmless to the overwhelming majority of users may be harmful to sufficiently sensitive, fearful or vulnerable minorities, or may be presented as such by manipulative pressure groups. At a time when even universities are warning adult students against exposure to material such as Chaucer with his rumbustious references to sex, or historical or literary material dealing with slavery or other forms of cruelty, the harmful propensity of any material whatever is a matter of opinion. It will vary from one internet user to the next.
While I don’t necessarily agree with all of his characterization, there is something fundamental in here that I wish so many other people understood: this is all relative. Some people find certain content offensive. Others find it benign. There is no objective standard for “harmful” speech, especially when (as with the UK bill) it includes content that the law itself admits remains “legal.”
As Sumption notes, making these kinds of calls at scale, when no one can even agree what the content is, is bound to be a disaster (and, for what it’s worth, he underplays the scale here: he shows how much happens every minute, but it’s even crazier when you realize how much content this means per hour or per day, and how impossible it would be to monitor it all).
If the bill is passed in its current form, internet giants will have to identify categories of material which are potentially harmful to adults and provide them with options to cut it out or alert them to its potentially harmful nature. This is easier said than done. The internet is vast. At the last count, 300,000 status updates are uploaded to Facebook every minute, with 500,000 comments left that same minute. YouTube adds 500 hours of videos every minute. Faced with the need to find unidentifiable categories of material liable to inflict unidentifiable categories of harm on unidentifiable categories of people, and threatened with criminal sanctions and enormous regulatory fines (up to 10 per cent of global revenue), what is a media company to do?
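To put those per-minute figures in daily terms, here’s a quick back-of-the-envelope calculation using only the numbers Sumption quotes (no other data assumed):

```python
# Scale Sumption's per-minute figures up to a single day.
MINUTES_PER_DAY = 60 * 24  # 1,440 minutes

facebook_updates_per_min = 300_000
facebook_comments_per_min = 500_000
youtube_video_hours_per_min = 500

print(f"Facebook updates per day:  {facebook_updates_per_min * MINUTES_PER_DAY:,}")
# → 432,000,000
print(f"Facebook comments per day: {facebook_comments_per_min * MINUTES_PER_DAY:,}")
# → 720,000,000
print(f"YouTube video hours added per day: {youtube_video_hours_per_min * MINUTES_PER_DAY:,}")
# → 720,000
```

That’s over 80 years of new video appearing on YouTube every single day, before you even get to the hundreds of millions of posts and comments elsewhere.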
He also has a response to those who insist this can all be handled by algorithms. It can be handled by algorithms if you’re happy to accept a huge number of errors — both false positives and false negatives.
The problem is aggravated by the inevitable use of what the bill calls ‘content moderation technology’, i.e. algorithms. They are necessarily indiscriminate because they operate by reference to trigger text or images. They are insensitive to context. They do not cater for nuance or irony. They cannot distinguish between mischief-making and serious debate. They will be programmed to err on the side of caution. The pious injunctions in the bill to protect ‘content of democratic importance’ and ‘journalistic content’ and to ‘have regard to’ the implications for privacy and freedom of expression are unlikely to make much difference.
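Sumption’s point about context-blindness is easy to demonstrate. Here is a deliberately naive, hypothetical keyword filter (the trigger list and example posts are illustrative inventions, not drawn from any real moderation system), showing how trigger-term matching flags a news report and a recovery story right alongside an actual threat, while missing abuse that uses no trigger words at all:

```python
# A deliberately naive keyword filter of the kind Sumption describes:
# it matches trigger terms with no understanding of context, nuance, or irony.
TRIGGER_TERMS = {"attack", "overdose", "violence"}  # illustrative list only

def naive_filter(post: str) -> bool:
    """Return True if the post would be flagged as 'harmful'."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return not words.isdisjoint(TRIGGER_TERMS)

# A news report, a support-group post, and a real threat all trip the filter:
print(naive_filter("Survivors describe the attack to reporters."))    # True
print(naive_filter("Ten years since my overdose. Ask me anything."))  # True
print(naive_filter("I will attack you."))                             # True
# ...while coded or novel abuse sails through (a false negative):
print(naive_filter("You know what you deserve."))                     # False
```

Real moderation systems are far more sophisticated than this sketch, but the underlying trade-off is the same: tune the filter to catch more genuine abuse and it sweeps in more legitimate speech; tune it to spare legitimate speech and more abuse gets through.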
As he notes, the entire bill is patronizing (of course, he spells it “patronising”), but that’s kind of the nature of politics today. Paternalistic and patronizing.
There’s more good stuff in the article, and it’s worth reading. Hopefully UK politicians are actually paying attention.