from the bad-regulations dept
This year seems to be the year in which governments all over the globe really, really want to regulate the internet. And they’re doing a ridiculously dumb job of it. We’ve talked a lot about the EU, with the Copyright Directive and now the Terrorist Content Regulation. And then there’s Australia with its anti-encryption law and its “abhorrent content” law. India has already passed a few bad laws regarding the internet and is discussing a few more. Then there’s the UK, Germany, South Korea, Singapore, Thailand, Cameroon, etc. etc. etc. You get the idea.
Oh, and certainly, the US is considering some really bad ideas as well.
When you look at what “problem” all of these laws are trying to solve, it can basically be boiled down to “people do bad things on the internet, and we need to regulate the internet because of it.” This is problematic to me for a variety of reasons, in part because it seems to be regulating the wrong party. We should, ideally, be going after the people doing the bad things, rather than the tools and services they are using to do the bad things (or to merely promote the bad things they’re doing). However, there is an argument — not one that I wholly buy into — that one reasonable way to regulate is to focus less on which party is actually doing the bad thing, and more on which party is best positioned to minimize the harm of the bad thing. And it’s that theory of regulation (applied stupidly) that is behind much of the regulatory theory on the internet these days.
Well, there’s also a second theory behind many of the regulatory approaches, and it’s “Google and Facebook are big and bad, so anything that punishes them is good regulation”. This makes even less sense to me than the other approach, but it is certainly driving a lot of the thinking, at least in the EU (and possibly the US).
Combine those two driving theories for regulating the internet and you’ve got a pretty big mess. They seem to be taking a sledgehammer to huge parts of the internet, rather than looking for narrow, targeted approaches. And, on top of that, in focusing so much on Google and Facebook, many of these laws are written solely with those two platforms in mind, with no thought to how they impact every other internet company, many of which operate on a very different basis.
Earlier this year, I wrote up my thoughts on what sort of regulatory approach would really “break up” big tech while preserving an open internet, but it’s an approach that would require a very big shift in mindsets (one I’m still hoping will occur).
However, Ben Thompson has taken a much more practical approach to thinking through regulating the internet. He, like me, is skeptical of most of these attempts to regulate the internet, but recognizing that it’s absolutely going to happen no matter how skeptical we are, he is proposing a framework for thinking about regulating the internet, in a way that would (hopefully) minimize the worst outcomes from the approaches being used today.
You should read the whole thing to understand the thinking, the background, and the approach, but the key aspect of Thompson’s framework is recognizing that there are different kinds of internet companies — and that’s true not just up and down the stack, but across the different kinds of services. His hope is that if regulations were more narrowly targeted at the kinds of companies they actually fit, we’d see a lot less collateral damage from trying to shove a square regulatory approach through a round internet service.
Another key to his approach is a more modern update to the common “free as in speech v. free as in beer” concept that everyone in the open source world is familiar with. Ben talks about a third option that has been discussed for decades, which is “free as in puppy” — meaning something that you get for free, but which then has an ongoing cost in terms of maintaining the free thing you got.
Most in the West agree, at least in theory, with the idea that the Internet should preserve “free as in speech”; China in particular represents a cautionary tale as to how technology can be leveraged in the opposite direction. The question that should be asked, though, is if preserving “free as in speech” should also mean preserving “free as in beer.”
Specifically, Facebook and YouTube offer “free as in speech” in conjunction with “free as in beer”: content can be created and proliferated without any responsibility, including cost. Might it be better if content that society deemed problematic were still “free as in speech”, but also “free as in puppy”: that is, with costs to the supplier that aligned with the costs to society?
With that premise, he suggests a way to better target any potential platform regulation.
In theory, this lets various countries who believe there are certain problems on the internet more narrowly target their regulations without harming other parts of the internet:
This distinct categorization is critical to developing regulation that actually addresses problems without adverse side effects. Australia, for example, has no need to be concerned about shared hosting sites, but rather Facebook and YouTube; similarly, Europe wants to rein in tech giants without (and I will give the E.U. the benefit of the doubt here) burdening small online businesses with massive amounts of red tape. And, from a theoretical perspective, the appropriate place for regulation is where there is market failure; constraining the application to that failure is what is so difficult.
Please don’t comment on this without first reading Ben’s entire piece, as it gets into a lot more detail. He very readily admits that this doesn’t answer all the questions (and, indeed, likely creates a bunch of new ones).
I will admit that I’m not convinced by this model, but I do appreciate that it’s given me a lot to think about. At the very least, targeting just the ad-supported platforms for regulation solves two problems: (1) the misaligned incentives of ad-supported platforms to consider the wider societal impact of the platform, and (2) the sledgehammer approach of regulating all internet platforms, no matter what type and where in the internet stack they reside, by focusing more narrowly just at the application level and just at a particular type of service. And, frankly, this kind of approach could potentially move us towards that world of “protocols, not platforms” that I envision (a more regulated ad-supported platform world might push companies to explore non-advertising based business models).
I still have lots of concerns, however. For all of the complaints about what Google and Facebook have done with an ad-supported model, we should be willing to admit that an ad-supported model has created some incredibly powerful services that have done amazing things for many, many people. Everyone focuses on the negatives — which exist — but we shouldn’t ignore how much of the good stuff we’ve gotten because of an internet built on the back of advertising. Can it be improved? Absolutely. But targeting internet advertising as “the problem” still feels too broad to me (and, in fact, I think Ben would likely agree on that point). If there must be a regulatory approach, it should be targeted not just by the nature of the platform, but around the specific and articulated harm it is trying to solve. At least that way, we can weigh the harms such a law might mitigate against the good aspects it might hinder, and be better able to judge whether or not the regulatory approach makes sense.
I’m still skeptical that most plans to regulate the internet will do a very good job of narrowly targeting actual harms (and to do so without throwing away lots of good stuff), but since we’re going to be having lots of discussions around these regulations in the coming weeks, months, and years, we might as well start having the discussion of how we should view and analyze these proposed laws. And, on that front, Ben’s contribution is a useful way of thinking about these things.