How To Think About Online Ads And Section 230
from the oversimplification-avoidance dept
There’s been a lot of consternation about online ads, sometimes even for good reason. The problem is that not all of the criticism is sound or well-directed. Worse, the antipathy towards ad tech, whether or not it is well-founded, is coalescing into yet more unwise, and undeserved, attacks on Section 230 and other expressive discretion the First Amendment protects. If these attacks are ultimately successful, none of the problems currently lamented will be solved, but plenty of new ones will be created.
As always, effectively addressing actual policy challenges first requires a better understanding of what these challenges are. The reality is that there are at least three separate issues raised by online ads: those related to ad content itself, those related to audience targeting, and those related to audience tracking. They each require their own policy response, and, as it happens, none of those policy responses calls for doing anything to change Section 230. In fact, to the extent that Section 230 is even relevant, the best policy response will always require keeping it intact.
With regard to ad content, Section 230 applies, and should apply, to the platforms that run advertiser-supplied ads for the same reasons it applies, and should apply, to the platforms hosting the other sorts of content created by users. After all, ad content is, in essence, just another form of user-generated content (in fact, sometimes it’s exactly like other forms of user content). And, as such, the principles behind having Section 230 apply to platforms hosting user-generated content in general also apply, and need to apply, here.
For one thing, as with ordinary user-generated content, platforms are not going to be able to police all the ad content that may run on their site. One important benefit of online advertising versus offline is that it enables far more entities to advertise to far larger audiences than they would be able to afford in the offline space. Online ads may therefore sometimes be cheesy, low-budget affairs, but it’s ultimately good for the consumer if it’s not just large, well-resourced, corporate entities who get to compete for public attention. We should be wary of implementing any policy that might choke off this commercial diversity.
Of course, the flip side to making it possible for many more actors to supply many more ads is that the supply of online ads is nearly infinite, and thus the volume is simply too great for platforms to be able to scrutinize all of them (or even most of them). Furthermore, even where a platform might be able to examine an ad, it is still unlikely to have the expertise to review it for every legal issue that might arise in every jurisdiction where the ad may appear. Section 230 exists in large part to alleviate these impossible content policing burdens and thus make it possible for platforms to facilitate the appearance of any content at all.
Nevertheless, Section 230 also exists to make it possible for platforms to try to police content anyway, to the extent that they can, by making it clear that they can’t be held liable for any of those moderation efforts. And that’s important if we want to encourage them to help eliminate ads of poor quality. We want platforms to be able to do the best they can to get rid of dubious ads, and that means we need to make it legally safe for them to try.
The more we think they should take these steps, the more we need policy to ensure that it’s possible for platforms to respond to this market expectation. And that means we need to hold onto Section 230 because it is what affords them this practical ability.
What’s more, Section 230 affords platforms all this critical protection regardless of whether they profit from carrying content or not. The statute does not condition its protection on whether a platform facilitates content in exchange for money, nor is there any sort of constitutional obligation for a platform to provide its services on a charitable basis in order to benefit from the editorial discretion the First Amendment grants it. Sure, some platforms do pointedly host user content for free, but every platform needs to have some way of keeping the lights on and servers running. And if the most effective way to keep their services free for some users to post their content is to charge others for theirs, it is an absolutely constitutionally permissible decision for a platform to make.
In fact, it may even be good policy to encourage, as it keeps services available for users who can’t afford to pay for access. Charging some users to facilitate their content doesn’t inherently make the platform complicit in the ad content’s creation, or otherwise responsible for imbuing it with whatever quality is objectionable. Even if an advertiser has paid for algorithmic display priority, Section 230 should still apply, just as it applies to any other algorithmically driven display decision the platform employs.
But on the off-chance that the platform did take an active role in creating that objectionable content, Section 230 has never stood in the way of holding the platform responsible. Section 230 simply says that making it possible to post unlawful content is not the same as creating that content; for the platform to be liable as an “information content provider,” aka a content creator, it must have done something significantly more to give the content its wrongful character than simply be a vehicle for someone else to express it.
This remains true even if the platform allows the advertiser to choose its audience. After all, by that point the content has already been created. Audience targeting is something else entirely, but it’s also something we should be wary of impinging upon.
There may, of course, be situations where advertisers try to target certain types of ads (ex: jobs, housing offers) in harmful ways. And when they do, it may be appropriate to sanction the advertiser for what may amount to illegally discriminatory behavior. But not every such targeting choice is wrongful; sometimes choosing narrow audiences based on protected status may even be beneficial. But if we change the law to allow platforms to be held equally liable with the advertiser for its wrongful targeting choices, we will take away the ability for platforms to offer audience targeting for any reason, even good ones, by making it legally unsafe in case the advertiser does it for bad ones.
Furthermore, doing so would upend all advertising as we’ve known it, and in a way that’s offensive to the First Amendment. There’s a reason that certain things are advertised during prime time, or during sports broadcasts, or on late-night TV, just as there’s a reason that the ads appearing in the New York Times are not necessarily the same ones running in Field & Stream or Ebony magazines. The Internet didn’t suddenly make those choices possible; advertisers have always wanted the most bang for their buck, to reach the people most likely to be their ultimate customers as cost-effectively as possible. And as a result they have always made choices about where to place their ads based on the demographics those placements likely reach. To now say that it should be illegal to allow advertisers to ever make such choices, simply because they may sometimes make these decisions wrongfully, would disrupt decades upon decades of past practice and likely run afoul of the First Amendment, which generally protects the choice of whom to speak to. In fact, it protects that choice regardless of the medium in question, and there is no principled reason why an online platform should be any less protected than a broadcaster or a printed periodical (especially not the former).
Even if it would be better if advertisers weren’t so selective (and it’s a fair argument to make, and a fair policy to pursue), it’s not an outcome we should use the weight of legal liability to try to force. It won’t work, and it impinges on important constitutional freedoms we’ve come to count on. Rather, if there is any affirmative policy response to ad tech that is warranted, it is likely with the third constituent part: audience tracking. But even so, any policy response will still need to be a careful one.
There is nothing new about marketers wanting to fully understand their audiences; they have always tried to track them as well as the technology of the day would allow. What’s new is how much better they now can. And the reality is that some of the tracking ability is intrusive and creepy, especially to the degree it happens without the audience being aware of how much of their behavior is being silently learned by strangers. There is room for policy to at minimum encourage, and potentially even require, such systems to be more transparent in how they learn about their audiences, tell others what they’ve learned, and give those audiences a chance to say no to much of it.
But in considering the right regulatory response there are some important caveats. First, take Section 230 off the table. It has nothing to do with this regulatory problem, apart from enabling platforms that may use ad tech to exist at all. You don’t fix ad tech by killing the entire Internet; any regulatory solution is only a solution when it targets the actual problem.
Which leads to the next caution: the regulatory schemes we’ve seen attempted so far (GDPR, CCPA, Prop. 24) are, even if well-intentioned, clunky and conflicting, with plenty of overhead that compromises their effectiveness and imposes its own unintended and chilling costs, including on expression itself (and on more expression than just that of advertisers).
Still, when people complain about online ads, this is frequently the area they are complaining about, and it is worth focused attention to solve. But it is tricky: given how easy it is for all online activity to leave digital footprints, as well as the many reasons we might want to allow those footprints to be measured and then those measurements to be used (even potentially for advertising), care is required to make sure we don’t foreclose the good uses while aiming to suppress the bad. But with the right law, one that recognizes and reasonably reacts to the complexity of this policy challenge, there is an opportunity for a constructive regulatory response to this piece of the online ad tech puzzle. There is no quick fix (and ripping apart the Internet by doing anything to Section 230 is certainly not any kind of fix at all), but if something must be done about online advertising, this is the something that’s worth the thoughtful policy attention to try to get right.