An Appeals Court Broke Media Advertising, So The Copia Institute Asked The California Supreme Court To Fix It

from the huge-implications dept

A few months ago a California court of appeals issued a really terrible decision in Liapes v. Facebook. Liapes, a Facebook user, was unhappy that the ads delivered to her correlated with some of her characteristics, like her age. As a result, certain ads, like one from an insurer offering a particular policy to men of a different age, didn’t get delivered to her.

Of course, it didn’t get delivered to her because the advertiser likely had little interest in spending money to place an ad to reach a customer who would not and could not turn into a sale, since she would not have been eligible for the promotion. And historically advertisers in all forms of media – newspapers, television, radio, etc. – have preferred to spend their marketing budgets on media likely to reach the same sorts of people as would purchase their products and services. Which is why, as we explained to the California Supreme Court, one tends to see different ads in Seventeen Magazine than in, say, AARP’s.

Because we also tend to see different expression in each one, as the publishing company chooses what content to deliver to which people. There’s no law that says media companies have to deliver content that would appeal to all people in all media channels, nor could there be constitutionally, because those choices of what expression to deliver to whom are protected by the First Amendment.

Or at least they were up until the court of appeals got its hands on the lawsuit Liapes brought against Facebook, arguing that letting advertisers choose which users would get which ads based on characteristics like age violated the state’s Unruh Act. The Unruh Act basically prevents a company from unlawfully discriminating against people based on protected characteristics – if it offers a product or service to one customer it can’t refuse to offer it to another because of things like their age.

But Facebook isn’t a business that sells tangible products or non-expressive services; it is a media business, just like TV stations are, newspapers are, magazine publishers are, etc. Like these other businesses, it is in the business of delivering expression to audiences. True, it is primarily in the business of delivering other users’ expression rather than its own, and it is more likely to have the ability to deliver editorially-tailored expression on an individual level, but then again, increasingly so can traditional media. In any case, there is nothing about the First Amendment that keys it only to the characteristics of traditional media businesses producing media for the masses. After all, they themselves often choose which demographic to target with their own media. Conde Nast, for instance, publishes both GQ and Vogue, as well as TeenVogue, and it is surely using demographics of the targeted audience to decide what expression to provide them in each publication.

But the upshot of the appeals court decision, finding Unruh Act liability when a media business uses demographic information to target an audience with certain content (including advertising content), is one of two bad outcomes. Either no media business will be able to make any sort of editorial decision based on the demographic characteristics of its intended audience, and there goes the American advertising model that has sustained American media businesses for generations. Or, even if those businesses are somehow left beyond the Unruh Act’s reach, the decision will introduce an artificial exception to the First Amendment carving out a business like Facebook because… well, just because. There really is no sound rationale for treating a company like Meta differently than any other media business, but even if it could be uniquely targeted by the Unruh Act, unlike its more traditional media brethren, the decision would still gravely impact every Internet business, especially those that monetize the expression they provide with ads.

Which would be particularly troubling because not only are businesses like Facebook supposed to be protected by the First Amendment, they are supposed to be EVEN MORE PROTECTED by Section 230, which insulates them from liability arising from the expression others provide, as well as from the moderation decisions platforms like Facebook make in choosing what expression to serve audiences. The court of appeals decision impinges upon both of these protections, in contravention of Section 230’s pre-emption provision, which prevents states from disrupting this basic statutory scheme with their own laws, of which the Unruh Act is one. After all, if there was anything actually wrong with the ad, it was the advertiser who produced it who imbued it with its wrongful quality, not Facebook. And the decision to serve it or not is an editorially-protected moderation decision, which Facebook also should have been entitled to make without liability, per Section 230.

In sum, this California appeals court decision stands to make an enormous mess of at least online businesses, if not every media business, and not even just those that take advertising, because weakening Section 230 and the First Amendment will carry its own dire consequences. And so the Copia Institute filed this amicus letter supporting Facebook’s petition for further review by the California Supreme Court in order to clean up this looming mess.

Companies: facebook


Comments on “An Appeals Court Broke Media Advertising, So The Copia Institute Asked The California Supreme Court To Fix It”

38 Comments
Anonymous Coward says:

Re:

In what particulars? In both cases, the advertiser (!) is making choices about who to show the advertisement to.

Facebook only provides more precise information about the prospective advertisee. The advertiser is still making the decision, not the media company.

In that regard, an advertiser can have their ad shown to only a portion of the media’s market.

Take magazines, one of your examples. Do you imagine that they have only one edition everywhere they distribute to? Or do they have a German edition, a UK edition, an Indian edition, a US edition? Do you imagine they carry the same advertisements in each of them?
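[The split the commenter describes, where the platform merely exposes targeting criteria and the advertiser selects them, can be sketched in a few lines. This is a hypothetical illustration; the names `User`, `Ad`, and `eligible` are invented for the example and are not Facebook’s actual ads API.]

```python
from dataclasses import dataclass, field

@dataclass
class User:
    age: int
    region: str

@dataclass
class Ad:
    name: str
    # Targeting criteria chosen by the ADVERTISER, not the platform.
    min_age: int
    max_age: int
    regions: set = field(default_factory=set)

def eligible(ad: Ad, user: User) -> bool:
    """Platform-side check: it only applies the filters the advertiser set."""
    return ad.min_age <= user.age <= ad.max_age and user.region in ad.regions

# The advertiser decides who the audience is; the platform applies the filter.
ad = Ad("term-life promo", min_age=30, max_age=45, regions={"US"})
print(eligible(ad, User(age=38, region="US")))  # True
print(eligible(ad, User(age=60, region="US")))  # False: outside the age range
```

On this framing, the only thing the platform contributes is a more precise `eligible` check than a magazine’s regional editions could manage.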

Anonymous Coward says:

Re: Re:

While also completely bypassing redlining laws, and every other law like it.

It doesn’t matter how much Facebook relies on it for advertising dollars, it is still blatantly violating decades of civil rights law, like the one it was originally sued over.

The Supreme Court would be in the wrong to overturn this; billboard companies have more regulation on what they can put up and where than Facebook does, and that’s fucked up.

Anonymous Coward says:

Re: Re: Re:

Do you really want even more adverts with your Internet content? If advertisers cannot be selective they will pay less, and companies relying on adverts will show you more. Also, not showing someone an advert does not prevent them from finding the advertised product by other means.

Anonymous Coward says:

I’m not particularly convinced.

Prior examples of “age discrimination in media content” (whether in advertising, in editorial selection of shows/articles/etc, or in various other manners) do not actually prevent those of the “wrong” age from accessing the content in question. While certain ages may be more or less likely to come across that content, nothing prevented someone of the wrong age from seeking it out and consuming it should they wish to.

There are no additional barriers to a 90-year-old’s consumption of content appearing in Seventeen magazine than there are to a 17-year-old’s. Both of them need simply purchase a copy. That one might be more interested in doing so than the other is immaterial to the question of age discrimination in expressive content.

(A rather infamous example of this is the viewing demographics of “My Little Pony”, which was intended for young girls and is now commonly consumed by adult men.)

I see no reason why the Unruh Act would not generally apply to expressive content. It does not require that you provide content that is of equal interest to all age groups (after all, there was once a store called Babies R Us), nor that you ignore the ages of your audience when choosing content, merely that your content remains equally available to everyone who seeks it out regardless of age. This doesn’t really affect general media advertising in the slightest, much less “break” it.

On the other hand, I could certainly accept an argument that this is the wrong defendant, and it is the advertiser who violated the law, not the one carrying the advertising.

it didn’t get delivered to her because the advertiser likely had little interest in spending money to place an ad to reach a customer who would not and could not turn into a sale

That an advertiser may have little interest in their content being viewable by the wrong ages is irrelevant. An unwillingness by someone to provide service to the wrong group is exactly why anti-discrimination laws exist, and cannot be used as a valid argument against them.

Aidan says:

I honestly think there’s a decent argument here. I don’t know enough law to speak to the legal nuances, but from a general public policy perspective, I think Facebook arguably should lose.

First, advertising is commercial speech. It gets much less protection (intermediate scrutiny, not strict). It’s not completely unprotected, but a regulation designed to prevent businesses from discriminating against protected classes, by regulating discrimination in who ads are shown to, probably passes.

Now, consider a local magazine, with a predominantly white readership. Maybe it discusses NASCAR, or bland foods. If that magazine wanted to sell advertising space, that would be legal. If a company bought advertising space, that would probably be legal, but if there was evidence the intent was racial discrimination, it might not be. Someone selling t shirts would probably be fine, but a bank showing different rates in magazines with predominantly white and predominantly black readers might land in hot water.

Now imagine that magazine tracks the race of their readers. Maybe on social media, or maybe they’re actually a newspaper and their delivery drivers peep through windows. And they offer a service where your ad can be delivered to only white readers, and discount this compared to ads for all readers, since they can sell the space in magazines going to readers of other races to someone else. That’s basically what Facebook is doing here. There also isn’t an option for readers to manually ask for the version given to someone of a different race. If a black couple want to know the mortgage rates advertised to their white neighbors, better set up a white dummy sleeping near the window where the driver peeps in (or, indeed, somehow spoof Facebook’s targeting process).

Most companies using that option would be illegally discriminating. Before, the proximate factor in discrimination was their interests and magazine subscribership. They weren’t facially discriminating on race, they were discriminating based on NASCAR fanhood, which happens to correlate with it. Any argument it was racial discrimination would rely on disparate impact.
Now, they, with the magazine’s help, are facially discriminating based on race. That might fly if it’s a casting call to play a historical figure or something, but not for most products. They would probably face liability.

The magazine might face liability too. They provided an option where basically the only conceivable use is to discriminate based on race. Under the roommates.com case, my recollection is this might defeat 230 protection. (I know roommates ultimately decided the discrimination was not illegal, but that was on the grounds that freedom of association protected being racist when seeking roommates; it doesn’t apply to most advertisements.) They might face secondary liability for providing a convenient “discriminate on race” button, even if the advertiser is ultimately at fault for actually using it. Heck, Facebook here did some targeting on protected characteristics regardless of the advertiser’s preferences, correct?

I would argue they should also face primary liability for providing users of different demographics effectively slightly different products. If I sold a magazine, and gave Black subscribers a version with half the articles removed and a few replaced, that would probably be illegal discrimination. If advertising is content like any other, a version with different ads isn’t actually any different from that. Do first amendment protections protect a magazine published by the KKK, targeted for the general public, refusing to allow nonwhite or catholic subscribers? I don’t think so, but also, even if they do, the speech here is advertising and gets reduced protection. And the ultimate goal here, preventing illegal business discrimination, is an important state interest, and preventing discrimination in ads shown is substantially related to achieving it.

Also, from a policy perspective, reining in targeted advertising is a good thing. It’s what gives companies a financial incentive to do all the invasive tracking you hear about, which in turn creates other problems, like creating big pools of data that can be hacked or sold to governments to undermine the 4th amendment.

Also, targeting based on algorithms determining demographics is worse than on interests or self reported demographics, public policy wise, since people can’t manually change it. A reporter wondering what ads a politician is showing to different groups, or someone looking for discrimination in advertised mortgage rates, thus has more trouble doing so.

I will note that some arguments in Facebook’s favor here are that a) the advertiser is a life insurance company, which legally can discriminate on age, b) age might have been self-reported anyway, and c) that they shouldn’t face secondary liability, whether due to 230 protection or because they didn’t do enough to really aid and abet it.

Arianity says:

Not really a huge fan of using the 1st Amendment to justify discrimination. That’s not any better than 303 Creative LLC v. Elenis.

But Facebook isn’t a business that sells tangible products or non-expressive services;

Unruh (and similar laws like the Civil Rights Act) don’t make distinctions for tangible products or expressive products.

If it’s that problematic, the solution seems obvious: fix Unruh and similar laws to allow ‘justifiable’ forms of discrimination. This sort of precedent already exists with things like insurance (and this precedent also usually has bumpers to ensure good faith, such as actuarial data).

There really is no sound rationale for treating a company like Meta differently than any other media business,

There is a rationale, though. Facebook’s targeting is uniquely different from that of other media businesses. To use your own example, there’s nothing stopping a member of AARP from buying a Seventeen magazine. The owners of the magazine might tailor to a particular audience, but people outside of that audience aren’t restricted from accessing it in any way. There is something (Facebook) stopping them from accessing it on Facebook.

While the law would apply to both, Facebook is doing something materially different from a normal media business, here.

Which would be particularly troubling because not only are businesses like Facebook supposed to be protected by the First Amendment but they are supposed to be EVEN MORE PROTECTED by Section 230, which insulates them from liability arising from the expression others provide, as well as the moderation decisions the platforms like Facebook make to choose what expression to serve audiences

230 doesn’t protect when the media company itself gets involved. As you said yourself, this is an editorial decision:

is that either no media business will be able to make any sort of editorial decision based on the demographic

It’s not at all clear that 230 would protect a platform from using its own systems to discriminate.

Anonymous Coward says:

Re:

“While the law would apply to both, Facebook is doing something materially different from a normal media business, here.”

If by ‘normal media business’ you mean a newspaper, then no they’re not. Newspapers also moderate comments on their websites, and are thus entitled to the same §230 protections as Facebook and other social media websites.

Arianity says:

Re: Re:

Does that mean a company should show everybody all the adverts that advertisers want shown against their content?

Stuff like Unruh only applies to protected classes. You can discriminate for other reasons. That sort of question is exactly why protected classes exist.

That said, personally speaking I’m not exactly sad if advertisers have to go back to completely untargeted ads, like traditional media did/does.

Also, not showing an advert is not the same as refusing to do business with someone.

Unruh doesn’t specify just doing business with someone, but prevents any form of discrimination.

Benjamin Barber says:

Mike Masnick Malding Again

Mike Masnick is literally shilling the exact opposite claims to those of Karl Bode, i.e. that racially targeting people with ads is protected by the First Amendment, despite there being literally no good law to back any of that up. I really wonder what Karl Bode thinks of this, because he seems to shill really hard about disparity of demographic outcomes, but Masnick seems to think that Facebook is immune to suit if it lets companies target only white people for jobs and housing ads.

In the recent Supreme Court case, 303 Creative, I distinctly remember from oral arguments that the litigants admitted they were not claiming they could deny service in making websites for gay people, but that they, in their individual capacity, could not be forced by the government to make an expression that violated their religious beliefs.

Similarly, there is the case American Alliance for Equal Rights v. Fearless Fund Management, LLC, where the defendants claimed they could create a black-women-only business competition, because they have a First Amendment right to associate with only black women, and a First Amendment right to say they will give money to only black women. The appeals court issued an injunction, stating that their conduct was not protected by the First Amendment, and that expression as a means to commit illegal conduct is not protected.

Then you have the 9th Circuit case Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, in which it was claimed that allowing people to set up demographic filters violated antidiscrimination laws. Facebook has lost in the 9th Circuit over this very same issue in Vargas v. Facebook, #21-16499.

BJC (profile) says:

I don't think the sky is falling.

I don’t see this case as wrongly decided, or necessarily disastrous policy even if so.

The first reason is that it’s a demurrer. There’s no “truth” to this case, just the plaintiff’s pleading.

In the appellate decision, the plaintiff’s pleading is interpreted to be:

Facebook’s search function, in conjunction with its advertising, is used by its customers as a portal to purchase financial products, and Meta is aware of and encourages its use this way.

I think we all agree that, in a world where we actually get to argue real facts, that’s crazypants, but you don’t get to argue real facts at demurrer.

So I don’t think the ruling is wrong from the perspective of whether, if Facebook actually was capable of being asked “give me all the insurance products I’m eligible for,” it made it easy for insurance companies to “hide” their products based on protected classes.

(Some people may want to start arguing in a tort reform-ish manner about how these facts are so ridiculous a court should have thrown them out, but I would point out that at no time does Ms. Gellis in her post or her letter argue generally about the standard for demurrer under California civil procedure, so let’s stay on topic, OK? Assume you can’t fix jackpot justice generally, you have to convince the judges on the underlying legal issue.)

Second, I just don’t buy the slippery slope argument.

Courts have been pretty good at drawing some line where the Civil Rights Act of 1964 and similar laws can’t go due to First Amendment and other constitutional concerns.

Admittedly, if you’re a libertarian, you may not like where that line falls. But they draw some line.

So I don’t see this as banning all directed advertising. At worst, it imposes a “you can’t be too efficient” rule about directed advertising; you have to accept some people outside your target will see your ad.

Part of this also comes back to my first point, I guess: if we’re talking about what Meta actually did rather than what the plaintiff alleged, I don’t think there’s going to be quite as broad a ruling.
