Can we find content-neutral design choices?
Thanks, Mike, for spelling out the implications. I'm a regular reader of Eric Goldman's blog, but I still had not fully grasped some of his points until now.
I agree that the "design choices" argument can be abused. I have no insight on how likely that is to happen.
I came here to comment on a couple of arguments in Mike's post which I found less convincing. Azuaron and Arianity have already pointed them out.
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. [...]
The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
This is an interesting argument but I think you have overplayed it a bit.
First of all, the hypothetical website literally devoted to paint drying videos would be an example of how this new legal standard actually works quite well. The website's editorial decision is to only post paint drying videos; nobody is going to go after that. Its design choices, such as a recommendation algorithm that promotes only the most addictive paint drying videos, could be questioned. Would that affect its editorial decisions? There's no reason to believe it would, chiefly because, as you say, probably nobody is going to sue.
But of course that was only an extreme example. The argument is that design choices and content choices are inextricably linked and that every design choice is also a moderation decision, just like the addictive design (a) requires the content to be posted and (b) affects what kind of content people are shown.
On (a), I think it's irrelevant. Sure, users play their part in the addictiveness by doing exactly what they are expected to do, but this does not move the responsibility. This sort of argument works for defamation or copyright infringement because you can shift the responsibility to the person who actually published the problematic content. But who else should be sued here instead? Should plaintiffs instead file a lawsuit against the millions of authors of addictive posts? Of course not; that doesn't make sense.
On (b), it is indeed easy to find examples where a "design choice" is effectively a decision to surface specific content over other content. Facebook in the past 10 years has repeatedly and very publicly "tweaked" its home feed to increase or decrease the amount of "news", "political content", "personal updates" or other such categories, simply by using parameters that on the face of it could appear content-neutral (such as whether a post comes from a "friend" or someone further apart in the social graph, or how many interactions it got).
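To make the point on (b) concrete, here is a toy Python sketch of a feed-scoring formula. Everything here is made up for illustration (the field names, the categories, the weights are not anyone's real algorithm), but it shows how a formula that reads only "content-neutral" signals still decides which category of content tops the feed:

```python
from dataclasses import dataclass

@dataclass
class Post:
    category: str         # e.g. "news" or "personal update"; never read by rank()
    author_distance: int  # hops in the social graph; 1 = direct friend
    interactions: int     # likes/comments so far

def rank(posts: list[Post], distance_penalty: float) -> list[Post]:
    # The score reads only "neutral" signals: graph distance and
    # engagement counts. It never inspects the post's text or category.
    def score(p: Post) -> float:
        return p.interactions - distance_penalty * p.author_distance
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("news", author_distance=4, interactions=100),           # viral post from a stranger
    Post("personal update", author_distance=1, interactions=5),  # a friend's photo
]

# Tuning one "content-neutral" parameter flips which category tops the feed:
top_when_engagement_rules = rank(feed, distance_penalty=0)[0].category     # "news"
top_when_friends_are_favoured = rank(feed, distance_penalty=50)[0].category  # "personal update"
```

Neither knob mentions "news" or "politics", yet turning the distance penalty up or down is, in effect, a decision about how much of each the users see.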
Does this mean that every design choice is going to be like that? Or is this "a slippery slope argument with no evidence that such a slope exists" like Azuaron says? I think it's possible to find design choices which are clearly content neutral. Innocuous design choices, such as the color palette of the website, are likely not to affect the content. Some very impactful design choices, such as some sort of parental control where some features of Instagram become unavailable after e.g. 1h of continuous use in the day (and maybe you can only look at DMs), can probably still be content neutral. (That doesn't mean that they're necessarily good.)
And vice versa, is there a risk that any content-specific objection to moderation or publishing decisions can be smuggled in under the guise of a design-choice argument? That, I think, is the main worry here.
Finally, even if we do find a way to identify "bad" design choices which are subject to litigation and "ok" design choices which should not be, the question remains how we make sure that lawsuits actually end up following such a standard.
Eric Goldman and Mike argue that they won't and that it's impossible: there will be a deluge of lawsuits and bad rulings.
Others say it will be fine: only the most egregious violators like Facebook/Snapchat/TikTok will actually get dragged to court, because look how badly they had to screw up before some judge finally allowed this sort of case to go forward.
I say, it's going to be somewhere in between and it will be a mess, so you will need regulation (probably legislation) to clear up the mess. Congress will probably end up doing something if this blows up for big and small actors alike. (Admittedly, Congressional inaction or even misguided action is more likely if such lawsuits only end up hurting the small guys, just as we see in copyright law.)
Or in other words, you might end up with something not too different from the EU's Digital Services Act, which is a mess and definitely has some component of content-specific censorship risk, but also has some good elements. A legislator can more easily draw the line and say that only companies above a certain amount of revenue are going to be subject to some regulation, as the EU did in the DSA.
California is allegedly considering adopting some ideas from the DSA and DMA (Digital Markets Act), if you believe in reading tea leaves from the meeting between Gavin Newsom and Teresa Ribera. Some experimentation at the state level may produce a more orderly result than playing the lottery in the courtroom in the hope that SCOTUS mops up the eventual mess.
On the need of immunity
By this logic, any situation where companies could face discovery would need immunity. These sorts of “smoking guns” happen all the time in trials, and not just tech or social media ones.
They happen all the time, but the extent of the risk is limited because not many people have standing to bring a lawsuit and it's relatively easy to draw a line on who does. With social media, practically anyone online can claim standing to sue. Often they are right, but a line needs to be drawn somewhere to avoid frivolous litigation, and that is going to be hard.
Re: procedural safeguard
The importance of the "procedural safeguard" is that it allows defendants to get a case dismissed by the judge well before it goes to a jury trial. A jury trial is ruinous for almost everyone, whether they end up losing or not.
Rare case of Techdirt being less cynical on the EU
Good point about Breton vs. Carr. I thought Breton was fired just because von der Leyen could not stand him on a personal level (and Macron did not bother to defend him), but I like your version. It is too early to tell whether Ribera and Virkkunen will do a better job, but they are certainly less flashy, to use a euphemism.
Real pain
The funniest thing for me is how the same people who spent years cutting public services indiscriminately now have to spend weeks filling the airwaves with defenses of said public services, which avoid
"real pain for real people, veterans, the elderly people who rely upon these services,"
said Mike Johnson (https://thehill.com/homenews/house/5552958-mike-johnson-government-shutdown/).
The myth of market harm
Indeed. After all, even the publisher who won a lawsuit against the Internet Archive failed to show any market harm. You'd think they had an easy job, considering their lawsuit was about some 100 input books and very close copies/outputs thereof, as opposed to millions of books and billions of extremely dispersed (non)copies/outputs, but they didn't even try, presumably because they knew they'd fail.
Perhaps this judge is hoping that the appeals court will send the case back, instructing the judge to make up any necessary evidence to reach the preordained conclusion, as was done in the Internet Archive case.
Wikimedia trademarks
the Wikimedia Foundation made a very sleazy attempt this year to get Finland’s “sweat” or “hiki” encyclopedia shut down on trumped-up trademark grounds
Jurisdiction
Where are they going to file suit? What court would be authoritative for something like this?
I don't see the difficulty. They can start from a court in Mexico and continue from there, as Brazil did with X/Twitter. No doubt Google has already factored such litigation risk into its cost of doing business in many jurisdictions.
Re: Fines for intermediaries
Cotton later said that any company that helps distribute TikTok could face “hundreds of billions of dollars” in fines from entities beyond the federal government. “Think about it,” he warned.
https://www.theverge.com/2025/1/19/24347280/tiktok-ban-shutdown-ends
Now I must admit I would be happy to see Google, Apple, Amazon, Microsoft, Oracle etc. fined a total of several trillion dollars, but I somehow suspect it wouldn't happen.
Tech industrial complex
Meanwhile:
Biden takes aim at 'tech industrial complex,' echoing Eisenhower
https://www.reuters.com/world/us/biden-raises-alarm-about-dangerous-concentration-power-among-few-wealthy-people-2025-01-16/
As if he didn't personally gift a monopoly on government "cybersecurity" products to Microsoft, and cheerfully pass a law whose sole purpose is to confiscate a top 10 internet property and gift it to some other big tech player (be it Oracle or someone else). All in the name of national security, of course, just as with the military-industrial complex.
Activism
Court deploys tricks to prevent the people from reviewing a court-created "law" giving courts the power to ignore laws. Sounds like judicial activism to me.
Meanwhile though, the ballot measure appears to have been allowed to proceed.
https://ohiocapitaljournal.com/2024/12/05/ohio-ballot-board-approves-signature-gathering-for-proposed-amendment-to-end-qualified-immunity/
It's time
Three years later [...] It’s time to pull the plug.
I think this translates to "People are going to see benefits soon, so we need to kill the program now or it will be too popular to kill later". Much like Obamacare.
Servers on BlueSky
Arianity is actually correct: the "server" in question is the relay. On the fediverse, most ActivityPub servers handle a variety of functions which on Bluesky have been decomposed into multiple services. One of the functions a Mastodon instance serves is equivalent to the Bluesky relay: a searchable cache of posts. In practice there is currently only one central relay, and some other functions are still handled by a single, centralised point of failure. Bryan Newbold of Bluesky has an excellent overview:
https://bnewbold.net/2024/atproto_progress/
Running your own relay is possible but prohibitively expensive for most users (over 5 TiB of storage currently required, growing fast):
https://whtwnd.com/bnewbold.net/entries/Notes%20on%20Running%20a%20Full-Network%20atproto%20Relay%20(July%202024)
https://alice.bsky.sh/post/3laega7icmi2q
This information via Bryan Newbold at https://social.coop/@bnewbold/113448935229507365
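To illustrate the decomposition, here is a toy Python model (the class names and methods are mine, purely illustrative, and not the actual atproto API): each user's posts live on a personal data server, the relay aggregates every PDS into one network-wide cache, and app views build search and feeds on top of it.

```python
class PDS:
    """Personal Data Server: hosts its own users' repositories of posts."""
    def __init__(self) -> None:
        self.repos: dict[str, list[str]] = {}

    def post(self, user: str, text: str) -> None:
        self.repos.setdefault(user, []).append(text)

class Relay:
    """Aggregates all PDSes into one searchable cache of every post:
    the network-wide role whose storage cost (terabytes, and growing)
    makes self-hosting a relay expensive."""
    def __init__(self, pdses: list[PDS]) -> None:
        self.pdses = pdses

    def firehose(self):
        # Stream every (user, post) pair across the whole network.
        for pds in self.pdses:
            for user, posts in pds.repos.items():
                for text in posts:
                    yield user, text

class AppView:
    """Builds user-facing views (search, feeds) from the relay's cache.
    A Mastodon instance bundles all three roles into a single server."""
    def __init__(self, relay: Relay) -> None:
        self.relay = relay

    def search(self, term: str) -> list[tuple[str, str]]:
        return [(u, t) for u, t in self.relay.firehose() if term in t]

# Two independent PDSes, one relay, one app view over the whole network.
a, b = PDS(), PDS()
a.post("alice", "hello atproto")
b.post("bob", "hello fediverse")
view = AppView(Relay([a, b]))
```

The point of the split is that each role can be run (and replaced) independently; the catch, as noted above, is that the relay role is the one almost nobody can afford to run.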
Privacy risks and open data
It's true that releasing detailed click data (or even mere URL statistics) presents significant privacy risks. Google is well positioned to do it with proper differential-privacy guarantees, just as the Wikimedia Foundation did with far more limited resources (https://meta.wikimedia.org/wiki/Research:Wikipedia_clickstream).
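For a sense of what such a release involves, here is a minimal sketch of a Laplace-mechanism count release. The parameters are all illustrative (the epsilon, the suppression threshold, and the one-view-per-user sensitivity assumption are mine, not Google's or Wikimedia's actual methodology):

```python
import math
import random

def release_counts(counts: dict[str, int], epsilon: float,
                   threshold: int = 10, seed: int = 0) -> dict[str, int]:
    """Release per-URL view counts with Laplace noise of scale 1/epsilon.

    Assumes each user contributes at most one view per URL (sensitivity 1).
    Noisy counts under `threshold` are suppressed, so rarely visited pages,
    the most identifying ones, never appear in the release at all.
    """
    rng = random.Random(seed)
    released = {}
    for url, true_count in counts.items():
        # Sample Laplace(0, 1/epsilon) by inverse CDF from a uniform draw.
        u = rng.random() - 0.5
        noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy = round(true_count + noise)
        if noisy >= threshold:
            released[url] = noisy
    return released
```

Popular URLs come out with counts that are accurate to within a few views, while the long tail of rare URLs (where individual readers could be re-identified) simply drops out of the published data.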
Google ad guidelines
I wonder if Google will now stop prohibiting such ads. As of 2015 or so, it maintained a secret list of trademarks which would get your ads disapproved.
Teresa Ribera, Henna Virkkunen
The proposed roles are in the "mission letters": https://commission.europa.eu/about-european-commission/president-elect-ursula-von-der-leyen/commissioners-designate-2024-2029_en
The tech portfolio is mostly under Teresa Ribera and Henna Virkkunen, who will get DG COMP (competition) and DG CNECT (copyright etc.) respectively. Ribera's letter mentions DMA enforcement, while Virkkunen's mentions DSA and DMA enforcement and dialogues.
Disinformation
This is good news. I hope we can finally get past the pretense that Trump won because of "disinformation" and that we can restore the good old days by removing bad speech from social media. Just get better at your propaganda and call it what it is; there's no need to hide.
Nonbinding
"Nonbinding dicta" just means "we're free to change our minds at any time and issue a new ruling in any direction we feel like". Not that anyone needed reminders, but presumably it's an invitation for the justices' donor base to keep throwing more juicy cases at them.
MongoDB
MongoDB? Is this a trick question to see who has read past Techdirt articles on BS copyright licenses? ;-)
Re: effervescently efficient "free markets"
Always a pleasure to read Karl. :)