Walled Culture is a big fan of the public domain. The amazing artistic uses that people are able to make of material only once it enters the public domain are an indication that copyright can act as an obstacle to wider creativity, rather than something that automatically promotes it. But there’s a problem: because the public domain is about making artistic productions available to everyone for no cost and without restrictions, there are no well-funded lobbyists who stand up and defend it. Instead, all we hear is whining from the copyright world that the public domain exists, and calls for it to be diminished or even abolished by extending copyright wherever possible.
Sometimes those attacks can come from surprising quarters. For example, in October last year Walled Culture wrote about Italy’s Uffizi Galleries suing the French fashion house Jean Paul Gaultier for the allegedly unauthorized use of images of Botticelli’s Renaissance masterpiece The Birth of Venus on its clothing products.
Sadly, this is not a one-off case. The Communia blog has another example of something that is unequivocally in the public domain and yet cannot be freely used for a particular purpose, in this case a commercial one. The public domain art is the famous Vitruvian Man, drawn by Leonardo da Vinci over 500 years ago.
The commercial use is as the image on a Ravensburger puzzle. As the Communia blog post explains:
According to the Italian Cultural Heritage Code and relevant case law, faithful digital reproductions of works of cultural heritage — including works in the Public Domain — can only be used for commercial purposes against authorization and payment of a fee. Importantly though, the decision to require authorization and claim payment is left to the discretion of each cultural institution (see articles 107 and 108). In practice, this means that cultural institutions have the option to allow users to reproduce and reuse faithful digital reproductions of Public Domain works for free, including for commercial uses. This flexibility is fundamental for institutions to support open access to cultural heritage.
This makes a mockery of the idea of the public domain, which to be meaningful has to apply in all cases, not just in ones where the relevant Italian cultural institution graciously decides to allow it. The fact that this law was passed is in part down to the success of the copyright industry in belittling the public domain as an aberration of no real value – something that can be jettisoned without any ill effects. However:
These cases are bound to leave wreckage in their wake: great uncertainty around the use of cultural heritage across the entire single market, hampered creativity, stifled European entrepreneurship, reduced economic opportunities, and a diminished, impoverished Public Domain. To address these issues, we hope the European Court of Justice will soon have the opportunity to clarify that the Public Domain must not be restricted, a fortiori by rules outside of copyright and related rights, which compromise the European legislator’s clear intent to uphold the Public Domain.
Let’s hope the Court of Justice of the European Union does the right thing, and defends the incredible riches of the public domain against every depredation – including those by Italian cultural institutions.
Ring offers security products. Shame they’re not all that secure. Sure, things have improved in recent years, but there was nowhere to go but up.
In December 2019, multiple reports surfaced of Ring cameras — most of them inside people’s houses — being hijacked by malicious idiots who used the commandeered cameras to yell nasty things at people’s children when not just lurking and watching the inner lives of unsuspecting Ring users. The worst of these people performed livestreams of camera hacking, taunting and frightening their targets for the amusement of truly terrible human beings.
The problem here was the default security options for the cameras. Ring did not require anything more than an email address and password to activate accounts, allowing these miscreants to sift through the massive piles of endlessly reused credentials to hijack the cameras. Shortly thereafter, Ring “encouraged” users to enable two-factor authentication. But it did not make this a requirement.
That same month, login credentials for nearly 4,000 Ring owners were exposed. Ring claimed it had suffered no breach, suggesting (rather unbelievably) that people were cross-referencing credentials from other data breaches to compile lists of verified Ring owners. Whatever the case, the company still wasn’t forcing customers to use strong passwords or enable 2FA, so credentials continued to be easily obtained and exploited.
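To make the mechanics concrete, here is a minimal sketch of why reused credentials plus optional 2FA amount to an open door. Everything in it is hypothetical (the function names, the accounts, the leaked list); it is not Ring’s actual login flow, just the general shape of credential stuffing: replay email/password pairs from unrelated breaches and any account without a second factor falls.

```python
# Toy illustration of credential stuffing against a service that does not
# require two-factor authentication. All names and data are hypothetical.

LEAKED_CREDENTIALS = [
    ("alice@example.com", "hunter2"),       # reused from an unrelated breach
    ("bob@example.com", "correct horse"),
]

# Hypothetical account database: stored password + whether 2FA is enabled.
ACCOUNTS = {
    "alice@example.com": {"password": "hunter2", "mfa_enabled": False},
    "bob@example.com": {"password": "different-password", "mfa_enabled": True},
}

def attempt_login(email: str, password: str) -> str:
    account = ACCOUNTS.get(email)
    if account is None or account["password"] != password:
        return "rejected"
    if account["mfa_enabled"]:
        return "blocked: second factor required"  # a stolen password alone is useless
    return "HIJACKED"  # email + reused password is all it takes

for email, password in LEAKED_CREDENTIALS:
    print(email, "->", attempt_login(email, password))
```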
The hijacked cameras led to a lawsuit in early 2020. A few days after the lawsuit was filed, Ring finally decided it was time to make some changes. It added a privacy dashboard for users to allow them to manage connected devices, block any they didn’t recognize, and control their interactions with law enforcement. And it finally made 2FA opt-out, rather than opt-in.
None of that’s helping much in the latest bad news for Ring. As Joseph Cox reports for Motherboard, hackers claim to have made off with some Ring data and left behind a ransom note.
A ransomware gang claims to have breached the massively popular security camera company Ring, owned by Amazon. The ransomware gang is threatening to release Ring’s data.
The party behind this appears to be ALPHV, a ransomware gang that — unlike others in this criminal business — created a searchable database of data obtained from these attacks and made it available on the open web.
That’s where this data may soon end up:
“There’s always an option to let us leak your data,” a message posted on the ransomware group’s website reads next to Ring’s logo.
Nice. But what data is it? And where did it come from?
Ring claims this isn’t its data, at least not specifically. In a comment to Motherboard, Ring claimed the breached/ransomed party is one of its third-party vendors and not Ring itself. But ALPHV must have something Ring-related and worth ransoming, otherwise it likely would not have called out Ring by name (and logo) on its website. Ring says this vendor does not have access to customer records, but it could have access to information and records Ring may not want to be made public.
Whatever the case, Ring claims to be on top of it. Not exactly comforting, given its history of taking a rather hands-off approach to user security.
Mainstream political news outlets like Axios have long been accused of “both sides” or “view from nowhere” journalism, in which they bend over backward to frame everything through a lens of illusory objectivity so as not to offend. This distortion is then routinely exploited by authoritarians and corporations keen on normalizing bigotry, rank corruption, or even the dismantling of democracy.
It’s been a rough decade of very ugly lessons on this front, yet there’s still zero indication of meaningful self-awareness from the editorial leadership of mainstream political news outlets like Axios, Semafor, or Politico.
Case in point: this week Axios fired respected local Tampa reporter Ben Montgomery after he responded to a press release blast from the office of Florida Governor Ron DeSantis by calling it (quite accurately) propaganda. It began when the Florida Department of Education shared Montgomery’s reply publicly, apparently in a bid to make him seem radical.
The problem: the press release genuinely is propaganda. It’s not actually announcing anything meaningful, and it is full of the usual anti-diversity screeds designed to make modest Diversity, Equity and Inclusion (DEI) efforts sound corrosively diabolical. It’s the same authoritarian gibberish we’ve seen parroted for years by anti-“woke” nanny state bullies intent on normalizing racism and dipshittery.
“There was no, like, event to cover. It might have been a roundtable at some point, but there was no event that I had been alerted to. … This press release was just a series of quotes about DEI programs, and the ‘scam’ they are, and nothing else,” Montgomery said. “I was frustrated by this. I read the whole thing and my day is very busy.”
Correctly labeling propaganda as propaganda is a cardinal sin for outlets like Axios, whose scoops, funding, and events generally rely on not upsetting those in power (especially on the troll-happy right). So Axios did what Axios often does: it buckled to authoritarian bullies. Montgomery very quickly got a call from Jamie Stockwell, Axios executive editor for local news, informing him he’d been fired:
“She started immediately by asking if I could confirm that I sent that email and I did immediately confirm it,” he continued. “She then sounded like she was reading from a script and she said … ‘Your reputation has been irreparably tarnished in the Tampa Bay area and, because of that, we have to terminate you.’”
On the call, Montgomery said he “objected with my full fucking throat on behalf of every hard-working journalist.” However, he said Stockwell “wasn’t answering any questions.” According to Montgomery, his laptop and access to company email were swiftly shut down.
It’s the latest in a long line of instances where DeSantis’ office has attempted to bully reporters and feckless media outlets into submission. And Axios just made it abundantly clear that when it comes to modern mainstream U.S. access journalism, it works. Not only does it work, but many mainstream shops still, a decade into Trumpism, don’t understand what’s actually happening. Or worse: know and don’t care.
That’s a problem for a country where violent, conspiracy- and propaganda-fueled authoritarianism is on the rise. U.S. journalism’s function is to convey something vaguely resembling the truth to its readership. But when your income depends heavily on strong relationships with right wing advertisers, sources, and event sponsors, the truth can be expensive.
If history is any guide, none of this ends well without some major, meaningful sea change. Or a massive funding infusion for independent media outlets with actual backbone. And there’s no real indication that either is coming anytime soon.
Question Presented: Does Section 230 Protect Generative AI Products Like ChatGPT?
As the buzz around Section 230 and its application to algorithms intensifies in anticipation of the Supreme Court’s response, ‘generative AI’ has soared in popularity among users and developers, raising the question: does Section 230 protect generative AI products like ChatGPT? Matt Perault, a prominent technology policy scholar, thinks not, as he discussed in his recently published Lawfare article: Section 230 Won’t Protect ChatGPT.
Perault’s main argument is as follows: because of the nature of generative AI, ChatGPT operates as a co-creator (or material contributor) of its outputs and therefore could be considered the ‘information content provider’ of problematic results, making it ineligible for Section 230 protection. The co-authors of Section 230, former Representative Chris Cox and Senator Ron Wyden, have also suggested that their law doesn’t grant immunity to generative AI.
I respectfully disagree with both the co-authors of Section 230 and Perault, and offer the counterargument: Section 230 does (and should) protect products like ChatGPT.
It is my opinion that generative AI does not demand exceptional treatment, especially since, as it currently stands, generative AI is not exceptional technology (an understandably provocative take to which we’ll soon return).
But first, a refresher on Section 230.
Section 230 Protects Algorithmic Curation and Augmentation of Third-Party Content
Recall that Section 230 says websites and users are not liable for the content they did not create, in whole or in part. To evaluate whether the immunity applies, the Barnes v. Yahoo! Court provided a widely accepted three-part test:
The defendant is an interactive computer service;
The plaintiff’s claim treats the defendant as a publisher or speaker; and
The plaintiff’s claim derives from content the defendant did not create.
The first prong is not typically contested. Indeed, the latter prongs are usually the flashpoint(s) of most Section 230 cases. And in the case of ChatGPT, the third prong seems especially controversial.
Section 230’s statutory language states that a website becomes an information content provider when it is “responsible, in whole or in part, for the creation or development” of the content at issue. In their recent Supreme Court case challenging Section 230’s boundaries, the Gonzalez Petitioners assert that the use of algorithms to manipulate and display third-party content precludes Section 230 protection because the algorithms, as developed by the defendant website, convert the defendant into an information content provider. But existing precedent suggests otherwise.
For example, the Court in Fair Housing Council of San Fernando Valley v. Roommate.com (aka ‘the Roommates case’)—a case often invoked to evade Section 230—held that it is not enough for a website to merely augment the content at issue to be considered a co-creator or developer. Rather, the website must have materially contributed to the content’s alleged unlawfulness. Or, as the majority put it, “[i]f you don’t encourage illegal content, or design your website to require users to input illegal content, you will be immune.”
The majority also expressly distinguished Roommates.com from “ordinary search engines,” noting that unlike Roommates.com, search engines like Google do not use unlawful criteria to limit the scope of searches conducted (or results delivered), nor are they designed to achieve illegal ends. In other words, the majority suggests that websites retain immunity when they provide neutral tools to facilitate user expression.
While “neutrality” brings about its own slew of legal ambiguities, the Roommates Court offers some clarity suggesting that websites with a more hands-off approach to content facilitation are safer than websites that guide, encourage, coerce, or demand users produce unlawful content.
For example, while the Court rejected Roommates.com’s Section 230 defense for its allegedly discriminatory drop-down options, the Court simultaneously upheld Section 230’s application to the “additional comments” option offered to Roommates.com users. The “additional comments” were separately protected because Roommates.com did not solicit, encourage, or demand that its users provide unlawful content via the web form. In other words, a blank web form that simply asks for user input is a neutral tool, eligible for Section 230 protection, regardless of how the user actually uses the tool.
The Barnes Court would later reiterate the neutral tools argument, noting that the provision of neutral tools to carry out what may be unlawful or illicit conduct does not amount to ‘development’ for the purposes of Section 230. Hence, while the ‘material contribution’ test is rather nebulous (especially for emerging technologies), it is relatively clear that a website must do something more than merely augment, curate, and display content (algorithmically or otherwise) to transform into the creator or developer of third-party content.
The Court in Kimzey v. Yelp offers further clarification:
“The material contribution test makes a ‘crucial distinction between, on the one hand, taking actions (traditional to publishers) that are necessary to the display of unwelcome and actionable content and, on the other hand, responsibility for what makes the displayed content illegal or actionable.’”
So, what does this mean for ChatGPT?
The Case For Extending Section 230 Protection to ChatGPT
In his line of questioning during the Gonzalez oral arguments, Justice Gorsuch called into question Section 230’s application to generative AI technologies. But before we can even address the question, we need to spend some time understanding the technology.
Products like ChatGPT use large language models (LLMs) to produce a reasonable continuation of human-sounding responses. In other words, as discussed here by Stephen Wolfram, renowned computer scientist, mathematician, and creator of WolframAlpha, ChatGPT’s core function is to “continue text in a reasonable way, based on what it’s seen from the training it’s had (which consists in looking at billions of pages of text from the web, etc).”
While ChatGPT is impressive, the science behind it is not necessarily remarkable. Computing technology breaks complex mathematical computations into step-by-step functions that the computer can then solve at tremendous speeds. As humans, we do this all the time, just much more slowly than a computer. For example, when we’re asked to do non-trivial calculations in our heads, we start by breaking the computation into smaller functions on which mental math is easily performed until we arrive at the answer.
Tasks that we assume are fundamentally impossible for computers to solve are said to involve ‘irreducible computations’ (i.e. computations that cannot be simply broken up into smaller mathematical functions, unaided by human input). Artificial intelligence relies on neural networks to learn and then ‘solve’ said computations. ChatGPT approaches human queries the same way. Except, as Wolfram notes, it turns out that said queries are not as sophisticated to compute as we may have thought:
“In the past there were plenty of tasks—including writing essays—that we’ve assumed were somehow “fundamentally too hard” for computers. And now that we see them done by the likes of ChatGPT we tend to suddenly think that computers must have become vastly more powerful—in particular surpassing things they were already basically able to do (like progressively computing the behavior of computational systems like cellular automata).
But this isn’t the right conclusion to draw. Computationally irreducible processes are still computationally irreducible, and are still fundamentally hard for computers—even if computers can readily compute their individual steps. And instead what we should conclude is that tasks—like writing essays—that we humans could do, but we didn’t think computers could do, are actually in some sense computationally easier than we thought.
In other words, the reason a neural net can be successful in writing an essay is because writing an essay turns out to be a “computationally shallower” problem than we thought. And in a sense this takes us closer to “having a theory” of how we humans manage to do things like writing essays, or in general deal with language.”
In fact, ChatGPT is even less sophisticated when it comes to its training. As Wolfram asserts:
“[In] ChatGPT as it currently is, the situation is actually much more extreme, because the neural net used to generate each token of output is a pure ‘feed-forward’ network, without loops, and therefore has no ability to do any kind of computation with nontrivial ‘control flow.’”
Put simply, ChatGPT uses predictive algorithms and an array of data made up entirely of publicly available information online to respond to user-created inputs. The technology is not sophisticated enough to operate outside of human-aided guidance and control. This means that ChatGPT (and similarly situated generative AI products) is functionally akin to “ordinary search engines” and predictive technology like autocomplete.
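To ground the “reasonable continuation” point, here is a deliberately tiny sketch of next-token prediction: a bigram model built from a toy corpus, generating text one token at a time. Real LLMs use vastly larger neural networks trained on billions of pages, not word counts, but the basic loop (predict a likely next token, append it, repeat) is the same shape. Every name and the corpus here are illustrative, not anything OpenAI actually uses.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the billions of pages an LLM sees.
corpus = "the public domain is free and the public domain is open".split()

# Count which token tends to follow which (a crude stand-in for a trained model).
next_token_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_token_counts[current][nxt] += 1

def continue_text(prompt: list[str], max_tokens: int = 5) -> list[str]:
    """Greedily append the most probable next token, one step at a time."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        candidates = next_token_counts.get(tokens[-1])
        if not candidates:
            break  # nothing learned about this token; stop
        tokens.append(candidates.most_common(1)[0][0])
    return tokens

# The model only ever extends what the user (the third party) supplies.
print(" ".join(continue_text(["the", "public"])))
```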
Now we apply Section 230.
For the most part, the courts have consistently applied Section 230 to algorithmically generated outputs. For example, the Sixth Circuit in O’Kroley v. Fastcase Inc. upheld Section 230 for Google’s automatically generated snippets that summarize and accompany each Google result. The Court notes that even though Google’s snippets could be considered a separate creation of content, the snippets derive entirely from third-party information found at each result. Indeed, the Court concludes that contextualization of third-party content is in fact a function of an ordinary search engine.
Similarly, the Court in Obado v. Magedson held that Section 230 applies to search result snippets:
Plaintiff also argues that Defendants displayed through search results certain “defamatory search terms” like “Dennis Obado and criminal” or posted allegedly defamatory images with Plaintiff’s name. As Plaintiff himself has alleged, these images at issue originate from third-party websites on the Internet which are captured by an algorithm used by the search engine, which uses neutral and objective criteria. Significantly, this means that the images and links displayed in the search results simply point to content generated by third parties. Thus, Plaintiff’s allegations that certain search terms or images appear in response to a user-generated search for “Dennis Obado” into a search engine fails to establish any sort of liability for Defendants. These results are simply derived from third-party websites, based on information provided by an “information content provider.” The linking, displaying, or posting of this material by Defendants falls within CDA immunity.
The Court also nods to Roommates:
“None of the relevant Defendants used any sort of unlawful criteria to limit the scope of searches conducted on them; “[t]herefore, such search engines play no part in the ‘development’ of the unlawful searches” and are acting purely as an interactive computer service…
The Court goes further, extending Section 230 to autocomplete (i.e. when the service at issue uses predictive algorithms to suggest and preempt a user’s query):
“suggested search terms auto-generated by a search engine do not remove that search engine from the CDA’s broad protection because such auto-generated terms “indicates only that other websites and users have connected plaintiff’s name” with certain terms.”
Like Google Search, ChatGPT is entirely driven by third-party input. In other words, ChatGPT does not invent, create, or develop outputs absent any prompting from an information content provider (i.e. a user). Further, nothing on the service expressly or impliedly encourages users to submit unlawful queries. In fact, OpenAI continues to implement guardrails that force ChatGPT to ignore requests that would demand problematic and/or unlawful responses (a simplified sketch of this kind of filter appears below). Compare this to Google Search, which may actually still provide a problematic or even unlawful result. Perhaps ChatGPT actually improves the baseline for ordinary search functionality.
Indeed, ChatGPT essentially functions like the “additional comments” web form in Roommates. And while ChatGPT may “transform” user input into a result that responds to the user-driven query, that output is entirely composed of third-party information scraped from the web. Without more, this transformation is simply an algorithmic augmentation of third-party content (much like Google’s snippets). And as discussed, algorithmic compilations or augmentations of third-party content are not enough to transform the service into an information content provider (e.g. Roommates; Batzel v. Smith; Dyroff v. The Ultimate Software Group, Inc.; Force v. Facebook).
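Those guardrails can be pictured, in vastly simplified form, as a refusal layer that sits between the user’s prompt and the model. The sketch below is purely illustrative; OpenAI’s actual moderation stack is far more sophisticated and model-driven, and every name here is made up. The point is the shape of the design choice at stake: the service screens queries rather than soliciting unlawful ones.

```python
# Highly simplified illustration of a pre-generation guardrail.
# Real systems use trained classifiers, not keyword lists; names are hypothetical.

DISALLOWED_TOPICS = {"build a weapon", "defame"}  # stand-in policy list

def refuse_or_answer(prompt: str, generate) -> str:
    """Screen the user's query before handing it to the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in DISALLOWED_TOPICS):
        return "I can't help with that request."
    return generate(prompt)

# A stand-in for the underlying language model.
fake_model = lambda prompt: f"[model continuation of: {prompt!r}]"

print(refuse_or_answer("Please defame my neighbor", fake_model))
print(refuse_or_answer("Summarize the Roommates case", fake_model))
```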
The Limit Does Exist
Of course, Section 230’s coverage is not without its limits. There’s no doubt that future generative AI defendants, like OpenAI, will face an uphill battle in persuading a court. Not only do defendants face the daunting challenge of explaining generative AI technologies to less technologically savvy judges, but the current judicial swirl around Section 230 and algorithms also does them no favors.
For example, the Supreme Court could very well hand down a convoluted opinion in Gonzalez that introduces ambiguity as to when Section 230 applies to algorithmic curation and augmentation. Such an opinion would only serve to undermine the precedent discussed above. Indeed, future defendants may find themselves embroiled in convoluted debates about AI’s capacity for neutrality. In fact, it would be intellectually dishonest to ignore emerging common law developments that decline to apply Section 230 to claims alleging dangerous or defective product designs (e.g. Lemmon v. Snap, A.M. v. Omegle, Oberdorf v. Amazon).
Further, the Fourth Circuit’s recent decision in Henderson v. Public Data could also prove to be problematic for future AI defendants as it imposes contributive liability for publisher activities that go beyond those of “traditional editorial functions” (which could include any and all publisher functions done via algorithms).
Lastly, as we saw in the Meta/DOJ settlement over Meta’s discriminatory algorithmic targeting of housing advertisements, AI companies cannot easily avoid liability when they materially contribute to the unlawfulness of the result. If OpenAI were to hard-code ChatGPT with unlawful responses, Section 230 would likely be unavailable. However, as you might imagine, this is a non-trivial distinction.
Public Policy Demands Section 230 Protections for Generative AI Technologies
Section 230 was initially established with the recognition that the online world would undergo frequent advancements, and that the law must accommodate these changes to promote a thriving digital ecosystem.
Generative AI is the latest iteration of web technology that has enormous potential to bring about substantial benefits for society and transform the way we use the Internet. And it’s already doing good. Generative AI is currently used in the healthcare industry, for instance, to improve medical imaging and to speed up drug discovery and development.
As discussed, courts have developed precedent in favor of Section 230 immunity for online services that solicit or encourage users to create and provide content. Courts have also extended the immunity to online services that facilitate the submission of user-created content. From a legal standpoint, generative AI tools are no different from any other online service that encourages user interaction and contextualizes third-party results.
From a public policy perspective, it is crucial that courts uphold Section 230 immunity for generative AI products. Otherwise, we risk foreclosing on the technology’s true potential. Today, there are tons of variations of ChatGPT-like products offered by independent developers and computer scientists who are likely unequipped to deal with an inundation of litigation that Section 230 typically preempts.
In fact, generative AI products are arguably more vulnerable to frivolous lawsuits because they depend entirely upon whatever queries or instructions their users may provide, malicious or otherwise. Without Section 230, developers of generative AI services must anticipate and guard against every type of query that could cause harm.
Indeed, thanks to Section 230, companies like OpenAI are doing just that by providing guardrails that limit ChatGPT’s responses to malicious queries. But those guardrails are neither comprehensive nor perfect. And as with all other efforts to moderate awful online content, the elimination of Section 230 could discourage generative AI companies from implementing said guardrails in the first place; a countermove that would enable users to prompt LLMs with malicious queries to bait out unlawful responses subject to litigation. In other words, plaintiffs could transform ChatGPT into their very own perpetual litigation machine.
And as Perault rightfully warns:
“If a company that deploys an LLM can be dragged into lengthy, costly litigation any time a user prompts the tool to generate text that creates legal risk, companies will narrow the scope and scale of deployment dramatically. Without Section 230 protection, the risk is vast: Platforms using LLMs would be subject to a wide array of suits under federal and state law. Section 230 was designed to allow internet companies to offer uniform products throughout the country, rather than needing to offer a different search engine in Texas and New York or a different social media app in California and Florida. In the absence of liability protections, platforms seeking to deploy LLMs would face a compliance minefield, potentially requiring them to alter their products on a state-by-state basis or even pull them out of certain states entirely…
…The result would be to limit expression—platforms seeking to limit legal risk will inevitably censor legitimate speech as well. Historically, limits on expression have frustrated both liberals and conservatives, with those on the left concerned that censorship disproportionately harms marginalized communities, and those on the right concerned that censorship disproportionately restricts conservative viewpoints.
The risk of liability could also impact competition in the LLM market. Because smaller companies lack the resources to bear legal costs like Google and Microsoft may, it is reasonable to assume that this risk would reduce startup activity.”
Hence, regardless of how we feel about Section 230’s applicability to AI, we will be forced to reckon with the latest iteration of Masnick’s Impossibility Theorem: there is no content moderation system that can meet the needs of all users. The lack of limitations on human awfulness mirrors the constant challenge that social media companies encounter with content moderation. The question is whether LLMs can improve what social media cannot.
Late last year, we wrote about the extremely misleading discussion around “shadow banning” on Twitter. The history of the term is important, as it originated as a tool to defeat trolls, and it had a very specific definition: making users who were deemed problematic to a site think their posts were still getting through, when no one else could actually see them. The concept began on the Something Awful forums as a tool against trolls, and migrated elsewhere. It was seen as a clever approach to trolls who especially live for reactions: they can keep posting, but they never get any reaction.
However, in 2018 the term was corrupted and, thanks to some bad reporting, morphed into a catch-all for any kind of de-ranking or algorithmic demotion. These days that has become the common usage of the term among many, even though it makes the word kinda meaningless and disconnected from the more clever anti-trolling tool it was under its original meaning.
This is because the nature of any algorithm, be it search or a recommendation feed, is that it has to uprank some items (the ones the algorithm thinks are most relevant) and downrank other items (the ones the algorithm thinks are least relevant). Yet that shouldn’t be seen as “shadow banning,” as it’s not about banning anything.
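A bare-bones sketch of what any ranking algorithm does: score items for relevance, sort, and surface the top slice. Items at the bottom are downranked, not removed. The posts, scores, and cutoff below are invented purely to illustrate the distinction; no real feed works off a three-item list.

```python
# Every feed or search algorithm does some version of this: nothing is
# "banned," but something has to come last. Scores here are made up.

posts = [
    {"id": 1, "text": "breaking news",       "relevance": 0.92},
    {"id": 2, "text": "competitor's promo",  "relevance": 0.15},  # downranked
    {"id": 3, "text": "friend's update",     "relevance": 0.78},
]

def rank_feed(items, limit=2):
    """Sort by relevance and surface only the top `limit` items."""
    ranked = sorted(items, key=lambda p: p["relevance"], reverse=True)
    return ranked[:limit]

for post in rank_feed(posts):
    print(post["id"], post["text"])
# Post 2 still exists and can still be viewed directly; it simply isn't
# recommended near the top. That's downranking, not a shadow ban.
```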
Either way, one of Elon Musk’s big pronouncements upon taking over Twitter was that he seemed all in on this idea, which he acted as if he invented, calling it “deboosting.”
Hilariously, though, just weeks later, when one of the Twitter Files discussed how Twitter already had such tools in place for what it referred to internally as “visibility filtering” (something that had been widely discussed years earlier when Twitter announced the policy), he acted as if something criminal had happened.
Indeed, soon after, he promised that Twitter would shortly be rolling out a feature to tell users if they had been “shadowbanned.”
Like oh so many of his promises, this one is still yet to materialize.
What has been shown repeatedly, however, is that Musk is now using the ability to “max deboost” those he dislikes to his own advantage. We already noted how the account that tracks his jet was, on his orders, given the most stringent visibility filtering (before he banned it entirely, despite promising not to).
Then, last month, Tesla employees charged in a filing with the NLRB that Elon had done the same to their new union’s Twitter account. And now Platformer reports that Twitter has been doing the same thing to its competitors:
Twitter has been down-ranking the corporate accounts of its competitors, including TikTok, Snap, Meta, and Instagram, Platformer has learned. The change, which was rolled out in December, means that tweets from these accounts are not recommended to users who do not follow them, and won’t show up in their For You tab, we’re told.
The down-ranking has been applied to more of TikTok’s accounts than any other company’s, according to internal documents obtained by Platformer. At least 19 of TikTok’s corporate accounts, including @TikTok_US, @TIkTokSports, and @TikTokSupport, are included in the down-ranking list, compared to three of Snap’s corporate accounts and two of Instagram’s. Publicly available data shows that engagement on tweets from @TikTok_US saw a sharp downturn in January.
The timing of this matches with that brief moment when Twitter officially changed its public policies to say that no user could link to any alternative social media platform, which pissed off basically everyone. About the only person who stood up to defend it was Musk’s mother, who looked kinda silly when Elon rolled back the policy a day later, admitting it was stupid.
However, based on the timing, it looks like Musk only rolled back the public part of the policy, saying users couldn’t link to other social media apps. What appears to have been left in place was the plan to secretly “max deboost” the corporate accounts of those other companies, which is the kind of thing you do when you’re really secure in your value proposition over them.
Elon is, of course, free to do this. It’s his playground and he can do whatever he wants with it. But it’s pretty funny that people were all worked up about publicly revealed plans to try to use these algorithmic filtering tools to boost “healthy conversations,” and yet those very same people don’t seem to much care when Elon is using it to settle personal scores.
The Project Management Institute Training Bundle has 2 courses to help you become a project management expert. The first course focuses on the basic principles and the lifecycle of project management. You’ll learn about group interactions, managing cost factors, managing risk factors, and more. The second course focuses on Agile project management. The bundle is on sale for $49.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
In 2014, Twitter sued the DOJ over its National Security Letter (NSL) reporting restrictions, which prevented the company from producing transparency reports with much transparency in them. NSLs were only allowed to be reported in bands. And what broad bands they were: if Twitter received 20 NSLs, it had to report that as 0-499. If it received 498, it had to use the same band. And the band started at zero, so even if Twitter didn’t receive any, it would still look like it did.
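For concreteness, here is the arithmetic of those bands in a few lines of Python. The helper is hypothetical (not the government’s wording), but the math matches the rule described above: every count from 0 through 499 collapses into the same “0-499” disclosure.

```python
def nsl_reporting_band(count: int, band_size: int = 500) -> str:
    """Collapse an exact NSL count into the coarse band the rules allow."""
    low = (count // band_size) * band_size
    return f"{low}-{low + band_size - 1}"

for received in (0, 20, 498, 500):
    print(received, "->", nsl_reporting_band(received))
# 0, 20, and 498 all report as "0-499"; only 500 moves up to "500-999".
```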
After a lot of litigation back-and-forth, the federal court finally dismissed Twitter’s First Amendment lawsuit in 2020, claiming the government had said enough things about national security to exit the lawsuit and continue to limit NSL reporting to bands of 500.
Twitter appealed. The Ninth Circuit Court of Appeals has now weighed in. It says basically the same thing: the government has a national security interest in restricting NSL reporting from NSL recipients. And that interest outweighs Twitter’s First Amendment interest in providing more detailed information in its annual transparency reports.
The deciding factor in this decision [PDF] is the government’s ex parte presentations to the appellate judges. According to the court, the presentation made it very clear that smaller reporting bands would let terrorists and criminals gain the upper hand. [Cue ominous music.]
While we are not at liberty to disclose the contents of the classified materials that we reviewed, our analysis under the narrow tailoring prong depends principally on the knowledge we gleaned from our review of that material. The classified materials provided granular details regarding the threat landscape and national security concerns that animated the higher-level conclusions presented in the unclassified declarations. The classified declarations spell out in greater detail the importance of maintaining confidentiality regarding the type of matters as to which intelligence requests are made, as well as the frequency of these requests. Against the fuller backdrop of these explicit illustrations of the threats that exist and the ways in which the government can best protect its intelligence resources, we are able to appreciate why Twitter’s proposed disclosure would risk making our foreign adversaries aware of what is being surveilled and what is not being surveilled—if anything at all.
The thing about ex parte presentations is that they’re non-adversarial. It’s basically the government running the show, pointing out only the things that support its desired outcome, presumably wrapped in a bunch of jargon that makes things that may not actually be a threat to national security sound like a threat to national security.
That being said, I’m glad the Ninth Circuit actually forced the government to submit something in support of its national security claims. Most courts don’t. The mere invocation of the state secrets privilege is often all that’s needed to dismiss a lawsuit.
Part of the government’s argument is somewhat more amusing. Sounding like an exasperated middle-manager dealing with a last-minute time-off request, the DOJ claims that if it lets Twitter do it (utilize narrower reporting bands), then it will have to let everyone do it. And that way lies madness.
Mr. Tabb also explained that if Twitter were allowed to make its granular disclosures, other recipients of national security process would seek to do the same. And the result would be an even greater exposure of U.S. intelligence capabilities and strategies.
Well, yeah. It probably would need to let others do it, too. But I doubt this would result in the sort of data mining by our nation’s enemies that will finally tip the War on Terror in their favor. Terrorists and criminals use social media services. They also know governments routinely request user info and other data/communications when performing investigations. Unless the transparency reports are linked to unredacted NSLs containing targeted account names, it’s unlikely that breaking these numbers down a bit more would tell investigation targets something they don’t already know.
Twitter can ask the Supreme Court to review this case. But given that the Supreme Court has denied certiorari in two national security-related lawsuits in recent months, it seems unlikely this will be the case it decides it needs to review. The government wins. And the public will have to continue settling for its half-assed transparency.
A bipartisan coalition of Senators, including Roger Wicker (MS), Todd Young (IN), Mark Kelly (AZ) and Ben Ray Luján (NM), is poised to reintroduce legislation supported by telecom monopolies that could ultimately result in tech giants paying telecom giants billions of dollars for no coherent reason.
The soon-to-be-reconstituted FAIR Contributions Act, word of which was first leaked to Axios, would direct the FCC to issue a report on “the feasibility of collecting USF contributions from internet edge providers”:
“It is important to ensure the costs of expanding broadband are distributed equitably and that all companies are held accountable for their role in shaping our digital future,” Wicker said in a statement to Axios.
While it’s true that FCC programs like the USF are facing a budgetary shortfall, the idea that “Big Tech” should be the one that pays for this shortfall has been a telecom industry dream policy since the net neutrality fight first heated up in 2003. But while forcing tech companies to pay for the telecom industry’s networks might be “feasible,” that doesn’t mean it makes actual sense.
Big Telecom has tried desperately for decades to force Big Tech companies to pay them billions of additional dollars for no coherent reason. It’s what began the net neutrality wars. This has largely involved falsely claiming that companies like Netflix and Google get a “free ride” on the internet, and should therefore be forced to pay telecom giants billions of dollars to fund broadband expansion.
It’s effectively double dipping.
Telecom giants want to monopolize an essential utility, and they work tirelessly to erode both competition and regulatory oversight. This in turn has allowed them to skimp on broadband expansion and drive up prices for absolutely everybody in the chain. They also gobble up billions in subsidies for networks they almost always fail to uniformly deploy, yet they rarely face consequences for it.
At the same time, they have long believed that tech giants owe them money for… simply existing.
This bogus claim that tech companies “aren’t paying their fair share” is decades old. Both Mike and I have debunked variations of it more times than I can count. Yet the free ride claim pops up again here, this time by Luján in his prepared comments, unchallenged by Axios:
“This report will examine how the largest tech companies can pay their fair share. The future is online, and it’s critical that essential broadband programs receive robust funding,” Luján said.
Again, the claim that tech giants “don’t pay their fair share” is a false telecom industry talking point, and policymakers and reporters should know it by now. Companies like Netflix and Google spend untold billions not only on bandwidth, but on cloud storage, transit routes, content delivery networks (CDNs), undersea transit lines, and in Google’s case, even a major last-mile broadband ISP (Google Fiber).
You can usually tell when a Senator is doing heavy lifting for the telecom sector on this subject. One, they’ll repeat the false “fair share” claim. Two, they’ll avoid talking about how the best way to shore up broadband funding problems is to implement meaningful reform of existing subsidy programs. Or that the FCC as a whole could do a hell of a lot better job standing up to monopoly power.
Most U.S. policy makers literally can’t even acknowledge that telecom monopolies exist and cause widespread problems despite decades of very obvious evidence.
If you want to fix broadband funding shortfalls, it starts with policing the untold billions we throw at local monopolies like Comcast, Frontier, or Charter for networks they always, mysteriously, half deploy. It starts with policing companies like AT&T, accused of ripping off existing school broadband programs without penalty. We’re about to throw $45 billion at broadband as part of the infrastructure bill, and you can be damn certain that giants like Comcast will be getting the lion’s share of that money for broadband deployments that may or may not ever actually arrive.
Without telecom subsidy reform and increased accountability for telecom monopolies, you’re just throwing billions upon billions of dollars at an industry with a multi-decade history of widespread subsidy fraud and abuse. But you’ll notice this is never even mentioned by folks pursuing this new Big Tech telecom tax, or the major press outlets covering the policy. Not even as fleeting context.
Despite the entire proposition being stupid, telecoms have had consistent luck in getting captured or otherwise stupid policymakers to support this kind of push. In part because it helps politicians seem like they’re tackling the ever nebulous “digital divide,” a problem telecom monopolization effectively created.
There’s a big, renewed debate over exactly these disingenuously named “fair share” proposals over in the EU. South Korean regulators also implemented a similar regulatory regime, which has resulted in ISPs suing Netflix just because shows like Squid Game were popular. It’s a dumb policy, and any costs borne by tech giants will just get passed on to consumers, while the money gets thrown at telecom giants with a history of fraud.
I’ve debunked this gibberish so many times now I feel like I’m living in a weird purgatorial space where historical context and factual reality simply no longer exist. These relentless efforts by the telecom lobby to tax tech giants aren’t being conducted in good faith, and any politician or news outlet that parrots them while ignoring historical context is part of the problem.