A couple of years back, we talked about how Ubisoft handled its own accidental leak of the Beyond Good & Evil remaster to its subscribers. Months before it was due to be released, a technical error resulted in an incomplete version of the game suddenly being available on Ubisoft+. Some people grabbed copies of it, played it, and streamed or uploaded footage of the game on YouTube and elsewhere. Understandably, Ubisoft freaked out, worried that an unfinished version of the game showing up might tarnish what would eventually be a complete game. To that end, they stupidly went out and attempted to DMCA to death every bit of content and information about the leaked game and footage before eventually landing on what was the right way to handle this all along.
That’s exactly what Ubisoft should have done from the jump. Acknowledge the leak, note it’s not in its finished state, express excitement for the finished product, fin.
This past week, it appears some folks got their hands on copies of Assassin’s Creed Shadows prior to its release. As you’d expect, those copies were played or resold on the internet, after which footage of the game was uploaded to all kinds of streaming and social media sites. And if you’re wondering if Ubisoft learned its lesson from its previous experience with this sort of thing, rest assured the company instead decided to learn it all over again.
Chunks of gameplay footage are now easily findable online, even as Ubisoft battles to remove the videos. Other fans have posted summaries of their time with the game so far, including story spoilers. And there’s still weeks to go until Assassin’s Creed Shadows’ 20th March launch date.
Yup, the company attempted to play bury-the-leak via takedowns once again. And, as you’d expect, it is going precisely as well as the last attempt. Which is to say terribly. And, because of those actions, even more attention is being paid to that leaked footage.
As it did last time, Ubisoft eventually got around to saying all the right words.
“We are aware players have accessed Assassin’s Creed Shadows ahead of its official release,” Ubisoft said in a message from the Assassin’s Creed community team, posted to Reddit. “The development team is still working on patches to prepare the experience for launch and any footage shared online does not represent the final quality of the game.”
“Leaks are unfortunate and can diminish the excitement for players,” Ubisoft’s statement continues. “We kindly ask you not to spoil the experience for others. Thank you to our community for already taking steps to protect everyone from spoilers. Stay in the shadows, avoid the spoilers, and keep an eye on our channel for more official surprises in the coming weeks! 20th March will be here soon!”
The statement is, once more, just about perfect. It does everything Ubisoft needs: informs the public that the leaked game isn’t complete, expresses excitement for the release, thanks and otherwise looks out for its fans, and so on. There was zero need to try to play whac-a-mole with the leaks.
In fact, it sure sounds like Ubisoft should have used the leak instead.
Physical copies of the game were likely manufactured weeks if not months ago, while Ubisoft will continue to issue patches up to and beyond the game’s street date in March. That said, fans report not seeing too many issues with the game so far.
Not seeing too many issues in a leaked game that hasn’t gotten its inevitable day-one patch is about as much of an endorsement as a gaming company could hope for.
What I’m hoping for, however, is that Ubisoft doesn’t have to learn this lesson a third time.
Last November, the Anti-Defamation League (ADL) released Steam-Powered Hate, accusing Valve’s gaming platform, Steam, of fostering extremism. The report dropped just before Senator Mark Warner, a SAFE TECH Act proponent, threatened Steam’s owner, raising concerns about the political motivations behind the ADL’s claims.
The ADL analyzed over one billion data points, flagging just 0.5% as “hateful.” Yet they misrepresent Steam—primarily a game marketplace—as a social media hub overrun with extremism, despite offering no real expertise in online content moderation or gaming culture. Meanwhile, they give powerful figures like Elon Musk a pass while pushing for government intervention in digital spaces they don’t understand.
This isn’t new—the ADL has a history of advocating speech restrictions, from social media to video games. As an American Jew, I find their big-government approach to content moderation alarming. Regulators must reject pressure from advocacy groups that misrepresent online communities and threaten free expression in the name of fighting extremism.
The ADL Misunderstands Gaming’s Complex and Notoriously Edgy Environment
Gaming communities operate on a different wavelength than typical online spaces. Gamers are notorious for their dark humor, edgier memes, and a communication style that can seem alien to outsiders. The ADL, in its attempt to analyze a platform central to gaming culture, failed to grasp this, making sweeping generalizations about a community it clearly doesn’t understand.
Take their report’s biggest claim: the vast majority of so-called “hateful content” was Pepe the Frog—a meme that, while hijacked by extremists in recent years, remains widely used in mainstream gaming culture. Even the meme’s creator was outraged by its association with hate groups. Yet the ADL doesn’t distinguish between an actual extremist Pepe and a harmless, widely used gaming meme. Instead, they lump them together, inflating their numbers.
Their AI system, “HateVision,” identified nearly one million extremist symbols—over half of which were Pepe. The AI was trained on a limited dataset of images and keywords the ADL pre-selected as hateful, but it failed to differentiate between legitimate extremism and gaming’s irreverent meme culture. Worse, it didn’t distinguish between U.S.-based and international users, ignoring the fact that gaming communities operate under different cultural norms worldwide.
The AI’s failures didn’t stop at images. It also couldn’t tell the difference between actual hate speech and the tongue-in-cheek, often provocative style of gaming communities. While gaming culture can be abrasive, the vast majority of players understand the difference between in-game trash talk and real-world hostility. The ADL? Not so much.
The ADL also went after copypastas—blocks of text copied and pasted to provoke reactions—identifying 1.83 million “potentially harmful” ones without bothering to check context. Their keyword-based approach flagged terms like “boogaloo” and “amerikaner” without acknowledging their multiple meanings. In gaming, “boogaloo” is mostly a Gen-Z meme, not a secret alt-right code word; the term does carry alt-right connotations in some circles, but it has plenty of innocuous ones too. “Amerikaner” can refer to a cookie, the German word for “American,” or even a famous YouTuber’s username. They also flagged “goyim” as a slur, despite it being a common and sometimes affectionate term that Jewish people use among themselves. Though antisemites can certainly wield the word offensively, the ADL made no such distinctions.
Curious, I did a Steam keyword search for “Amerikaner.” The first result was a left-winger calling out racism. The second was someone mocking Americans in Counter-Strike. The third was a non-English post. None of the results, in my opinion, rose to the level of extremism. I also searched “Boogaloo” and found references to the classic “electric boogaloo” meme, a non-English speaker using the term, and a gaming forum name. The ADL didn’t bother with this level of nuance—they just scraped forums, pulled words out of context, and called it a day.
The ADL also attacked Garry’s Mod (G-Mod), a sandbox game known for its anything-goes creativity. They focused on one mod featuring maps of real-life mass shootings, citing comments with words like “based,” “Sigma,” and even “Subscribe to PewDiePie” as signs of extremism. But these are common ‘chronically online’ phrases with broad uses. “Based” is Gen-Z slang used by individuals on both the left and right. “Sigma” is a meme mocking “alpha male” tropes. And while the Christchurch shooter did mention PewDiePie, it isn’t exactly a stretch to say the ADL is unfairly targeting him. Yes, PewDiePie has had controversies, but painting him as a hate symbol is a major leap.
The report wraps up with the tragic white supremacist attack in Turkey, where the ADL notes that while there were red flags on the shooter’s Steam profile, there’s “no evidence” he was directly inspired by extremist content on the platform. Still, they use this tragedy to argue Steam isn’t doing enough to moderate content. But even their own research found Steam actively filters swastikas into hearts—identifying only 11 profiles where this workaround failed. Eleven profiles. Out of millions. That’s an edge case, not a crisis.
To be fair, the study did identify a small number of fringe groups glorifying hate and violence. But the bigger question is whether the ADL’s findings actually reflect a serious problem—or if they’re simply misreading an edgy, chaotic, but largely non-extremist gaming culture. And given how little extremist content the ADL found worldwide, it looks like Steam is actually doing its job.
The ADL’s Steam Comparison is Hypocritical and Misguided
Still, the ADL reportedly takes issue with Steam’s so-called “ad hoc” approach to content moderation, claiming that despite Valve’s removal efforts, the platform still “fails to systematically address the issue of extremism and hate.” But this critique ignores the reality of gaming culture and Steam’s own policies.
Steam’s moderation reflects the nature of its community. Its content rules fall into two categories: one for games—allowing all titles except those that are illegal or blatant trolling—and another for user-generated content, which bans unlawful activity, harassment, IP violations, and commercial exploitation. The ADL criticizes Steam for not taking a stricter stance like Microsoft and Roblox, but that comparison is misleading at best.
Microsoft’s gaming history isn’t exactly a beacon of virtue. Xbox 360 live chats were infamous for racist slurs, and Call of Duty’s lobbies remain a toxic free-for-all. Meanwhile, Minecraft—the game the ADL seems to hold in high regard—was created by someone with a history of antisemitic remarks, and Microsoft itself has faced accusations of workplace discrimination. Yet, the ADL doesn’t seem nearly as concerned about these issues.
As for Roblox, while it does enforce stricter content moderation, it’s far from an extremist-free utopia. The Australian Federal Police have warned about the platform’s potential for radicalization, and NBC has reported extremist content explicitly targeting children. If anything, this suggests that heavy-handed moderation doesn’t necessarily eliminate bad actors—it just pushes them to adapt.
Steam’s approach may not align with the ADL’s ideal vision of content moderation, but pretending that Microsoft and Roblox represent the gold standard ignores their own deep-seated issues. It doesn’t make sense for a platform like Steam to adopt the same policies as Xbox and Roblox. Both of those are fully live-service platforms, whereas Steam is primarily a storefront for games rather than a place where users are constantly interacting with one another in-game. This creates market differentiation: a platform’s policies reflect the services it offers, and if users find those policies problematic, they can jump ship to another provider.
Regulators Must Beware of Overreach from Non-Trust & Safety Experts Like the ADL
In its report, the ADL calls for a national gaming safety task force, urging policymakers to create a federally backed group to “combat this pervasive issue” through a multi-stakeholder approach. On paper, this sounds like a noble goal. In practice, it’s a recipe for government overreach that could stifle the gaming industry’s creative and independent spirit.
Gaming has thrived because of its grassroots nature—built by passionate developers and players, not by bureaucrats or advocacy groups with no real understanding of gaming culture, online community norms, or trust and safety. A federal task force risks imposing rigid, top-down regulations that don’t fit the dynamic and ever-evolving gaming world. Worse, it could open the door to politically motivated interventions that prioritize appearances over real solutions.
The ADL also suggests Steam engage in multi-stakeholder moderation efforts. But who controls the conversation? When powerful corporations and activist organizations dominate these discussions, smaller developers and gaming communities get sidelined. That’s how you end up with policies shaped by corporate interests and advocacy agendas rather than solutions that actually work for gamers. And let’s be blunt—the ADL has no business dictating content moderation policies for gaming platforms.
The ADL is not an expert on content moderation, online community dynamics, or trust and safety. It has no meaningful experience navigating the complexities of digital spaces, algorithmic content regulation, or the unique cultural norms that define gaming communities. Instead, their report relies on anecdotal evidence, an oversimplified AI model, and out-of-context symbols, all of which lead to flawed conclusions and misleading claims.
Steam isn’t Microsoft or Disney. It’s run by Valve, a privately held company led by Gabe Newell, without the vast political and financial clout of the industry giants. Forcing broad content moderation mandates onto platforms like Steam sets a dangerous precedent, burdening smaller businesses that lack the infrastructure of the major tech companies. And let’s be clear: Steam’s primary function is to sell video games, not to serve as a social media watchdog.
The ADL’s concerns about extremism may be well-intended, but their lack of expertise, misinterpretation of gaming culture, and one-size-fits-all approach make them uniquely unqualified to weigh in on this issue. Their push for federal intervention aligns with the broader SAFE TECH Act’s concerning political and financial motivations, which could disproportionately harm platforms that aren’t backed by corporate lobbying power.
Yes, online extremism is a problem—but handing control to out-of-touch regulators and advocacy groups that don’t understand the space isn’t the answer. The gaming industry must stay free, innovative, and independent—not bogged down by heavy-handed government oversight that threatens to erase the very culture that makes online gaming communities thrive.
Elizabeth Grossman is a first-year law student in the Intellectual Property program at the University of Akron School of Law, with a goal of working in tech policy.
During peak COVID lockdowns, New York State passed a law requiring that ISPs (with more than 20,000 subscribers) offer low-income state residents (and low income residents only) a 25 Mbps broadband tier for $15. Big Telecom didn’t much like that, but their multi-year effort to kill the law, first passed in 2021, recently fell apart when the Trump Supreme Court refused to hear their challenge.
Now telecom giants, long fat and comfortable thanks to regional monopolies, are worried by the fact that other states are following suit. Vermont, California and Massachusetts recently proposed their own versions of New York’s law requiring ISPs make broadband affordable for poor people.
For a generation, the U.S. government largely looked the other way as big telecom companies like AT&T and Comcast crushed all competition underfoot, lobbying (and sometimes literally bribing) lawmakers to keep it that way. The result: Americans pay significantly more for patchy, slower broadband than consumers in most developed nations. With terrible customer service to match.
Telecom lobbyists have long insisted that having government do anything to address this competitive logjam is radical overreach, whether that’s net neutrality, privacy oversight, or, most notably, price caps. Yet at the same time, they’re on the cusp of a generational victory, with the Trump administration effectively destroying what’s left of federal consumer protection.
So not too surprisingly, states are rushing to fill that void federal regulators are leaving. And telecom giants are whining about a problem they created; first by dismantling competition and jacking up consumer rates, second by dismantling federal oversight:
“Any attempt by individual states to regulate prices or other parts of the broadband market will undermine all of the connectivity progress we have made, discourage investment, and end up hurting consumers.”
When New York passed its law, AT&T lobbyists put on a little performance where they pretended they were leaving New York state due to a “hostile business environment.” In reality, the company barely did business in the state in the first place; the home 5G service in question had extremely limited availability.
AT&T engaged in the ploy in the hopes that the Supreme Court would reconsider its refusal to hear the case. But, apparently busy doing other favors for AT&T (like eviscerating the entirety of U.S. federal consumer protection oversight), the Supreme Court again refused to hear the case this week. That opens the door to other states following suit, much to the chagrin of Comcast, AT&T, Verizon, and Charter.
As with everything (net neutrality, privacy, basic transparency requirements), telecom will insist that any government action to lower broadband prices is radical overreach. But requiring that they provide a cheap, slow tier to poor people isn’t a huge ask. In the gigabit era, providing a 25 Mbps tier costs big providers a pittance of their fat, captured revenues.
It’s also worth noting that companies like AT&T are massively politically powerful in state legislatures, and the Vermont, California, and Massachusetts bills haven’t passed yet. And despite kicking all this off with its own law mandating affordable broadband for the poor, New York has yet to actually enforce that law, so the full scope and impact of this will be nowhere near as dramatic as telecom lobbyists will claim.
You shouldn’t need a permission slip to read a webpage—whether you do it with your own eyes, or use software to help. AI is a category of general-purpose tools with myriad beneficial uses. Requiring developers to license the materials needed to create this technology threatens the development of more innovative and inclusive AI models, as well as important uses of AI as a tool for expression and scientific research.
Threats to Socially Valuable Research and Innovation
Requiring researchers to license fair uses of AI training data could make socially valuable research based on machine learning (ML) and even text and data mining (TDM) prohibitively complicated and expensive, if not impossible. Researchers have relied on fair use to conduct TDM research for a decade, leading to important advancements in myriad fields. However, licensing the vast quantity of works that high-quality TDM research requires is frequently cost-prohibitive and practically infeasible.
Fair use protects ML and TDM research for good reason. Without fair use, copyright would hinder important scientific advancements that benefit all of us. Empirical studies back this up: research using TDM methodologies is more common in countries that protect TDM research from copyright control; in countries that don’t, copyright restrictions stymie beneficial research. It’s easy to see why: it would be impossible to identify and negotiate with millions of different copyright owners to analyze, say, text from the internet.
The stakes are high, because ML is critical to helping us interpret the world around us. It’s being used by researchers to understand everything from space nebulae to the proteins in our bodies. When the task requires crunching a huge amount of data, such as the data generated by the world’s telescopes, ML helps rapidly sift through the information to identify features of potential interest to researchers. For example, scientists are using AlphaFold, a deep learning tool, to understand biological processes and develop drugs that target disease-causing malfunctions in those processes. The developers released an open-source version of AlphaFold, making it available to researchers around the world. Other developers have already iterated upon AlphaFold to build transformative new tools.
Threats to Competition
Requiring AI developers to get authorization from rightsholders before training models on copyrighted works would limit competition to companies that have their own trove of training data, or the means to strike a deal with such a company. This would result in all the usual harms of limited competition—higher costs, worse service, and heightened security risks—as well as reducing the variety of expression used to train such tools and the expression allowed to users seeking to express themselves with the aid of AI. As the Federal Trade Commission recently explained, if a handful of companies control AI training data, “they may be able to leverage their control to dampen or distort competition in generative AI markets” and “wield outsized influence over a significant swath of economic activity.”
Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, widely considered to be the first lawsuit over AI training rights ever filed. Ross Intelligence sought to disrupt the legal research duopoly of Westlaw and LexisNexis by offering a new AI-based system. The startup attempted to license the right to train its model on Westlaw’s summaries of public domain judicial opinions and its method for organizing cases. Westlaw refused to grant the license and sued its tiny rival for copyright infringement. Ultimately, the lawsuit forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law.
Similarly, shortly after Getty Images—a billion-dollar stock images company that owns hundreds of millions of images—filed a copyright lawsuit asking the court to order the “destruction” of Stable Diffusion over purported copyright violations in the training process, Getty introduced its own AI image generator trained on its own library of images.
Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry. To develop a “foundation model” that can be used to build generative AI systems like ChatGPT and Stable Diffusion, developers need to “train” the model on billions or even trillions of works, often copied from the open internet without permission from copyright holders. There’s no feasible way to identify all of those rightsholders—let alone execute deals with each of them. Even if these deals were possible, licensing that much content at the prices developers are currently paying would be prohibitively expensive for most would-be competitors.
We should not assume that the same companies who built this world can fix the problems they helped create; if we want AI models that don’t replicate existing social and political biases, we need to make it possible for new players to build them.
Nor is pro-monopoly regulation through copyright likely to provide any meaningful economic support for vulnerable artists and creators. Notwithstanding the highly publicized demands of musicians, authors, actors, and other creative professionals, imposing a licensing requirement is unlikely to protect the jobs or incomes of the underpaid working artists that media and entertainment behemoths have exploited for decades. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is, as EFF Special Advisor Cory Doctorow has written, like trying to help a bullied kid by giving them more lunch money for the bully to take.
Entertainment companies’ historical practices bear out this concern. For example, in the late 2000s to mid-2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There is no reason to believe that the same companies will treat their artists more fairly once they control AI.
Threats to Free Expression
Generative AI tools like text and image generators are powerful engines of expression. Creating content—particularly images and videos—is time intensive. It frequently requires tools and skills that many internet users lack. Generative AI significantly expedites content creation and reduces the need for artistic ability and expensive photographic or video technology. This facilitates the creation of art that simply would not have existed and allows people to express themselves in ways they couldn’t without AI.
Some art forms historically practiced within the African American community—such as hip hop and collage—have a rich tradition of remixing to create new artworks that can be more than the sum of their parts. As professor and digital artist Nettrice Gaskins has explained, generative AI is a valuable tool for creating these kinds of art. Limiting the works that may be used to train AI would limit its utility as an artistic tool, and compound the harm that copyright law has already inflicted on historically Black art forms.
Generative AI has the power to democratize speech and content creation, much like the internet has. Before the internet, a small number of large publishers controlled the channels of speech distribution, controlling which material reached audiences’ ears. The internet changed that by allowing anyone with a laptop and Wi-Fi connection to reach billions of people around the world. Generative AI magnifies those benefits by enabling ordinary internet users to tell stories and express opinions by allowing them to generate text in a matter of seconds and easily create graphics, images, animation, and videos that, just a few years ago, only the most sophisticated studios had the capability to produce. Legacy gatekeepers want to expand copyright so they can reverse this progress. Don’t let them: everyone deserves the right to use technology to express themselves, and AI is no exception.
Threats to Fair Use
In all of these situations, fair use—the ability to use copyrighted material without permission or payment in certain circumstances—often provides the best counter to restrictions imposed by rightsholders. But, as we explained in the first post in this series, fair use is under attack by the copyright creep. Publishers’ recent attempts to impose a new licensing regime for AI training rights—despite lacking any recognized legal right to control AI training—threatens to undermine the public’s fair use rights.
By undermining fair use, the AI copyright creep makes all these other dangers more acute. Fair use is often what researchers and educators rely on to make their academic assessments and to gather data. Fair use allows competitors to build on existing work to offer better alternatives. And fair use lets anyone comment on, or criticize, copyrighted material.
When gatekeepers make the argument against fair use and in favor of expansive copyright—in court, to lawmakers, and to the public—they are looking to cement their own power, and undermine ours.
A Better Way Forward
AI also poses real harms that demand real solutions.
Many creators and white-collar professionals increasingly believe that generative AI threatens their jobs. Many people also worry that it enables serious forms of abuse, such as AI-generated nonconsensual intimate imagery, including of children. Privacy concerns abound, as does consternation over misinformation and disinformation. And it’s already harming the environment.
Expanding copyright will not mitigate these harms, and we shouldn’t forfeit free speech and innovation to chase snake oil “solutions” that won’t work.
We need solutions that address the roots of these problems, like inadequate protections for labor rights and personal privacy. Targeted, issue-specific policies are far more likely to succeed in resolving the problems society faces. Take competition, for example. Proponents of copyright expansion argue that treating AI development like the fair use that it is would only enrich a handful of tech behemoths. But imposing onerous new copyright licensing requirements to train models would lock in the market advantages enjoyed by Big Tech and Big Media—the only companies that own large content libraries or can afford to license enough material to build a deep learning model—profiting entrenched incumbents at the public’s expense. What neither Big Tech nor Big Media will say is that stronger antitrust rules and enforcement would be a much better solution.
What’s more, looking beyond copyright future-proofs the protections. Stronger environmental protections, comprehensive privacy laws, worker protections, and media literacy will create an ecosystem where we will have defenses against any new technology that might cause harm in those areas, not just generative AI.
Expanding copyright, on the other hand, threatens socially beneficial uses of AI—for example, to conduct scientific research and generate new creative expression—without meaningfully addressing the harms.
The “rule of law” folks (who also have a sizable overlap with the “party of free speech”) are at it again. Around the nation, legislators emboldened by the Republican party’s embrace of bigotry have been passing bills banning books (or, worse, subjecting librarians to criminal charges). Almost as often as a law gets passed, it gets sued out of existence, because most courts are more inclined to protect constitutional rights than throw their support behind closed-minded people who think everyone should only have access to content they think is acceptable.
So, when the laws fail to survive constitutional challenges and/or fail to gather governors’ signatures, the “rule of law” people decide it’s time to take matters into their own hands. The sheer amount of pettiness on display here is astounding. Rather than respect the viewpoints and content preferences of others, stupid people are doing extremely shitty things to prevent others from accessing the content they’re seeking.
When the Crown Point Community Library underwent a building project at its Winfield branch, workers had to move the branch’s bookshelves. After they’d been moved, staff found books that had been hidden.
All books that were found had been challenged, said Julie Wendorf, the library’s director and president of the Indiana Library Federation.
“They’re mostly LGBTQ materials that are found under the shelves,” Wendorf said. “It’s quite suspicious that they would only be of that topic matter.”
Yeah, it’s super-weird that the only books being hidden by “patrons” are those they failed to get removed from the shelves lawfully. According to Wendorf, another popular tactic deployed by these terrible people is much less subtle: throwing the books into library trash cans.
And it’s not isolated to Indiana. Librarians are reporting the same activities in other states that have aggressively pursued book bans of LGBTQ+ content (along with historical depictions of US racist activities and policies), such as Texas, Florida, and Iowa.
It goes further than books that have been unsuccessfully challenged. In one library, an ill-willed individual decided to clear the “religion” section of books that didn’t agree with their preferred religion.
In 2023, a patron asked about Christian books, found them near Islamic books and moved more than 30 books throughout the library, either in between or behind shelves.
Yep, that’s the mentality we’re dealing with here in the US. And that’s why people who loudly support Trump and his sycophants scattered around the nation should never have their bad-faith arguments entertained, much less debated. These people have already decided your views don’t matter and that they’re willing to do whatever it takes to ensure no one stumbles across content they don’t like, no matter what age they are. These are ridiculous people who are, despite their mental, moral, and emotional limitations, capable of carrying out the harms the nation’s courts are doing their level best (in most cases) to prevent local governments from committing.
The first Trump FCC tried to give Musk nearly a billion dollars to deliver expensive Starlink access to some traffic medians and airport parking lots. The Biden FCC clawed back most of those subsidies, (correctly) arguing that the service couldn’t deliver consistent speeds, and that if we’re going to spend taxpayer money on broadband, more future-proof and less capacity-constrained options like fiber and 5G should probably be prioritized.
Not surprisingly, Trump 2.0 is going to massively overcompensate for this fake scandal, and slather their favorite fake engineer billionaire manbaby with cash at every conceivable opportunity.
That apparently starts with giving Musk and Starlink a lucrative new FAA contract as Musk and his 4chan tween DOGE minions set about pretending to fix government by throwing it into chaos. Musk appears to be trying to elbow out Verizon, which has an existing 15-year, $2 billion contract with the agency to upgrade its infrastructure, obtained through traditional transparent bidding processes.
The length and price tag of Starlink’s new FAA contract were, unsurprisingly, not publicly disclosed. Which is weird for a DOGE figurehead who professes to care so much about transparency:
“The contract comes while Musk is leading efforts to make deep cuts in federal government spending, including staffing cuts at the FAA, and some critics are raising questions about conflicts of interest over his role overseeing government agencies that are supposed to be regulating his businesses.”
Bloomberg had a little more leaked inside detail, noting the partnership would “eventually” include 4,000 Starlink terminals and be deployed over the next 12 to 18 months. Follow up reporting from the Washington Post suggests there’s some consternation about Musk’s giant handout among FAA officials.
In a post to his right wing propaganda platform, Musk stated, without any sort of evidence, that the “Verizon system is not working and so is putting air travelers at serious risk.” He’s basically falsely claiming that Verizon might be killing U.S. air travelers.
I’d just like to pause for a moment to acknowledge that as somebody who has probably written more about Verizon than anybody alive, it takes a very specific type of shitty villain to have me backing Verizon.
Verizon signed up for Trump 2.0 eager to get a giant tax cut for doing nothing. And relentless attacks on organized labor. And the total evisceration of whatever’s left of the FCC’s consumer protection authority. And they’re keen to get their giant $20 billion merger with Frontier rubber stamped.
That’s a lot of potential money at stake, so I’m not sure Verizon will show any backbone and file suit here. But if they don’t, shareholders will certainly have the opportunity to sue. Knowing Verizon’s greasy lobbying and legal practices pretty intimately, it’s all a very leopards-eating-faces sort of affair.
But again, Musk stealing Verizon’s FAA contract is just one of countless conflicts of interest that arise with having an unelected bureaucrat illegally declaring how government should or shouldn’t function and illegally bypassing bidding processes. Not to mention the numerous privacy and national intelligence issues.
The FAA contract is certainly just the opening salvo for Musk favoritism. The U.S. government has already threatened to pull Ukraine’s access to Starlink unless they sign off on a mineral deal that would be beneficial to Tesla. You probably also missed that USAID officials were investigating Starlink‘s use in Ukraine right before Trump and Musk engaged in a rapid unscheduled disassembly of the agency.
It’s clear the Trump NTIA is also hoping to redirect some of the $42.5 billion in BEAD broadband infrastructure subsidies away from existing projects and toward Musk’s Starlink whenever possible. That’s not just bad due to corruption, but because it’s going to wind up redirecting a lot of taxpayer money away from small local businesses and popular community-owned broadband networks.
Starlink is a good option if you’re stuck in the middle of nowhere with nothing else. But “I didn’t do the reading” guys like Joe Rogan tend to think Starlink is akin to some kind of magic pixie dust you can just sprinkle around to fix everything.
Rank corruption aside, the GOP is genuinely convinced that Musk is an engineering super-genius who can fix government with a wave of his hand. They genuinely have no idea that this persona was a press-enabled mythology providing cover for a rank opportunist who takes credit for other people’s ideas, something the tech press only belatedly discovered during his bungled takeover of Twitter.
So they’re keen on throwing all of their eggs in the Elon Musk basket, fairly oblivious to the fact they’ve given absolute power to a conspiratorial oligarch who genuinely has no Earthly idea what he’s actually doing. So yeah, a lot of this is just the corrupt cronyism pretty typical of an authoritarian kakistocracy. But a lot of it genuinely is being driven by rank delusion about Musk’s actual intellect and expertise, which is going to end extremely, extremely badly for absolutely everybody involved.