We all should know by now that Nintendo is incredibly protective of its IP. When it comes to anything having to do with Pokémon specifically, all the more so. While they would tell you that they’re just protecting their IP, the end result is that some of the biggest Pokémon fans out there that just want to do some fun things that represent no harm to Nintendo get shut down by threats, lawyers, or copyright strikes.
Take the YouTube series called PokeNational Geographic, for instance. While this YouTube series has been pushing out faux nature documentary videos about Pokémon for several years, the channel behind it just got hit with a bunch of copyright strikes from Nintendo.
In a video posted to an alternate channel, Elious says that Nintendo of America suddenly issued numerous strikes on large batches of his videos, all in the space of 12 hours. At the time he posted the video, a total of 20 videos had been caught up in four separate copyright strikes, which together cover all of those videos. Under YouTube’s three-strikes policy, this means his channel is now pending deletion by YouTube and will disappear in seven days.
Elious says the strikes claim his channel is inappropriately using “content used in Pokémon video games including audiovisual works, characters, and imagery.” Elious’ videos consist of original 3D animation of various Pokémon in the “wild,” with a David Attenborough–style narration sharing various facts about Pokémon like Magikarp, Squirtle, Magnemite, Snom, Mew, Charizard, and more. He has been producing these videos on this channel since as far back as 2023 without issue, and claims in his video that the only actual content he took directly from the games was “tiny sprite roars” that last less than three seconds, adding that numerous other Pokémon creators on YouTube, as well as AI-produced channels mimicking his own, use images or footage directly from the games with no issue.
So, why now? There’s no way to know for sure, but Elious did recently launch a Patreon account so that fans could compensate him for the series. The general speculation is that once Elious attempted to make any kind of money from his video series, that spurred Nintendo to send the copyright strikes. And for many people, that will make complete sense.
I don’t understand that point of view. Regardless of any money changing hands, this still doesn’t represent any threat or harm to Nintendo or the Pokémon franchise. If anything, fun little fan videos like this only propel interest in the product. They represent free engagement lures for fans of Pokémon. Why in the world is copyright striking this channel to hell a better option than working out a free or cheap licensing arrangement with Elious, so that he can keep producing the series and Nintendo can reap some of the benefit?
Or, hell, Nintendo could have tried to have a conversation with Elious, at least.
Elious continues by saying that he isn’t opposed to just deleting all the Pokémon videos if Nintendo of America asks, but he wishes he could keep his nearly 100,000 subscribers so he can keep making videos of other things, as he has on the channel in the past.
“I can’t really fight this,” Elious says. “It all seems legitimate, it does seem to come from the actual, real Nintendo of America. I can’t fight this. I don’t…I don’t know what to do about it because it’ll remove everything. I’m downloading stuff, of course, I have like, all the videos myself. But I’ll never be able to post them again, and I’ll never be able to use this channel again. Almost 100,000 subscribers over three years of making these animations and it’s all going to be gone in seven days.”
It’s simply too bad that Nintendo would rather worship at the altar of intellectual property than get creative with how it can support its fans. Thanks to IP maximalist thought, here is just a little more fun that Nintendo has flushed down the toilet.
There’s a famous Mitchell & Webb sketch where two SS officers, mid-conversation on the Eastern Front, suddenly notice something troubling about their uniforms. “Hans,” one asks, peering at his cap, “are we the baddies?” The skulls had been there the whole time. The skulls are kind of a giveaway. But it took a while for the question to surface. You’ve probably seen it.
I thought about that sketch reading Wired’s reporting on the internal turmoil at Palantir, where both current and former employees are starting to ask that question of their own work:
Around that time, two former employees reconnected by phone. Right as they picked up the call, one of them asked, “Are you tracking Palantir’s descent into fascism?”
“That was their greeting,” the other former employee says. “There’s this feeling not of ‘Oh, this is unpopular and hard,’ but ‘This feels wrong.’”
Two weeks ago, we wrote about Palantir going mask-off for fascism, specifically about CEO Alex Karp’s company posting a 22-point manifesto that included some genuinely ugly stuff about how “certain cultures” are “regressive and harmful” and how pluralism is a “shallow temptation.” I argued that this kind of public ideological positioning was both morally bankrupt and strategically suicidal. The moral bankruptcy part should be obvious (if it’s not, go do some soul-searching). But publicly staking out that position at a time when American-style fascism is historically unpopular basically everywhere, including within the US, just seems like betting on the losing team when it’s clear they have no chance of coming back to win.
That’s quite a decision for the company, given that Palantir is supposed to be in the business of using technology to predict how strategic decisions will play out.
It turns out a lot of Palantir employees agree that maybe it’s not so good for them or the company to be picking the morally bankrupt, historically unpopular position. Better late than never, I suppose.
There’s a “well, duh” element to all of this that we shouldn’t gloss over. Palantir has been Palantir for two decades. The company is named after the corrupting all-seeing surveillance orb from Lord of the Rings. Its initial venture capital came from the CIA. Peter Thiel co-founded it. The entire pitch has always been mass data aggregation in service of authoritarian state power. If you took a job there at any point in the last twenty years, the skulls were sitting right on top of the cap, plainly visible, and people were pointing at them constantly.
So in one sense, the current employee soul-searching is just the sort of late-to-the-party realization that deserves to be called out. Where, exactly, did people think this was going?
But it’s also a sign of how far Karp is willing to go — stripping away the plausible deniability that let employees tell themselves they were just building tools, not endorsing a worldview. Palantir didn’t just keep doing what it had always been doing. Karp made a deliberate choice to escalate, both in what the company is building and in how openly it’s announcing what it’s building it for.
The Wired piece documents the various moments where things began shifting internally: the deepening ICE deportation infrastructure work, the death of nurse Alex Pretti during anti-ICE protests, the questions about whether Palantir’s Maven targeting system was used in the missile strike on an Iranian elementary school that killed more than 120 children. And then, to top it all off, Karp published a manifesto telling employees and customers and the entire world that the company now believes pluralism itself is a civilizational error.
The most damning revelation in the Wired piece comes from a Palantir privacy and civil liberties (PCL) employee in a recorded internal AMA — and it shows that the entire concept of a PCL team at the company is window dressing, there so Karp and others in management can pretend they’re not quite as authoritarian as they actually are.
At least one of these AMAs was organized independently of PCL leadership by two team leads, including one who worked directly on the ICE contract for a period of time. “This was very rogue,” a PCL employee who worked on the ICE contract said in a February AMA, a recording of which was obtained by WIRED. “Courtney [Bowman, head of the privacy and civil liberties team] doesn’t know that I’m spending three hours this week talking to IMPLs [Palantir terminology for its client-facing product teams], but I think this is the only real way to start going in the right direction.”
Throughout the lengthy call, employees working on a variety of Palantir’s defense projects posed hard questions. Could ICE agents delete audit logs in Palantir’s software? Could agents create harmful workflows on their own without the company’s help? What is the most malicious thing that could come out of this work?
Answering these questions, the PCL employee who worked on the ICE contract said that “a sufficiently malicious customer is, like, basically impossible to prevent at the moment” and could only be controlled through “auditing to prove what happened” and legal action after the fact if the customer breached the company’s contract.
And then the big (if unsurprising) reveal that Karp doesn’t seem to think much of civil liberties, and that these employees are basically reduced to seeing if they can distract the dictator-wannabe at the top (does this sound familiar?):
At one point during the call, one of the employees tried to level with the group, explaining that Palantir’s work with ICE was a priority for Karp and something that likely wouldn’t change any time soon.
“Karp really wants to do this and continuously wants this,” they said. “We’re largely at the role of trying to give him suggestions and trying to redirect him, but it was largely unsuccessful and we seem to be on a very sharp path of continuing to expand this workflow.”
So the internal civil liberties function has been reduced to politely suggesting that maybe the CEO not do the worst version of the worst thing, and getting overruled.
Cool cool cool.
What seems to have finally broken the dam, though, was not the deportation work or the missile strike — it was the manifesto. As one employee posted internally after the company published its 22-point screed:
“I’m curious why this had to be posted. Especially on the company account. On the practical level every time stuff like that gets posted it gets harder for us to sell the software outside of the US (for sure in the current political climate), and I doubt we need this in the US?” wrote one frustrated employee. The message received more than 50 “+1” emojis.
The actual harms — the deportations, the surveillance infrastructure, the dead children — generated internal Slack threads and uncomfortable AMAs. The branding embarrassment generated more of a revolt.
The skulls were always on that hat. People only really started pointing at them when management decided to put the hats on billboards.
Two things are worth calling out separately:
Workers who take jobs at companies like Palantir have an obligation to think harder about what they’re building before they build it. The “I was just writing code” defense has always been weak, and it gets weaker the more obvious the application becomes. We wrote recently about how the “bring your whole self to work” era has pretty much ended, and how workers in a tighter labor market are increasingly going to find themselves at companies whose values they can’t fully stomach. That’s part of what’s happening in a labor market where management has way more leverage. But it’s also true that some companies have been waving very large red flags for a very long time, and the “hey, I needed a job” excuse only goes so far when the job is building deportation infrastructure and missile targeting software that ends up blowing up schoolchildren. That shit sticks to people. And it should.
The second is that better-late-than-never still matters. The PCL employees pushing back internally, the Slack threads demanding accountability, the rogue AMAs organized without leadership’s blessing — this is the kind of pressure that has, in the past, gotten Google to drop Project Maven (which the amoral Palantir, naturally, swooped in to take over). Internal dissent is one of the only mechanisms that actually constrains what companies like this do. When employees stop accepting the rationalizations, things change. Sometimes the company changes. Sometimes the employees leave and build something better. Either one is better than just letting things continue as they are.
You can argue that Palantir taking on Project Maven when Google dropped it means that internal protest is fruitless, but that’s simply not true. Internal protest makes it more expensive and difficult for companies to get away with doing bad things. It may not stop them all, but it adds real friction. And if, as now, we start to add some real social baggage for being the software devs who were “just coding for a paycheck,” it can definitely make a bigger difference over time.
Which brings us back to why Karp’s manifesto might end up being a strategic disaster even setting aside the moral question. Palantir’s value proposition has always rested on a kind of plausible deniability: yes, we build surveillance tools, but we have a civil liberties team, we care about safeguards, we’re the responsible adults in the room. The manifesto torched that framing on purpose. And now the company’s own engineers are saying, in writing, that the post is making it harder to sell software, harder to recruit, harder to defend the work to friends and family.
These kinds of things should be costly. That’s how society prevents people from just going along with enabling horrendous human rights violations because the pay and perks are decent.
And at Palantir, the skulls are on the cap. Some people are finally noticing.
Hans, are we the baddies? Yeah. You probably are. So, are you going to keep wearing those skulls?
Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper.
That’s effectively what’s begun happening online in the last few months. The Internet Archive—the world’s largest digital library—has preserved newspapers since it went online in the mid-1990s. The Archive’s mission is to preserve the web and make it accessible to the public. To that end, the organization operates the Wayback Machine, which now contains more than one trillion archived web pages and is used daily by journalists, researchers, and courts.
But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit.
For nearly three decades, historians, journalists, and the public have relied on the Internet Archive to preserve news sites as they appeared online. Those archived pages are often the only reliable record of how stories were originally published. In many cases, articles get edited, changed, or removed—sometimes openly, sometimes not. The Internet Archive often becomes the only source for seeing those changes. When major publishers block the Archive’s crawlers, that historical record starts to disappear.
The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.
Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response. Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn’t start, and didn’t ask for.
If publishers shut the Archive out, they aren’t just limiting bots. They’re erasing the historical record.
Archiving and Search Are Legal
Making material searchable is a well-established fair use. Courts have long recognized it’s often impossible to build a searchable index without making copies of the underlying material. That’s why when Google copied entire books in order to make a searchable database, courts rightly recognized it as a clear fair use. The copying served a transformative purpose: enabling discovery, research, and new insights about creative works.
The Internet Archive operates on the same principle. Just as physical libraries preserve newspapers for future readers, the Archive preserves the web’s historical record. Researchers and journalists rely on it every day. According to Archive staff, Wikipedia alone links to more than 2.6 million news articles preserved at the Archive, spanning 249 languages. And that’s only one example. Countless bloggers, researchers, and reporters depend on the Archive as a stable, authoritative record of what was published online.
The same legal principles that protect search engines must also protect archives and libraries. Even if courts place limits on AI training, the law protecting search and web archiving is already well established.
The Internet Archive has preserved the web’s historical record for nearly thirty years. If major publishers begin blocking that mission, future researchers may find that huge portions of that historical record have simply vanished. There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake.
Last fall, I wrote about how the fear of AI was leading us to wall off the open internet in ways that would hurt everyone. At the time, I was worried about how companies were conflating legitimate concerns about bulk AI training with basic web accessibility. Not surprisingly, the situation has gotten worse. Now major news publishers are actively blocking the Internet Archive—one of the most important cultural preservation projects on the internet—because they’re worried AI companies might use it as a sneaky “backdoor” to access their content.
This is a mistake we’re going to regret for generations.
Nieman Lab reports that The Guardian, The New York Times, and others are now limiting what the Internet Archive can crawl and preserve:
When The Guardian took a look at who was trying to extract its content, access logs revealed that the Internet Archive was a frequent crawler, said Robert Hahn, head of business affairs and licensing. The publisher decided to limit the Internet Archive’s access to published articles, minimizing the chance that AI companies might scrape its content via the nonprofit’s repository of over one trillion webpage snapshots.
Specifically, Hahn said The Guardian has taken steps to exclude itself from the Internet Archive’s APIs and filter out its article pages from the Wayback Machine’s URLs interface. The Guardian’s regional homepages, topic pages, and other landing pages will continue to appear in the Wayback Machine.
The Times has gone even further:
The New York Times confirmed to Nieman Lab that it’s actively “hard blocking” the Internet Archive’s crawlers. At the end of 2025, the Times also added one of those crawlers — archive.org_bot — to its robots.txt file, disallowing access to its content.
“We believe in the value of The New York Times’s human-led journalism and always want to ensure that our IP is being accessed and used lawfully,” said a Times spokesperson. “We are blocking the Internet Archive’s bot from accessing the Times because the Wayback Machine provides unfettered access to Times content — including by AI companies — without authorization.”
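As background on the mechanism at issue here: robots.txt is just a plain-text file of per-crawler rules that well-behaved bots check before fetching anything. A minimal sketch using Python’s standard-library robot-exclusion parser shows how a rule like the one described above works in practice (the file contents below are hypothetical, borrowing only the “archive.org_bot” name from the reporting):

```python
from urllib import robotparser

# Hypothetical robots.txt contents mirroring the kind of rule described
# in the reporting: block the Archive's crawler, allow everyone else.
robots_txt = """
User-agent: archive.org_bot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

article = "https://example.com/2026/01/some-article"
print(rp.can_fetch("archive.org_bot", article))  # False: the Archive's bot is shut out
print(rp.can_fetch("Mozilla/5.0", article))      # True: ordinary readers are unaffected
```

Note that this is purely a convention: nothing technically stops a crawler from ignoring the file, which is why the Times pairs it with the “hard blocking” mentioned above, and why the Internet Archive’s compliance is a matter of good citizenship rather than enforcement.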
I understand the concern here. I really do. News publishers are struggling, and watching AI companies hoover up their content to train models that might then, in some ways, compete with them for readers is genuinely frustrating. I run a publication myself, remember.
But blocking the Internet Archive isn’t going to stop AI training. What it will do is ensure that significant chunks of our journalistic record and historical cultural context simply… disappear.
And that’s bad.
The Internet Archive is the most famous nonprofit digital library, and has been operating for nearly three decades. It isn’t some fly-by-night operation looking to profit off publisher content. It’s trying to preserve the historical record of the internet—which is way more fragile than most people comprehend. When websites disappear—and they disappear constantly—the Wayback Machine is often the only place that content still exists. Researchers, historians, journalists, and ordinary citizens rely on it to understand what actually happened, what was actually said, what the world actually looked like at a given moment.
In a digital era when few things end up printed on paper, the Internet Archive’s efforts to permanently preserve our digital culture are essential infrastructure for anyone who cares about historical memory.
And now we’re telling them they can’t preserve the work of our most trusted publications.
Think about what this could mean in practice. Future historians trying to understand 2025 will have access to archived versions of random blogs, sketchy content farms, and conspiracy sites—but not The New York Times. Not The Guardian. Not the publications that we consider the most reliable record of what’s happening in the world. We’re creating a historical record that’s systematically biased against quality journalism.
Yes, I’m sure some will argue that the NY Times and The Guardian will never go away. Tell that to the readers of the Rocky Mountain News, which published for 150 years before shutting down in 2009, or to the 2,100+ newspapers that have closed since 2004. Institutions—even big, prominent, established ones—don’t necessarily last.
As one computer scientist quoted in the Nieman piece put it:
“Common Crawl and Internet Archive are widely considered to be the ‘good guys’ and are used by ‘the bad guys’ like OpenAI,” said Michael Nelson, a computer scientist and professor at Old Dominion University. “In everyone’s aversion to not be controlled by LLMs, I think the good guys are collateral damage.”
That’s exactly right. In our rush to punish AI companies, we’re destroying public goods that serve everyone.
The most frustrating bit of all of this: The Guardian admits they haven’t actually documented AI companies scraping their content through the Wayback Machine. This is purely precautionary and theoretical. They’re breaking historical preservation based on a hypothetical threat:
The Guardian hasn’t documented specific instances of its webpages being scraped by AI companies via the Wayback Machine. Instead, it’s taking these measures proactively and is working directly with the Internet Archive to implement the changes.
And, of course, as one of the “good guys” of the internet, the Internet Archive is willing to do exactly what these publishers want. They’ve always been good about removing content or not scraping content that people don’t want in the archive. Sometimes to a fault. But you can never (legitimately) accuse them of malicious archiving (even if music labels and book publishers have).
Either way, we’re sacrificing the historical record not because of proven harm, but because publishers are worried about what might happen. That’s a hell of a tradeoff.
This isn’t even new, of course. Last year, Reddit announced it would block the Internet Archive from archiving its forums—decades of human conversation and cultural history—because Reddit wanted to monetize that content through AI licensing deals. The reasoning was the same: can’t let the Wayback Machine become a backdoor for AI companies to access content Reddit is now selling. But once you start going down that path, it leads to bad places.
The Nieman piece notes that, in the case of USA Today/Gannett, it appears that there was a company-wide decision to tell the Internet Archive to get lost:
In total, 241 news sites from nine countries explicitly disallow at least one out of the four Internet Archive crawling bots.
Most of those sites (87%) are owned by USA Today Co., the largest newspaper conglomerate in the United States formerly known as Gannett. (Gannett sites only make up 18% of Welsh’s original publishers list.) Each Gannett-owned outlet in our dataset disallows the same two bots: “archive.org_bot” and “ia_archiver-web.archive.org”. These bots were added to the robots.txt files of Gannett-owned publications in 2025.
Some Gannett sites have also taken stronger measures to guard their contents from Internet Archive crawlers. URL searches for the Des Moines Register in the Wayback Machine return a message that says, “Sorry. This URL has been excluded from the Wayback Machine.”
A Gannett spokesperson told Nieman Lab that it was about “safeguarding our intellectual property,” but that’s nonsense. The whole point of libraries and archives is to preserve such content, and they’ve always preserved materials that were protected by copyright law. The claim that they have to be blocked to safeguard such content is both technologically and historically illiterate.
And here’s the extra irony: blocking these crawlers may not even serve publishers’ long-term interests. As I noted in my earlier piece, as more search becomes AI-mediated (whether you like it or not), being absent from training datasets increasingly means being absent from results. It’s a bit crazy to think about how much effort publishers put into “search engine optimization” over the years, only to now block the crawlers that feed the systems a growing number of people are using for search. Publishers blocking archival crawlers aren’t just sacrificing the historical record—they may be making themselves invisible in the systems that increasingly determine how people discover content in the first place.
The Internet Archive’s founder, Brewster Kahle, has been trying to sound the alarm:
“If publishers limit libraries, like the Internet Archive, then the public will have less access to the historical record.”
But that warning doesn’t seem to be getting through. The panic about AI has become so intense that people are willing to sacrifice core internet infrastructure to address it.
What makes this particularly frustrating is that the internet’s openness was never supposed to have asterisks. The fundamental promise wasn’t “publish something and it’s accessible to all, except for technologies we decide we don’t like.” It was just… open. You put something on the public web, people can access it. That simplicity is what made the web transformative.
Now we’re carving out exceptions based on who might access content and what they might do with it. And once you start making those exceptions, where do they end? If the Internet Archive can be blocked because AI companies might use it, what about research databases? What about accessibility tools that help visually impaired users? What about the next technology we haven’t invented yet?
This is a real concern. People say “oh well, blocking machines is different from blocking humans,” but that’s exactly why I mention assistive tech for the visually impaired. Machines accessing content are frequently tools that help humans—including me. I use an AI tool to help fact check my articles, and part of that process involves feeding it the source links. But increasingly, the tool tells me it can’t access those articles to verify whether my coverage accurately reflects them.
I don’t have a clean answer here. Publishers genuinely need to find sustainable business models, and watching their work get ingested by AI systems without compensation is a legitimate grievance—especially when you see how much traffic some of these (usually less scrupulous) crawlers dump on sites. But the solution can’t be to break the historical record of the internet. It can’t be to ensure that our most trusted sources of information are the ones that disappear from archives while the least trustworthy ones remain.
We need to find ways to address AI training concerns that don’t require us to abandon the principle of an open, preservable web. Because right now, we’re building a future where historians, researchers, and citizens can’t access the journalism that documented our era. And that’s not a tradeoff any of us should be comfortable with.
Walled Culture has written a number of times about the true fans approach – the idea that creators can be supported directly and effectively by the people who love their work. As Walled Culture the book explains (available as a free ebook), one of the earliest and best expositions of the concept came from Kevin Kelly, former Executive Editor at Wired magazine, in an essay he wrote originally in 2008. The true fans idea is sometimes dismissed as simply selling branded t-shirts to supporters. That may have been true decades ago, but things have moved on. For example, Universal Music Group has recently opened retail locations that cater specifically for true fans. In addition to shops in Tokyo and Madrid, there are new outlets in New York and London. Here’s what the latter will offer, as reported by Music Business Worldwide:
Located in Camden Market, the London-based space will “serve as a creative hub where music, fashion, and design collide,” UMG said.
The announcement added that the shop was “designed to capture Camden’s rebellious spirit and deep musical roots”.
The store will feature exclusive artist collections, immersive installations, and live performances, along with a Vinyl Lounge, DJ booth, and recording studio-inspired Sound Room that “allows fans to experience music like never before”.
That is a fairly conventional extension of the “selling branded t-shirts to supporters” idea. A post on the Midia Research blog points out a more radical development in the true fans space involving the latest generative AI technology:
AI is best considered as an accelerant rather than something entirely new, intensifying pre-existing trends. AI music absolutely fits this trend. Over the course of the last decade – including a super-charged COVID bump – accessible music tech has enabled ever-more people to become music creators. AI simply lowered the barriers to entry even further. The debate over whether a text prompt constitutes creativity will continue to run (just like the same debate still runs for sampling), but what is clear is that more people are now making music because of AI.
Thanks to genAI, true fans are not limited to a passive role. They can actively participate in the artistic ecosystem brought into being by their musical heroes, through the creation of new works based on and extending the originals they love. The fanfic world has been doing this for many years, so it is no surprise to find the use of generative AI even more advanced there than in the world of music. For example, the DreamGen site lists no fewer than nine “AI fanfic generators”, including its own. It offers a good description of how these systems work:
1. You give it a prompt: This could be something like “Harry Potter and Hermione go on a space adventure” or “Naruto meets Spider-Man in New York.”
2. The AI takes over: It uses its knowledge of language and storytelling to write a story based on your idea. It fills in the details, such as dialogue, action, emotions, and plot twists.
3. You can guide it: Want more romance? More drama? A surprise ending? You can tweak the prompt or add instructions, and the AI will adjust the story.
4. You get a full fanfic: Some tools write it all at once, others let you build it paragraph by paragraph so you can shape the story as it goes.
As that indicates, the new AI-based fanfic generators are so easy to use that anyone can use them. The only limits are the user’s imagination and the ability to put it into words. That’s an incredible democratization of creativity that takes the idea of participatory fandom to the next level. And, of course, it can be applied in other domains too, such as “fan art”, which Wikipedia defines as follows:
Fan art or fanart is artwork created by fans of a work of fiction or celebrity depicting events, characters, or other aspects of the work. As fan labor, fan art refers to artworks that are not created, commissioned, nor endorsed by the creators of the work from which the fan art derives.
As with other uses of genAI, this raises questions of copyright, some of which have already found their way to court. Perhaps surprisingly, Disney has just announced its embrace of this use of AI by fans, in a partnership with OpenAI:
The Walt Disney Company and OpenAI have reached an agreement for Disney to become the first major content licensing partner on Sora, OpenAI’s short-form generative AI video platform, bringing these leaders in creativity and innovation together to unlock new possibilities in imaginative storytelling.
As part of this new, three-year licensing agreement, Sora will be able to generate short, user-prompted social videos that can be viewed and shared by fans, drawing from a set of more than 200 animated, masked and creature characters from Disney, Marvel, Pixar and Star Wars, including costumes, props, vehicles, and iconic environments. In addition, ChatGPT Images will be able to turn a few words by the user into fully generated images in seconds, drawing from the same intellectual property. The agreement does not include any talent likenesses or voices.
The deal also includes a billion-dollar investment by Disney in OpenAI, as well as the following:
OpenAI and Disney will collaborate to utilize OpenAI’s models to power new experiences for Disney+ subscribers, furthering innovative and creative ways to connect with Disney’s stories and characters.
Presumably, Disney hopes to gain more Disney+ subscribers and drive more revenues with these short-form, fan-generated videos, plus whatever “creative ways” of using AI that it comes up with. OpenAI, meanwhile, gains some handy investment, and a showcase for its Sora genAI video platform.
Although this deal is a welcome sign that some major copyright companies are starting to think imaginatively and positively about genAI, and how it can actually boost profits, the new service will doubtless be rather limited, not least in terms of what kind of videos can be generated. The press release emphasises:
OpenAI and Disney have affirmed a shared commitment to maintaining robust controls to prevent the generation of illegal or harmful content, to respect the rights of content owners in relation to the outputs of models, and to respect the rights of individuals to appropriately control the use of their voice and likeness.
That means that there will always be room for edgier, smaller sites producing fanfic, fan art and fan videos that don’t worry about things like good taste or copyright. As more fans discover the delights of building on and extending the creative ideas of their idols in novel ways using genAI, we can expect a corresponding rise in the number of legal actions trying to stop them doing so.
Americans are not peasants. We are citizens of a republic founded on the revolutionary proposition that ordinary people can govern themselves. This isn’t poetry or aspiration—it’s the foundational premise of the American project. And right now, a faction of tech oligarchs is betting everything on proving that premise wrong.
They want to replace “We the People” with “We the Users.”
When Peter Thiel writes that democracy and freedom are incompatible, he’s not making a philosophical observation. He’s stating a preference. When Elon Musk guts federal agencies while posting American flags, he’s not reforming government. He’s replacing citizenship with administration. When Silicon Valley oligarchs speak about “optimization” and “efficiency,” they’re not talking about improving systems that serve citizens. They’re talking about managing peasants.
Because that’s what they think we are. Peasants. Masses incapable of self-governance. Users to be monetized. Workers to be replaced. Voters to be manipulated through algorithmic feeds designed to exploit our psychological vulnerabilities. Populations requiring management by those with superior intelligence and technological sophistication.
You see this in your daily life. An algorithm decides what news you see, not your own judgment about what matters. Your feed is curated by systems optimized for engagement rather than truth, designed to keep you scrolling rather than thinking. Your attention becomes their commodity. Your consciousness becomes their resource. Your capacity for independent judgment gets systematically eroded by platforms that treat you as a user to be optimized rather than a citizen capable of self-governance.
This represents the complete inversion of the American founding premise. The revolutionary generation staked everything on a radical proposition: that ordinary people could govern themselves, that citizenship was possible, that republican self-governance was superior to rule by kings, aristocrats, or anyone claiming the right to govern based on superior status, breeding, or intelligence.
“We hold these truths to be self-evident” means exactly what it says—not that kings acknowledge these truths, not that the intelligent agree with them, not that the powerful grant them, but that citizens assert them as the foundation of legitimate government. Self-evident to whom? To us. To the people who govern ourselves through collective deliberation rather than submitting to administration by our betters.
Lincoln understood what was at stake when he stood at Gettysburg and declared that the war would determine whether “government of the people, by the people, for the people, shall not perish from the earth.” Not government for the people by superior managers. Not government of the people by technological elites. But government by the people themselves—the radical proposition that citizens possess the capacity to govern rather than requiring governance by those who claim superior qualification.
The distinction between citizens and peasants isn’t semantic. It’s ontological. Peasants exist to be governed. Their role is obedience, tribute, and acceptance of decisions made by those qualified to make them. Citizens govern themselves. Their role is participation, judgment, and shared responsibility for collective outcomes.
We are not peasants. And yet every assault on American institutions over the past several years represents the systematic effort to transform us into exactly that.
The systematic elimination of civil service protections doesn’t improve government efficiency—it replaces professional judgment answerable to law with personal loyalty answerable to power. The attacks on independent agencies don’t reduce bureaucratic waste—they eliminate the institutional mechanisms through which citizens check oligarchic extraction. The celebration of “disruption” doesn’t foster innovation—it destroys the stable frameworks within which genuine self-governance becomes possible.
DOGE isn’t a government efficiency project. It’s the systematic replacement of citizenship with administration, democratic accountability with optimization metrics, collective self-governance with management by superior intelligence. When Elon Musk eliminates entire agencies staffed by career professionals and replaces them with political loyalists, he’s not improving government. He’s implementing his explicit belief that most people are incapable of meaningful judgment and require direction from those smart enough to know better.
This is why the flag-posting rings so hollow. Genuine patriotism implies reciprocal obligation—that loving your country means contributing to its maintenance as a collective project, that national pride entails responsibility for national institutions, that citizenship is something you participate in rather than perform. What the tech oligarchs demonstrate is nationalism without reciprocity: they want the aesthetic of belonging to a great nation while refusing every actual obligation that citizenship requires.
They love America as a brand, as an identity marker, as a territory they control. But they hate America as an actual collective project requiring their submission to democratic judgment, their participation in shared governance, their acceptance that other citizens possess equal standing to challenge their preferences and constrain their power.
Even Steve Bannon—nationalist populist, former Trump strategist, authoritarian movement builder—recognizes what the Silicon Valley faction represents. In a rare point of agreement across factional lines, Bannon has observed that the tech oligarchs aren’t patriots but post-national extractors using patriotic language to disguise systematic looting. When even authoritarian allies can see that you’re not engaged in national renewal but oligarchic capture, the performance has become too obvious to maintain.
Americans are not peasants. We are citizens of a republic founded on the revolutionary proposition that self-governance is possible, that ordinary people possess the capacity for judgment, that democratic deliberation beats optimization by superior intelligence. Every accommodation to oligarchic extraction, every acceptance of their framing, every failure to defend citizenship against those who would reduce us to subjects in their optimization experiments—all of it betrays the fundamental premise that makes America America.
We deserve better than this because citizenship is the foundation of what we are. Not subjects. Not users. Not populations to be managed. Citizens.
And citizens don’t wait for permission to defend what we are. We govern, or we lose everything that makes us who we are. The choice is here. The choice is now. History will not forgive us if we forget what we are—and surrender without a fight to those who would reduce us to peasants in a land our ancestors bled to make free.
We are not peasants. We are citizens. And citizenship is not a gift granted by superior intelligence. It is a responsibility we claim, a burden we carry, a right we defend—or lose forever to those who never believed we deserved it in the first place.
Mike Brock is a former tech exec who was on the leadership team at Block. Originally published at his Notes From the Circus.
When people use the term “Orwellian,” it’s not a good sign.
It usually characterizes an action, an individual or a society that is suppressing freedom, particularly the freedom of expression. It can also describe something perverted by tyrannical power.
It’s a term used primarily to describe the present, but whose implications inevitably connect to both the future and the past.
This ambition was manifested in efforts by the Department of Education to eradicate a “DEI agenda” from school curricula. It also included a high-profile assault on what detractors saw as “woke” universities, which culminated in Columbia University’s agreement to submit to a review of the faculty and curriculum of its Middle Eastern Studies department, with the aim of eradicating alleged pro-Palestinian bias.
On Aug. 12, 2025, the Smithsonian’s director, Lonnie Bunch III, received a letter from the White House announcing its intent to carry out a systematic review of the institution’s holdings and exhibitions in advance of the nation’s 250th anniversary in 2026.
On Aug. 19, 2025, Trump escalated his attack on the Smithsonian. “The Smithsonian is OUT OF CONTROL, where everything discussed is how horrible our Country is, how bad Slavery was…” he wrote in a Truth Social post. “Nothing about Success, nothing about Brightness, nothing about the Future. We are not going to allow this to happen.”
Such ambitions may sound benign, but they are deeply Orwellian. Here’s how.
But while Orwell believed in the existence of an objective truth about history, he did not necessarily believe that truth would prevail.
Truth, Orwell recognized, was best served by free speech and dialogue. Yet absolute power, Orwell appreciated, allowed those who possessed it to silence or censor opposing narratives, quashing the possibility of productive dialogue about history that could ultimately allow truth to come out.
As Orwell wrote in “1984,” his final, dystopian novel, “Who controls the past controls the future. Who controls the present controls the past.”
Historian Malgorzata Rymsza-Pawlowska has written about America’s bicentennial celebrations that took place in 1976. Then, she says, “Americans across the nation helped contribute to a pluralistic and inclusive commemoration … using it as a moment to question who had been left out of the legacies of the American Revolution, to tell more inclusive stories about the history of the United States.”
This was an example of the kind of productive dialogue encouraged in a free society. “By contrast,” writes Rymsza-Pawlowska, “the 250th is shaping up to be a top-down affair that advances a relatively narrow and celebratory idea of Americanism.” The newly announced Smithsonian review aims to purge counternarratives that challenge that celebratory idea.
The Ministry of Truth
The desire to eradicate counternarratives drives Winston Smith’s job at the ironically named Ministry of Truth in “1984.”
The novel is set in Oceania, a geographical entity covering North America and the British Isles that also governs much of the Global South.
Oceania is an absolute tyranny governed by Big Brother, the leader of a political party whose only goal is the perpetuation of its own power. In this society, truth is what Big Brother and the party say it is.
The regime imposes near total censorship so that not only dissident speech but subversive private reflection, or “thought crime,” is viciously prosecuted. In this way, it controls the present.
But it also controls the past. As the party’s protean policy evolves, Smith and his colleagues are tasked with systematically destroying any historical records that conflict with the current version of history. Smith literally disposes of artifacts of inexpedient history by throwing them down “memory holes,” where they are “wiped … out of existence and out of memory.”
At a key point in the novel, Smith recalls briefly holding on to a newspaper clipping that proved that an enemy of the regime had not actually committed the crime he had been accused of. Smith recognizes the power over the regime that this clipping gives him, but he simultaneously fears that power will make him a target. In the end, fear of retaliation leads him to drop the slip of newsprint down a memory hole.
The contemporary U.S. is a far cry from Orwell’s Oceania. Yet the Trump administration is doing its best to exert control over the present and the past.
As part of efforts to purge references to gay people, U.S. Defense Secretary Pete Hegseth has ordered the removal of gay rights advocate Harvey Milk’s name from a Navy ship. (Screenshot: Military.com)
Other erasures have included the deletion of content on government sites related to the life of Harriet Tubman, the Maryland woman who escaped slavery and then played a pioneering role as a conductor of the Underground Railroad, helping enslaved people escape to freedom.
Responding to questions, the Smithsonian stated that the placard’s removal was not in response to political pressure: “The placard, which was meant to be a temporary addition to a 25-year-old exhibition, did not meet the museum’s standards in appearance, location, timeline, and overall presentation.”
Repressing thought
Orwell’s “1984” ends with an appendix on the history of “Newspeak,” Oceania’s official language, which, while it had not yet superseded “Oldspeak” or standard English, was rapidly gaining ground as both a written and spoken dialect.
According to the appendix, “The purpose of Newspeak was not only to provide a medium of expression for the worldview and mental habits proper to the devotees of [the Party], but to make all other modes of thought impossible.”
Orwell, as so often in his writing, makes the abstract theory concrete: “The word free still existed in Newspeak, but it could only be used in such statements as ‘This dog is free from lice’ or ‘This field is free from weeds.’ … political and intellectual freedom no longer existed even as concepts.”
The goal of this language streamlining was total control over past, present and future.
If it is illegal to even speak of systemic racism, for example, let alone discuss its causes and possible remedies, it constrains the potential for, even prohibits, social change.
It has become a cliché that those who do not understand history are bound to repeat it.
As George Orwell appreciated, the correlate is that social and historical progress require an awareness of, and receptivity to, both historical fact and competing historical narratives.
This is the final piece in a series of posts that explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society. You can read the first, second, third, fourth, fifth, and sixth posts in the series.
As the conversation about AI’s impact on creative industries continues, there’s a common misconception that AI models are “stealing” content by absorbing it for free. But if we take a closer look at how AI training works, it becomes clear that this isn’t the case at all. AI models don’t simply replicate or repackage creative works—they break them down into something much more abstract: tokens. These tokens are tiny, fragmented pieces of data that no longer represent the creative expression of an idea. And here’s where the distinction lies: copyright is meant to protect expression, not individual words, phrases, or patterns that make up those works.
The Lego Analogy: Breaking Down Creative Works into Tokens
Imagine you’re a creator, and your work is like a detailed Lego model of the Star Wars Millennium Falcon. It’s intricate, with every piece perfectly assembled to create something unique and valuable. Now imagine that an AI system comes along—not to take your Millennium Falcon and display it as its own creation, but to break it down into individual Lego blocks. These blocks are then scattered among millions of others from different sources, and the AI uses them to build entirely new structures—things that look nothing like the Millennium Falcon.
In this analogy, the Lego blocks are the tokens that AI models use. These tokens are fragments of data—tiny bits of information stripped of the original context and creative expression. Just like Lego pieces, tokens are abstract and can be recombined in an infinite number of ways to create something entirely new. The AI doesn’t copy your Falcon; it takes the building blocks (tokens) and uses them to create something that’s not a replica of the original but something completely different, like a castle or a spaceship you’ve never seen before.
This is the key distinction: AI models aren’t absorbing entire creative works and reproducing them as their own. They’re learning patterns from vast datasets and using those patterns to generate new content. The tokens no longer reflect the expression of the original work, and thus, they don’t infringe on the creative essence that copyright law is designed to protect.
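To make the token idea concrete, here is a minimal, purely illustrative sketch in Python. It is not how production models tokenize (real systems use subword schemes such as byte-pair encoding over enormous vocabularies), but it shows the basic transformation: text goes in, and what comes out is a sequence of abstract integer IDs from which the original expression is no longer directly legible.

```python
# Toy tokenizer: maps each distinct word to an integer ID.
# Illustrative only -- real AI models use subword tokenization
# (e.g. byte-pair encoding), but the principle is the same:
# creative text is reduced to abstract numeric fragments.

def build_vocab(corpus):
    """Assign each distinct word in the corpus an integer ID."""
    vocab = {}
    for word in corpus.split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenize(text, vocab):
    """Map text to a list of integer token IDs."""
    return [vocab[w] for w in text.split() if w in vocab]

corpus = "the quick brown fox jumps over the lazy dog"
vocab = build_vocab(corpus)

# The sentence's expression disappears; only opaque IDs remain.
print(tokenize("the lazy fox", vocab))  # [0, 6, 3]
```

A model trained on such IDs learns statistical patterns over these fragments, which is why the tokens themselves carry none of the protected creative expression of any single source work.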
Why Recent Content Matters: AI Needs to Reflect Modern Language and Values
There’s another critical point that often gets overlooked: AI models need access to recent, contemporary content to be useful, relevant, and ethical. Let’s imagine for a moment what would happen if AI models were restricted to learning only from public domain works, many of which are decades or even centuries old.
While public domain works are valuable, they often reflect the social norms and biases of their time. If AI models are trained primarily on outdated texts, there’s a serious risk that they could “speak” in a way that’s misogynistic, biased, anti-LGBTQ+, or even outright racist. Many public domain works contain language and ideas that are no longer acceptable in today’s society, and if AI is limited to these sources, it may inadvertently propagate harmful, antiquated views.
To ensure that AI reflects current values, inclusive language, and modern social norms, it needs access to recent content. This means analyzing and learning from today’s books, articles, speeches, and other forms of communication. If creators and copyright holders opt out of allowing their content to be used for AI training, we risk creating models that don’t reflect the diversity, progress, and inclusivity of modern society.
For example, language evolves quickly—just look at the increased use of gender-neutral pronouns or terms like intersectionality in recent years. If AI is cut off from these contemporary linguistic trends, it will struggle to understand and engage with the world as it is today. It would be like asking an AI trained exclusively on Shakespearean English to have a conversation with a 21st-century teenager—it simply wouldn’t work.
Article 4 of the EU Directive: Opting Out of Text and Data Mining
Let’s bring the EU Directive on Copyright in the Digital Single Market (DSM) into the picture. The Directive includes provisions (Article 4) allowing copyright holders to opt out of having their content used in text and data mining (TDM). TDM is crucial for training AI models, as it allows them to analyze and learn from large datasets. The opt-out mechanism gives creators and copyright holders the ability to expressly reserve their works from being used for TDM.
However, it’s important to remember that this opt-out applies to all AI models, not just generative AI systems like ChatGPT. This means that by opting out in a broad, blanket manner, creators could inadvertently limit the potential of AI models that have nothing to do with creative industries—tools that are critical for advancements in healthcare, education, and even in day-to-day conveniences that many of us benefit from.
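In practice, one common machine-readable way publishers signal such a reservation today is through robots.txt directives aimed at known AI crawlers. This is a convention adopted by crawler operators, not a mechanism defined by the Directive itself, and its coverage varies; it is shown here only as an illustration of what a blanket opt-out can look like:

```
# Example robots.txt entries used to signal an AI-training opt-out.
# GPTBot is OpenAI's training crawler; CCBot is Common Crawl's.
# Blocking them blocks ALL downstream uses of that data, not just
# generative AI in the creative sphere.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

Note how blunt the instrument is: a single `Disallow: /` withholds the site from every model trained on that crawl, whatever its purpose.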
The Risk of a Data Winter: Why Broad Opt-Outs Could Harm Innovation
What happens if creators and copyright holders across Europe start opting out of TDM on a large scale? The answer is something AI researchers dread: a data winter. Without access to a diverse and rich array of data, AI models will struggle to evolve. This could slow innovation not just in the creative industries, but across the entire economy.
AI needs high-quality data to function properly. The principle of Garbage In, Garbage Out applies here: if AI models are starved of diverse input, their output will be flawed, biased, and of lower quality. And while this may not seem like an issue for some industries, it has a ripple effect. Every AI tool we rely on—from smart assistants to medical research applications—depends on robust training data. Restricting access to this data doesn’t just hinder progress in AI innovation; it stifles public interest tools that have far-reaching benefits for society.
Think about it: many creators themselves probably use AI-driven tools in their daily lives—whether it’s for streamlining workflows, generating new ideas, or even just organizing information. By opting out of TDM, they could inadvertently be damaging the very tools that enhance their own creative processes.
The Way Forward: Balance Between Protection and Innovation
While copyright is crucial for protecting creators and ensuring fair compensation, it’s equally important not to over-regulate in a way that stifles innovation. AI models aren’t absorbing entire works for free; they’re breaking them down into unrecognizable tokens that enable transformative uses. Rather than opting out of TDM as a knee-jerk reaction, creators should consider the long-term consequences of limiting AI’s potential to innovate and enhance their own industries.
A balance needs to be struck. Copyright protection should ensure that creators are fairly compensated, but it shouldn’t be wielded as a tool to restrict the very data that drives AI innovation. Creators and policymakers must recognize that AI isn’t the enemy—it’s a collaborator. And if we’re not careful, we might find ourselves facing a data winter, where the tools we rely on for both convenience and advancement are weakened due to short-sighted decisions.
Caroline De Cock is a communications and policy expert, author, and entrepreneur. She serves as Managing Director of N-square Consulting and Square-up Agency, and Head of Research at Information Labs. Caroline specializes in digital rights, policy advocacy, and strategic innovation, driven by her commitment to fostering global connectivity and positive change.
This series of posts explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society. You can read the first post and second post in the series.
In policy circles, creative industries have become the loudest voices in copyright debates. The problem? They are often mistaken for representing creativity itself, or even protecting individual creators and culture. But let’s get one thing straight: creativity is very different from the creative industries—as different as music is from the music business. Think The Beatles vs. Bad Boy Records: not the same vibe!
The creative industries are an economic concept, an invention of the British government in 1997 under Tony Blair. This was when the Creative Industries Task Force was born, bringing together sectors like advertising, design, fashion, film, music, and software—all under one umbrella. We’re talking about a vast range, from opera and ballet to architecture, advertising and video games. This is way beyond what most people think of as “culture.” And let’s not even talk about the hodgepodge concept of IPR-intensive industries waved around by the European Union Intellectual Property Office (EUIPO) and the European Patent Office (EPO), which covers pretty much any company that has filed patents or geographical indications, from McDonald’s to the wonderful vendors of Prosciutto di Parma.
Who’s Who in the Creative Industry?
When talking about the creative industries, it’s important to differentiate between the players involved. There are rightsholders, who may be those producing and distributing content, or sometimes simply financial investors—think Scooter Braun vs. Taylor Swift. Then there are the creators themselves, who often don’t even own the rights to what they’ve created. And of course, there are all the other people who work in the industry—from “creatives” to those in support roles, just like in any other industry.
This complexity becomes crucial when considering AI. As we’ve seen with the Hollywood writers’ strike, the creative industry is already embracing AI, viewing it as either a new creative tool or a cost-cutting measure that could replace human jobs. That’s the “industries” part of the label—a business-driven focus that doesn’t necessarily align with the interests of individual creators or the broader value of creativity.
AI, Authenticity, and the Human Touch
The real challenges posed by AI aren’t limited to copyright or creative rights—they’re about the future of work and how we value human contribution in an automated world. To understand the human creator’s role, let’s take a look at the evolution of electronic dance music (EDM). As Douglas Rushkoff describes, EDM started with anonymous techno raves, with the DJ barely visible or hidden entirely. Over time, the DJ became the centerpiece, part of the spectacle—because humans relate to humans. This dynamic isn’t going to change with AI.
Or, as Dan Graham, owner of Gothic Storm Limited and Founder of the Library of the Human Soul, puts it: “We’re suckers for a backstory and authenticity. We hate knock-offs, even if they’re perfect. Fake Rolexes, forged artwork—it doesn’t matter how good it is, the real thing is always worth more, because we care.” AI might make flawless imitations, but the value of human creativity, authenticity, and connection remains unmatched.
So, while AI will certainly change the creative industries, it won’t replace the core of creativity—the human spirit, storytelling, and the authenticity we all crave as fans.
Caroline De Cock is a communications and policy expert, author, and entrepreneur. She serves as Managing Director of N-square Consulting and Square-up Agency, and Head of Research at Information Labs. Caroline specializes in digital rights, policy advocacy, and strategic innovation, driven by her commitment to fostering global connectivity and positive change.
This series of posts explores how we can rethink the intersection of AI, creativity, and policy. From examining outdated regulatory metaphors to questioning copyright norms and highlighting the risks of stifling innovation, each post addresses a different piece of the AI puzzle. Together, they advocate for a more balanced, forward-thinking approach that acknowledges the potential of technological evolution while safeguarding the rights of creators and ensuring AI’s development serves the broader interests of society. You can read the first post in the series here.
Let me show my age here—does anyone remember the movie Fame?
There’s a scene where Bruno Martelli, a confident student, declares, “Violins are on the way out, you don’t need strings today.” He insists that with “a keyboard and some oscillators,” orchestras have become obsolete. The teacher’s response is simple yet powerful: “The music survived.”
This scene perfectly captures a recurring theme in the history of creativity. Every time a new technology comes along, people predict the end of traditional art forms. Yet, time and again, creativity not only survives—it thrives.
Technology: A Tool for Growth, Not a Threat
Take the Gutenberg Press. When it was invented, many feared that the painstaking art of manuscript copying by monks would vanish forever. And yes, the printing press transformed how books were produced, but it didn’t destroy writing or creativity. Instead, it democratised knowledge, making literature accessible to a broader audience and sparking an explosion of new ideas and artistic expression.
Or consider photography. When the camera was invented, people thought painters were doomed. Why spend hours painting when a camera could capture the same moment in an instant? But painting didn’t vanish. Instead, it evolved—movements like Impressionism and Cubism flourished, as artists found new ways to express themselves beyond mere replication of reality.
Film didn’t kill theatre, and electric guitars didn’t kill acoustic ones. These technologies expanded the toolkit available to creators, offering new ways to explore their craft. In fact, new technologies have even created entirely new art forms. Just look at the video games industry—within fifteen years of its inception, it surpassed the century-old film industry in value, creating fresh opportunities for storytelling, artistry, and engagement.
AI: Expanding the Boundaries of Creativity
The same holds true for AI. Just like violins didn’t disappear when synthesizers came along, AI won’t replace human creativity. It will push boundaries, open up new possibilities, and allow artists and innovators to do things we couldn’t have imagined even a decade ago. But the essence of creativity—the spark of human imagination—remains indispensable.
Instead of fearing AI, we should embrace it as the latest in a long line of tools that expand human potential. AI will help creative industries thrive by providing new ways to create, innovate, and engage audiences. But the true magic—the core of creativity—will always come from the human mind.
The Music Will Play On
The lesson here is simple: creativity will survive. It always has. Every time a new tool, technology, or innovation emerges, there’s a tendency to think it spells the end for what came before. But history tells a different story—one of adaptation, evolution, and growth.
And as always, the creative industries will continue to thrive, building on the spark of human ingenuity.
Caroline De Cock is a communications and policy expert, author, and entrepreneur. She serves as Managing Director of N-square Consulting and Square-up Agency, and Head of Research at Information Labs. Caroline specializes in digital rights, policy advocacy, and strategic innovation, driven by her commitment to fostering global connectivity and positive change.