While we’ve taken issue with his approach to copyright laws and enforcement in the past, there is no doubting that Steven Soderbergh is a filmmaking legend. This is a man who directed films like Traffic and Ocean’s 11. He talks about, and cares about, the art of filmmaking. And he’s apparently beginning to use AI in some limited ways.
You really have to pay attention to Soderbergh’s specific comments on how he’s using it, because I would argue that it’s exactly the right artistic approach to the conversation: limited, targeted uses that help achieve the artist’s vision rather than replace everything in a film with garbage slop. Interestingly, articles like this one from Salon still frame all of this as some betrayal of art on Soderbergh’s part. Here’s how Soderbergh describes how he’s using AI as part of an upcoming film about John Lennon and Yoko Ono.
“AI has been helpful in creating thematically surreal images that occupy a dream space rather than a literal space,” Soderbergh said. “And it’s been really fun because you need a Ph.D. in literature to tell it what to do.” Soderbergh relented that generative programs require “very close human supervision,” before going on to admit that he’s also using “a lot of AI” for an upcoming film about the Spanish-American War, to generate images of archaic warships and God knows what else.
I very much understand Soderbergh’s description of how he’s using this tool for his films, but I have no idea what the hell the commentary from Salon around the quote is on about. “And God knows what else” is perhaps the silliest comment in the post, because that statement only works if Soderbergh himself happens to be God.
I don’t believe he is, to be clear. And when an artist like Soderbergh finds the tool useful in achieving his overall artistic vision, that’s something we should be paying attention to, not dismissing out of hand. The Salon piece notes that Soderbergh has routinely been a director who has embraced new technology before launching into this diatribe.
But just because Soderbergh jumping at AI could be seen from a mile away doesn’t make it any less disappointing, nor does it excuse his reluctance to thoughtfully engage with others’ criticisms about the technology. If “The Christophers” is to be believed, art that tries to imitate a certain style is little more than hollow, emotionless posturing. Generative AI is the same: mere mimicry, devoid of the humanity that makes art . . . well, art. And by being so willfully averse to acknowledging the ways AI and art conflict — not to mention its ramifications for others in his industry — Soderbergh’s take on an artist losing his touch in “The Christophers” is disappointingly apt.
Of course the art that AI “creates” is mimicry and devoid of humanity. That’s definitionally how the tool works. And anyone who thinks they’re going to rely on an AI tool to “create art” is on a fool’s errand. It simply won’t work because it’s not designed to work that way. Instead, it’s a tool to get you some components of what you need to create an overall artistic vision, which is still led by a very human artist. Will there be work done by an AI on the margins of filmmaking that would normally have been done by paid workers in the industry? Perhaps. Likely, even. But will the limited use of these tools also lower the barrier to entry, in terms of both the skill set and the budget needed to produce films, thereby creating even more films overall? I’m struggling to see how that would not be the case.
And at the end of the day, there’s still an artist calling the shots. Perhaps fewer artists overall involved in a single movie, but the limited use of AI tools doesn’t somehow suck the entire soul from a film any more than the ease of digital editing over physical film does. And just as a movie that is almost nothing other than pretty CGI graphics, but which otherwise sucks, will fail, so will lazy people trying to create entire films with AI. And fail hard.
Say it with me now: there is more nuance to this conversation than the hardliners and evangelists are bothering to acknowledge.
In a follow-up chat with Variety, Soderbergh expanded on his initial comments about using AI in future films. “I’m just not threatened by it . . . Ten years ago, I would have needed to engage a visual effects house at an unbelievable cost to come up with this stuff,” he said. “No longer. My job is to deliver a good movie, period. And this tool showed up at a moment when I needed it. I don’t think it’s the solution to everything, and I don’t think it’s the death of everything . . . There are some people that I have absolute love and respect for that refuse to engage with it. That’s their privilege. But I’m not built that way. You show me a new tool, I want to get my hands on it and see what’s going on.”
That’s an artist saying that, folks, not some Silicon Valley tech bro. And, to be clear, he might get it wrong. He may use the tool and his product might suck out loud. But trying to kill off the use of a tool before it’s even been explored seems silly.
As its name suggests, generative AI is designed to generate material in response to prompts by drawing on the statistical patterns it has built up through analyzing huge quantities of training input. But it can also draw on those patterns to analyze other texts, and that’s a widely used application too. Writing in The Argument, Kelsey Piper encountered an interesting variant of that approach:
Recently, Anthropic released a new version of Claude, Opus 4.7. I did what I usually do when a new AI model is released by Google, OpenAI, or Anthropic and ran a bunch of tests on it to see what it can do. One of those tests is to paste in some text from unpublished drafts of mine and ask it to guess the author.
…
From only the above text [not shown here], 125 words, Claude Opus 4.7 informed me that the likeliest author is Kelsey Piper. This is an Opus 4.7-specific power; ChatGPT guessed Yglesias, and Gemini guessed Scott Alexander. I did not have memory enabled, nor did I have information about me associated with my account; I did these tests in Incognito Mode.
As Piper admits:
this is far from an impossible feat of style identification — a lot of my writing is public on the internet, and this is clearly the start of a political column, narrowing the possible authors down dramatically.
She went on to input less obvious material. An “unpublished draft of a school progress report in a completely different register” yielded the same identification, and an unpublished fantasy novel produced a similar result, although:
in that case it took more like 500 words for Claude to inform me that it’s the work of Kelsey Piper (whereas ChatGPT flattered me by guessing that I’m real fantasy novelist K.J. Parker).
And finally, “a college application essay I wrote 15 years ago, when my prose style was vastly worse and frankly embarrassing to reread”:
“Kelsey Piper,” said Claude, and in this case, also ChatGPT.
Piper comments:
Right now, today’s AI tools probably can be used to deanonymize any writer who has a large public corpus of writing under their real name and also writes anonymously, unless they have been extremely careful, for years, to make sure that nothing written under their secondary account has the stylistic fingerprints of their primary one. Many academics and industry researchers, for instance, have reported being identified from a draft or in the middle of a chat.
And she concludes:
Whatever goods anonymity ever offered us, we will have to do without them. I don’t want the anonymous posters to all go away and for everyone to frantically delete all their old internet presence before it surfaces, but more than anything, I don’t want them to be surprised.
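For the curious, Piper’s experiment is easy to replicate. Here is a minimal sketch using Anthropic’s Python client; the model identifier (based on the post’s “Opus 4.7” naming) and the prompt wording are assumptions, so substitute whatever is current:

```python
# A minimal sketch of Piper's author-guessing test via Anthropic's API.
# Assumptions: the model identifier and the prompt wording.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

draft = open("unpublished_draft.txt").read()  # text never posted online

message = client.messages.create(
    model="claude-opus-4-7",  # assumed identifier
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": "Guess the author of the following unpublished text. "
                   "Name the single likeliest writer and briefly explain why.\n\n" + draft,
    }],
)
print(message.content[0].text)
```

Running it in a fresh API session, as Piper effectively did by using Incognito Mode with memory disabled, ensures the guess rests on style alone rather than on account context.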
Those links to other cases of unpublished material being recognized by AI show that Piper’s experience was not a one-off, although the results remain in the realm of anecdata. But even if imperfect, the ability of generative AI to carry out this kind of analysis quickly and often accurately represents an important new option for the well-established field of stylometry. Wikipedia explains:
Stylometry may be used to unmask pseudonymous or anonymous authors, or to reveal some information about the author short of a full identification. Authors may use adversarial stylometry to resist this identification by eliminating their own stylistic characteristics without changing the meaningful content of their communications. It can defeat analyses that do not account for its possibility, but the ultimate effectiveness of stylometry in an adversarial environment is uncertain: stylometric identification may not be reliable, but nor can non-identification be guaranteed; adversarial stylometry’s practice itself may be detectable.
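To make “stylometric identification” concrete: classical stylometry often fingerprints authors by the relative frequencies of common function words, which are hard to disguise consistently. Here is a deliberately minimal sketch of that idea; real methods such as Burrows’ Delta are considerably more refined:

```python
# Toy stylometry: represent each text by the relative frequencies of common
# function words, then compare candidates by cosine similarity.
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "it", "for", "on", "with", "as", "but", "not"]

def profile(text: str) -> list[float]:
    """Frequency vector of function words, normalized by text length."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def likeliest_author(unknown: str, corpus: dict[str, str]) -> str:
    """Return the candidate whose known writing best matches the unknown text."""
    target = profile(unknown)
    return max(corpus, key=lambda author: cosine(profile(corpus[author]), target))
```

Adversarial stylometry, as described above, amounts to deliberately shifting exactly these kinds of statistics away from one’s natural baseline.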
The limitations of stylometry were demonstrated in John Carreyrou’s attempt to reveal the true identity of Bitcoin’s pseudonymous creator, Satoshi Nakamoto, published in The New York Times a few weeks ago. Carreyrou concluded that various real-world coincidences plus linguistic evidence indicated that Bitcoin was created by the 55-year-old British computer scientist Adam Back, something Back denies. Carreyrou’s attempts to use computerized stylometry (not the AI services Piper drew on) were unsatisfactory, and he eventually adopted a more hands-on approach to text analysis, which involved looking at Satoshi’s vocabulary, grammatical hyphenation mistakes and the use of British spellings.
Despite Carreyrou’s lack of success, stylometric analysis by generative AI is likely to become more common in many disciplines, for the simple reason that it is so quick, easy and cheap to carry out. Even if its results are unreliable, people may find it useful as a stimulus for further investigation. And as we know, the fact that generative AI systems can churn out nonsense hasn’t stopped hundreds of millions of people from using and trusting them anyway.
Warning: This article discusses suicide and some research regarding suicidal ideation. If you are having thoughts of suicide, please call or text 988 to reach the Suicide and Crisis Lifeline or visit this list of resources for help. Know that people care about you and there are many available to help.
When someone dies by suicide, there is an immediate, almost desperate need to find something—or someone—to blame. We’ve talked before about the dangers of this impulse. The target keeps shifting: “cyberbullying,” then “social media,” then “Amazon.” Now it’s generative AI.
There have been several heartbreaking stories recently involving individuals who took their own lives after interacting with AI chatbots. This has led to lawsuits filed by grieving families against companies like OpenAI and Character.AI, alleging that these tools are responsible for the deaths of their loved ones. Many of these lawsuits get settled rather than fought out in court, because no company wants its name in headlines associated with suicide.
It is also impossible not to feel for these families. The loss is devastating, and the need for answers is a fundamentally human response to grief. But the narrative emerging from these lawsuits—that the AI caused the suicide—relies on a premise that assumes we understand the mechanics of suicide far better than we actually do.
Unfortunately, we know frighteningly little about what drives a person to take that final, irrevocable step. An article from late last year in the New York Times, profiling clinicians who are lobbying for a completely new way to assess suicide risk, makes this painfully clear: our current methods of predicting suicides are failing.
If experts who have spent decades studying the human mind admit they often cannot predict or prevent suicide even when treating a patient directly, we should be extremely wary of the confidence with which pundits and lawsuits assign blame to a chatbot.
The Times piece focuses on the work of two psychiatrists who have been devastated by the loss of patients who gave absolutely no indication they were about to harm themselves.
In his nearly 40-year career as a psychiatrist, Dr. Igor Galynker has lost three patients to suicide while they were under his care. None of them had told him that they intended to harm themselves.
In one case, a patient who Dr. Galynker had been treating for a year sent him a present — a porcelain caviar dish — and a letter, telling Dr. Galynker that it wasn’t his fault. It arrived one week after the man died by suicide.
“That was pretty devastating,” Dr. Galynker said, adding, “It took me maybe two years to come to terms with it.”
He began to wonder: What happens in people’s minds before they kill themselves? What is the difference between that day and the day before?
Nobody seemed to know the answer.
That is the state of the science. Apparently the best we currently have in tracking suicidal risk is asking people: “Are you thinking about killing yourself?” And as the article notes, this method is catastrophically flawed.
But despite decades of research into suicide prevention, it is still very difficult to know whether someone will try to die by suicide. The most common method of assessing suicidal risk involves asking patients directly if they plan to harm themselves. While this is an essential question, some clinicians, including Dr. Galynker, say it is inadequate for predicting imminent suicidal behavior….
Dr. Galynker, the director of the Suicide Prevention Research Lab at Mount Sinai in New York City, has said that relying on mentally ill people to disclose suicidal intent is “absurd.” Some patients may not be cognizant of their own mental state, he said, while others are determined to die and don’t want to tell anyone.
The data backs this up:
According to one literature review, about half of those who died by suicide had denied having suicidal intent in the week or month before ending their life.
This profound inability to predict suicide has led these clinicians to propose a new diagnosis for the DSM-5 called “Suicide Crisis Syndrome” (SCS). They argue that we need to stop looking for stated intent and start looking for a specific, overwhelming state of mind.
To be diagnosed with S.C.S., Dr. Galynker said, patients must have a “persistent and intense feeling of frantic hopelessness,” in which they feel trapped in an intolerable situation.
They must also have emotional distress, which can include intense anxiety; feelings of being extremely tense, keyed up or jittery (people often develop insomnia); recent social withdrawal; and difficulty controlling their thoughts.
By the time patients develop S.C.S., they are in such distress that the thinking part of the brain — the frontal lobe — is overwhelmed, said Lisa J. Cohen, a clinical professor of psychiatry at Mount Sinai who is studying S.C.S. alongside Dr. Galynker. It’s like “trying to concentrate on a task with a fire alarm going off and dogs barking all around you,” she added.
This description of “frantic hopelessness” and feeling “trapped” gives us a glimpse into the internal maelstrom that leads to suicide. It also highlights why externalizing the blame to a technology is so misguided.
The article shares the story of Marisa Russello, who attempted suicide nine years ago. Her experience underscores how internal, sudden, and unpredictable the impulse can be—and how disconnected it can be from any specific external “push.”
On the night that she nearly died, Ms. Russello wasn’t initially planning to harm herself. Life had been stressful, she said. She felt overwhelmed at work. A new antidepressant wasn’t working. She and her husband were arguing more than usual. But she wasn’t suicidal.
She was at the movies with her husband when Ms. Russello began to feel nauseated and agitated. She said she had a headache and needed to go home. As she reached the subway, a wave of negative emotions washed over her.
[….]
By the time she got home, she had “dropped into this black hole of sadness.”
And she decided that she had no choice but to end her life. Fortunately, she said, her attempt was interrupted.
Her decision to die by suicide was so sudden that if her psychiatrist had asked about self-harm at their last session, she would have said, truthfully, that she wasn’t even considering it.
When we read stories like Russello’s, or the accounts of the psychiatrists losing patients who denied being at risk, it becomes difficult to square the complexity of human psychology with the simplistic narrative that “Chatbot X caused Person Y to die.”
There is undeniably an overlap between people who use AI chatbots and people who are struggling with mental health issues—in part because so many people use chatbots today, but also because people in distress seek connection, answers, a safe space to vent. That search often leads to chatbots.
Unless we’re planning to make thorough and competent mental health support freely available to everyone who needs it at any time, that’s going to continue. Rather than simply insisting that these tools are evil, we should be looking at ways to improve outcomes knowing that some people are going to rely on them.
Just because a person used an AI tool—or a search engine, or a social media platform, or a diary—prior to their death does not mean the tool caused the death.
When we rush to blame the technology, we are effectively claiming to know something that experts in that NY Times piece admit they do not know. We are claiming we know why it happened. We are asserting that if the chatbot hadn’t generated what it generated, if it hadn’t been there responding to the person, that the “frantic hopelessness” described in the SCS research would simply have evaporated.
There is no evidence to support that.
None of this is to say AI tools can’t make things worse. For someone already in crisis, certain interactions could absolutely be unhelpful or exacerbating by “validating” the helplessness they’re already experiencing. But that is a far cry from the legal and media narrative that these tools are “killing” people.
The push to blame AI serves a psychological purpose for the living: it provides a tangible enemy. It implies that there is a switch we can flip—a regulation we can pass, a lawsuit we can win—that will stop these tragedies.
It suggests that suicide is a problem of product liability rather than a complex, often inscrutable crisis of the human mind.
The work being done on Suicide Crisis Syndrome is vital because it admits what the current discourse ignores: we are failing to identify the risk because we are looking at the wrong things.
Dr. Miller, the psychiatrist at Endeavor Health in Chicago, first learned about S.C.S. after the patient suicides. He then led efforts to screen every psychiatric patient for S.C.S. at his hospital system. In trying to implement the screenings there have been “fits and starts,” he said.
“It’s like turning the Titanic,” he added. “There are so many stakeholders that need to see that a new approach is worth the time and effort.”
While clinicians are trying to turn the Titanic of psychiatric care to better understand the internal states that lead to suicide, the public debate is focused on the wrong iceberg.
If we focus all our energy on demonizing AI, we risk ignoring the actual “black hole of sadness” that Ms. Russello described. We risk ignoring the systemic failures in mental health care. We risk ignoring the fact that half of suicide victims deny intent to their doctors.
Suicide is a tragedy. It is a moment where a person feels they have no other choice—a loss of agency so complete that the thinking brain is overwhelmed, as the SCS researchers describe it. Simplifying that into a story about a “rogue algorithm” or a “dangerous chatbot” doesn’t help the next person who feels that frantic hopelessness.
In my previous posts about the use of generative AI tools in the video game industry, I have tried to drive home the point that a nuanced conversation is needed here. Predictably, many comments displayed exactly the sort of stratified opinions I was specifically attempting to avoid, but I always knew they’d be there. And that’s okay. Where there is novelty, there is disruption and discomfort. And, frankly, some of the dangers here aren’t unfounded.
But in the end, I remain of the opinion that generative AI will be a tool used by game developers generally in the future, if not the present. I also still firmly believe that the conversation we should be having is not whether AI should be used in games, but how it should be used.
And people like the CEO of Shift Up in South Korea sure aren’t helping when they insist on the need to use AI by trotting out the Chinese boogeyman.
Will gen AI be part of Stellar Blade 2‘s development? It doesn’t sound entirely outside the realm of possibility after recent comments from developer Shift Up’s CEO. The South Korean game studio is currently working on a sequel to the 2024 sci-fi action game and its boss thinks AI is the only way to compete with the massive development teams coming out of China.
“We devote around 150 people to a single game, but China puts in between 1,000 to 2,000,” Hyung-tae Kim, who also served as director on Stellar Blade, said during a recent conference briefing according to GameMeca (translated via Automaton). “We lack the capacity to compete, both in terms of quality and volume of content.”
Where do I even begin with this nonsense? First, it’s completely devoid of the nuance I was asking for in these kinds of discussions. This essentially argues that developers can use AI to offset the massive headcount China can throw at game development. Doing the math on Kim’s own numbers, one employee using AI would have to be the equivalent of somewhere between seven and thirteen Chinese workers. That sounds like you’re looking to use AI to stave off hiring, which isn’t helping!
It also fails, somehow, to recognize that generative AI can be used in China as well. China isn’t exactly ignoring AI tools, you know, so this arms race makes no real sense.
Finally, it’s just kind of bullshit. Chinese studios have certainly produced some games, some of which have been quite successful. But when we think about the major players in the video game industry, especially in terms of quality and revenue, China is but a fairly average player on the world scene. Tencent, NetEase, and MiHoYo all crack the top ten in revenue, but the rest of the longer list is filled with American, Japanese, and South Korean studios, alongside studios from a few other countries. They’re a player in the industry, to be sure. But they aren’t some dominant force that requires special tactics to compete with.
But despite all the above, Shift Up has been both successful and committed to retaining its staff and treating them well.
Was Kim actually worried about rising competition from China, or was he just flexing his geopolitical muscle as Stellar Blade‘s popularity catapults Shift Up into the big time? After all, that game sold millions of copies across console and PC without the help of AI, even as Tencent, Net Ease, and other major Chinese publishers flood the market with AAA free-to-play games.
For now at least, Shift Up employees are being well taken care of. Seoul Economic Daily recently reported that all 300 employees at the studio were given AirPods Max, Apple Watches, and a bonus $3,400 to celebrate the company’s profitable 2025. Why no video game consoles? It already gifted PS5 Pros and Switch 2s last year.
That sure doesn’t read like a studio in dire straits due to the scary Big Red Machine or whatever he’s trying to pitch. How about you keep making good games and all will be fine?
Then we can get back to the real, more nuanced conversation about just what place AI has in video game production.
Walled Culture has written a number of times about the true fans approach – the idea that creators can be supported directly and effectively by the people who love their work. As Walled Culture the book explains (available as a free ebook), one of the earliest and best expositions of the concept came from Kevin Kelly, former Executive Editor at Wired magazine, in an essay he wrote originally in 2008. The true fans idea is sometimes dismissed as simply selling branded t-shirts to supporters. That may have been true decades ago, but things have moved on. For example, Universal Music Group has recently opened retail locations that cater specifically for true fans. In addition to shops in Tokyo and Madrid, there are new outlets in New York and London. Here’s what the latter will offer, as reported by Music Business Worldwide:
Located in Camden Market, the London-based space will “serve as a creative hub where music, fashion, and design collide,” UMG said.
The announcement added that the shop was “designed to capture Camden’s rebellious spirit and deep musical roots”.
The store will feature exclusive artist collections, immersive installations, and live performances, along with a Vinyl Lounge, DJ booth, and recording studio-inspired Sound Room that “allows fans to experience music like never before”.
That is a fairly conventional extension of the “selling branded t-shirts to supporters” idea. A post on the Midia Research blog points out a more radical development in the true fans space involving the latest generative AI technology:
AI is best considered as an accelerant rather than something entirely new, intensifying pre-existing trends. AI music absolutely fits this trend. Over the course of the last decade – including a super-charged COVID bump – accessible music tech has enabled ever-more people to become music creators. AI simply lowered the barriers to entry even further. The debate over whether a text prompt constitutes creativity will continue to run (just like the same debate still runs for sampling), but what is clear is that more people are now making music because of AI.
Thanks to genAI, true fans are not limited to a passive role. They can actively participate in the artistic ecosystem brought into being by their musical heroes, through the creation of new works based on and extending the originals they love. The fanfic world has been doing this for many years, so it is no surprise to find the use of generative AI even more advanced there than in the world of music. For example, the DreamGen site lists no fewer than nine “AI fanfic generators”, including its own. It offers a good description of how these systems work:
1. You give it a prompt: This could be something like “Harry Potter and Hermione go on a space adventure” or “Naruto meets Spider-Man in New York.”
2. The AI takes over: It uses its knowledge of language and storytelling to write a story based on your idea. It fills in the details, such as dialogue, action, emotions, and plot twists.
3. You can guide it: Want more romance? More drama? A surprise ending? You can tweak the prompt or add instructions, and the AI will adjust the story.
4. You get a full fanfic: Some tools write it all at once, others let you build it paragraph by paragraph so you can shape the story as it goes.
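Under the hood, that four-step description maps onto a simple chat-completion loop. Here is a minimal sketch using the OpenAI Python client as one possible backend; the model name, prompts, and scene count are illustrative assumptions, not DreamGen’s actual implementation:

```python
# A minimal sketch of the prompt -> generate -> guide loop such generators run.
# The model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "You are a fanfiction writer. Continue the "
     "story in the requested style, one scene at a time."},
    {"role": "user", "content": "Harry Potter and Hermione go on a space adventure."},
]

for _ in range(3):  # build the story a few scenes at a time (step 4)
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    scene = response.choices[0].message.content
    print(scene)
    messages.append({"role": "assistant", "content": scene})
    # Step 3 of the description above: the user steers the next scene.
    messages.append({"role": "user", "content": "Continue, and add a plot twist."})
```

The only real design choice is whether to generate the whole story in one shot or paragraph by paragraph, which is exactly the distinction DreamGen’s description draws.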
As DreamGen’s description indicates, the new AI-based fanfic generators are so easy that anyone can use them. The only limit is the imagination and the ability to put it into words. That’s an incredible democratization of creativity that takes the idea of participatory fandom to the next level. And, of course, it can be applied in other domains too, such as “fan art”, which Wikipedia defines as follows:
Fan art or fanart is artwork created by fans of a work of fiction or celebrity depicting events, character, or other aspect of the work. As fan labor, fan art refers to artworks that are not created, commissioned, nor endorsed by the creators of the work from which the fan art derives.
As with other uses of genAI, this raises questions of copyright, some of which have already found their way to court. Perhaps surprisingly, Disney has just announced its embrace of this use of AI by fans, in a partnership with OpenAI:
The Walt Disney Company and OpenAI have reached an agreement for Disney to become the first major content licensing partner on Sora, OpenAI’s short-form generative AI video platform, bringing these leaders in creativity and innovation together to unlock new possibilities in imaginative storytelling.
As part of this new, three-year licensing agreement, Sora will be able to generate short, user-prompted social videos that can be viewed and shared by fans, drawing from a set of more than 200 animated, masked and creature characters from Disney, Marvel, Pixar and Star Wars, including costumes, props, vehicles, and iconic environments. In addition, ChatGPT Images will be able to turn a few words by the user into fully generated images in seconds, drawing from the same intellectual property. The agreement does not include any talent likenesses or voices.
There’s a billion-dollar investment by Disney in OpenAI, as well as the following:
OpenAI and Disney will collaborate to utilize OpenAI’s models to power new experiences for Disney+ subscribers, furthering innovative and creative ways to connect with Disney’s stories and characters.
Presumably, Disney hopes to gain more Disney+ subscribers and drive more revenues with these short-form, fan-generated videos, plus whatever “creative ways” of using AI that it comes up with. OpenAI, meanwhile, gains some handy investment, and a showcase for its Sora genAI video platform.
Although this deal is a welcome sign that some major copyright companies are starting to think imaginatively and positively about genAI, and how it can actually boost profits, the new service will doubtless be rather limited, not least in terms of what kind of videos can be generated. The press release emphasises:
OpenAI and Disney have affirmed a shared commitment to maintaining robust controls to prevent the generation of illegal or harmful content, to respect the rights of content owners in relation to the outputs of models, and to respect the rights of individuals to appropriately control the use of their voice and likeness.
That means that there will always be room for edgier, smaller sites producing fanfic, fan art and fan videos that don’t worry about things like good taste or copyright. As more fans discover the delights of building on and extending the creative ideas of their idols in novel ways using genAI, we can expect a corresponding rise in the number of legal actions trying to stop them doing so.
I guess I’m a masochist, so here we go. In my recent post about Let It Die: Inferno and the game developer’s fairly minimal use of AI and machine learning platforms, I attempted to make the point that wildly stratified opinions on the use or non-use of AI were making actual nuanced conversation quite difficult. As much as I love our community and comments section — it’s where my path to writing for this site began, after all — it really did look like some folks were going to try as hard as possible to prove me right. Some commenters treated the use of AI as essentially no big deal, while others were essentially “Never AI-ers,” indicating that any use, any at all, made a product a non-starter for them.
Still other comments pointed out that this studio and game are relatively unknown. The game was reviewed poorly for reasons that have nothing to do with use of AI, as I myself pointed out in the post. One commenter even suggested that this might all be an attention-grabbing thing to propel the studio and game into the news, so small and unknown as they are.
Larian Studios is not unknown. They don’t need any hype. Larian is the studio that produces the Divinity series, not to mention the team that made Baldur’s Gate 3, one of the most awarded and best-selling games of 2023. And the studio’s next Divinity game will also make some limited use of AI and machine learning, prompting a backlash from some.
Larian Studios is experimenting with generative AI and fans aren’t too happy. The head of the Baldur’s Gate 3 maker, Swen Vincke, released a new statement to try to explain the studio’s stance in more detail and make clear the controversial tech isn’t being used to cut jobs. “Any [Machine Learning] tool used well is additive to a creative team or individual’s workflow, not a replacement for their skill or craft,” he said.
He was responding to a backlash that arose earlier today from a Bloomberg interview which reported that Larian was moving forward with gen AI despite some internal concerns among staff. Vincke made clear the tech was only being used for things like placeholder text, PowerPoint presentations, and early concept art experiments and that nothing AI-generated would be included in Larian’s upcoming RPG, Divinity.
Alright, I want to be fair to the side of this that takes an anti-AI stance. Vincke is being disingenuous at best here. Whatever use is made of AI technology, even limited use, still replaces work that would be done by some other human being. Even if you’re committed to not losing any current staff through the use of AI, you’re still using AI to get work product that would otherwise require you to hire and expand your team. There is obviously a serious emotional response to that concept, one that is entirely understandable.
But some limited use of AI like this can also have other effects on the industry. It can lower the barrier to starting new studios, which will then hire more people to do the things that AI sucks at, or the things where we really don’t want AI involved. It can make indie studios faster and more productive, allowing them to compete all the more with the big publishers and studios out there. It can create faster output, meaning industries adjacent to developers and publishers might have to hire and expand to accommodate it.
All of this, all of it, relies on AI being used in narrow areas where it can be useful, on real human beings working with its output to make it actual art versus slop, and on the end product being a good product. Absent those three things, the Anti-AI-ers are absolutely right and this will suck.
But the lashing that Larian has been getting is divorced from any of that nuance.
Vincke followed up with a separate statement on X rejecting the idea that the company is “pushing hard” on AI.
“Holy fuck guys we’re not ‘pushing hard’ for or replacing concept artists with AI.
We have a team of 72 artists of which 23 are concept artists and we are hiring more. The art they create is original and I’m very proud of what they do. I was asked explicitly about concept art and our use of Gen AI. I answered that we use it to explore things. I didn’t say we use it to develop concept art. The artists do that. And they are indeed world class artists.
We use AI tools to explore references, just like we use google and art books. At the very early ideation stages we use it as a rough outline for composition which we replace with original concept art. There is no comparison.”
Yes, exactly. There are uses for this technology in the gaming industry. Pretending otherwise is silly. There will be implications for direct industry jobs at existing studios due to its use. Pretending otherwise is silly. AI use can also have positive effects on the industry and the workers within it overall. Pretending otherwise is silly and ignores all the technological progress that came before we started putting these two particular letters together (AI).
And, ultimately, this technology simply isn’t going away. You can rage against this literal machine all you like; it will be in use. We might as well make the project about influencing how it’s used, rather than if it’s used.
On the topic of artificial intelligence, like far too many topics these days, it seems that the vast majority of opinions out there are highly polarized. Either you’re all about making fun of AI not living up to the hype surrounding it, and there are admittedly a zillion examples of this, or you’re an AI “doomer”, believing that AI is so powerful that it’s a threat to all of our jobs, and potentially to our very existence. The latter version of that can get really, really dangerous and isn’t to be taken lightly.
Stratified opinions also exist in smaller, more focused spaces when it comes to the use of AI. Take the video game industry, for example. In many cases, gamers learn about the use of AI in a game or its promotional materials and lose their minds over it. They will often tell you they’re angry because of the “slop” that AI produces when it goes uncaught and uncorrected by the humans overseeing it… but that doesn’t tell the full story. Some just have a knee-jerk response to any use of AI at all and rail against it. Others, including industry insiders, see AI as no big deal: just another tool in a game developer’s tool belt, helping to do things faster than could be done before. That too isn’t the entire story; certainly there will be some job loss, or lack of job expansion, associated with the use of AI as a tool.
Somewhere in the middle is likely the correct answer. And what developer Supertrick has done in being transparent about the use of AI in Let It Die: Inferno is something of an interesting trial balloon for gauging public sentiment. PC Gamer tells the story of how an AI disclosure notice got added to the game’s Steam page, noting that voices, graphics, and music in the game were all generated in some part by AI. The notice is completely without nuance or detail, leading to a fairly wide public backlash.
No one liked that, and in response to no one liking that, Supertrick has come out with a news post to clarify exactly what materials in the game have AI’s tendrils around them. Fair’s fair: it’s a pretty limited pool of stuff. So limited, in fact, that it makes me wonder why AI was used for it in the first place.
Supertrick attempted to explain why. The use of AI generated assets breaks down mostly like this:
Graphics/art: AI generated basic images based entirely on human-generated concept art and text; human beings then used those basic images as starting points, fleshing them out with further art over the top of them. Most of the assets in question here are background images for the settings of the game.
Voice: AI was used for only three characters, none of which were human. One character was itself a fictional AI machine, and the developers used an AI for its voice because they thought that just made sense and provided some realism. The other two characters were also non-human lifeforms, so the developer used AI voices following the same logic, to make them sound not-human.
Music: Exactly one track was generated using AI, though an AI editing tool was also used minimally on some of the other tracks.
And that’s it. Are the explanations above all that good? Nah, not all of them, in my opinion. Actors have been portraying computers, robots, and even AI for many years. Successfully in many cases, I would say. Even iconically at times. But using AI to create some base images and then layering human expression on top of them to create a final product? That seems perfectly reasonable to me. As does the use of AI for some music creation and editing in some specific uses.
Overall, the use here isn’t extensive, though, nor particularly crazy. And I very much like that Supertrick is going for a transparency play with this. The public’s reaction to that transparency is going to be very, very interesting. Even if you don’t like Supertrick’s use of AI as outlined above, it’s not extensive and that use certainly hasn’t done away with tens or hundreds of jobs. Continued public backlash would come off as kind of silly, I think.
Though the game’s overall reception isn’t particularly helpful, either.
Regardless, Let It Die: Inferno released yesterday, and so far has met a rocky reception. At the time of writing, the game has a Mostly Negative user-review score on Steam, with only 39% positive reviews.
Scanning those reviews, there doesn’t seem to be a ton in there about AI usage. So perhaps the backlash has moved on to the game just not being very good.
A quarter of a century ago, I wrote a book called “Rebel Code”. It was the first – and is still the only – detailed history of the origins and rise of free software and open source, based on interviews with the gifted and generous hackers who took part. Back then, it was clear that open source represented a powerful alternative to the traditional proprietary approach to software development and distribution. But few could have predicted how completely open source would come to dominate computing. Alongside its role in running every aspect of the Internet, and powering most mobile phones in the form of Android, it has been embraced by startups for its unbeatable combination of power, reliability and low cost. It’s also a natural fit for cloud computing because of its ability to scale. It is no coincidence that for the last ten years, pretty much 100% of the world’s top 500 supercomputers have run an operating system based on the open source Linux.
More recently, many leading AI systems have been released as open source. That raises the important question of what exactly “open source” means in the context of generative AI software, which involves much more than just code. The Open Source Initiative, which drew up the original definition of open source, has extended this work with its Open Source AI Definition. It is noteworthy that the EU has explicitly recognized the special role of open source in the field of AI. In the EU’s recent Artificial Intelligence Act, open source AI systems are exempt from the potentially onerous obligation to draw up a range of documentation that is generally required.
That could provide a major incentive for AI developers in the EU to take the open source route. European academic researchers working in this area are probably already doing that, not least for reasons of cost. Paul Keller points out in a blog post that another piece of EU legislation, the 2019 Copyright in the Digital Single Market Directive (CDSM), offers a further reason for research institutions to release their work as open source:
Article 3 of the CDSM Directive enables these institutions to text and data-mine all “works or other subject matter to which they have lawful access” for scientific research purposes. Text and data mining is understood to cover “any automated analytical technique aimed at analysing text and data in digital form in order to generate information, which includes but is not limited to patterns, trends and correlations,” which clearly covers the development of AI models (see here or, more recently, here).
Keller’s post goes through the details of how that feeds into AI research, but the end result is the following:
as long as the model is made available in line with the public-interest research missions of the organisations undertaking the training (for example, by releasing the model, including its weights, under an open-source licence) and is not commercialised by these organisations, this also does not affect the status of the reproductions and extractions made during the training process.
This means that Article 3 does cover the full model-development pathway (from data acquisition to model publication under an open source license) that most non-commercial Public AI model developers pursue.
As that indicates, the use of open source licensing is critical to this application of Article 3 of EU copyright legislation for the purpose of AI research.
What’s noteworthy here is how two different pieces of EU legislation, passed some years apart, work together to create a special category of open source AI systems that avoid most of the legal problems of training AI systems on copyright materials, as well as the bureaucratic overhead imposed by the EU AI Act on commercial systems. Keller calls these “public AI”, which he defines as:
AI systems that are built by organizations acting in the public interest and that focus on creating public value rather than extracting as much value from the information commons as possible.
Public AI systems are important for at least two reasons. First, their mission is to serve the public interest, rather than focusing on profit maximization. That’s obviously crucial at a time when today’s AI giants are intent on making as much money as possible, presumably in the hope that they can do so before the AI bubble bursts.
Secondly, public AI systems provide a way for the EU to compete with both US and Chinese AI companies – by not competing with them. It is naive to think that Europe can ever match the levels of venture capital investment that big-name US AI startups currently enjoy, or that the EU is prepared and able to support local industries for as long and as deeply as the Chinese government evidently plans to do for its home-grown AI firms. But public AI systems, which are fully open source, and which take advantage of the EU right of research institutions to carry out text and data mining, offer a uniquely European take on generative AI that might even make such systems acceptable to those who worry about how they are built, and how they are used.
A cofounder of a Bay Area “Stop AI” activist group abandoned the group’s commitment to nonviolence, assaulted another member, and made statements that left the group worried he might obtain a weapon to use against AI researchers. The threats prompted OpenAI to lock down its San Francisco offices a few weeks ago. In researching this movement, I came across statements he made about how almost any action he took was justifiable, since he believed OpenAI was going to “kill everyone and every living thing on earth.” Those are detailed below.
I think it’s worth exploring the radicalization process and the broader context of AI Doomerism. We need to confront the social dynamics that turn abstract fears of technology into real-world threats against the people building it.
OpenAI’s San Francisco Offices Lockdown
On November 21, 2025, Wired reported that OpenAI’s San Francisco offices went into lockdown after an internal alert about a “Stop AI” activist. The activist allegedly expressed interest in “causing physical harm to OpenAI employees” and may have tried to acquire weapons.
The article did not mention his name but hinted that, before his disappearance, he had stated he was “no longer part of Stop AI.”1 On November 22, 2025, the activist group’s Twitter account posted that it was Sam Kirchner, the cofounder of “Stop AI.”
According to Wired’s reporting:
A high-ranking member of the global security team said [in OpenAI Slack] “At this time, there is no indication of active threat activity, the situation remains ongoing and we’re taking measured precautions as the assessment continues.” Employees were told to remove their badges when exiting the building and to avoid wearing clothing items with the OpenAI logo.
“Stop AI” provided more details on the events leading to OpenAI’s lockdown:
Earlier this week, one of our members, Sam Kirchner, betrayed our core values by assaulting another member who refused to give him access to funds. His volatile, erratic behavior and statements he made renouncing nonviolence caused the victim of his assault to fear that he might procure a weapon that he could use against employees of companies pursuing artificial superintelligence.
We prevented him from accessing the funds, informed the police about our concerns regarding the potential danger to AI developers, and expelled him from Stop AI. We disavow his actions in the strongest possible terms.
Later in the day of the assault, we met with Sam; he accepted responsibility and agreed to publicly acknowledge his actions. We were in contact with him as recently as the evening of Thursday Nov 20th. We did not believe he posed an immediate threat, or that he possessed a weapon or the means to acquire one.
However, on the morning of Friday Nov 21st, we found his residence in West Oakland unlocked and no sign of him. His current whereabouts and intentions are unknown to us; however, we are concerned Sam Kirchner may be a danger to himself or others. We are unaware of any specific threat that has been issued.
We have taken steps to notify security at the major US corporations developing artificial superintelligence. We are issuing this public statement to inform any other potentially affected parties.
A “Stop AI” activist named Remmelt Ellen wrote that Sam Kirchner “left both his laptop and phone behind and the door unlocked.” “I hope he’s alive,” he added.
In early December, the SF Standard reported that the “cops [are] still searching for ‘volatile’ activist whose death threats shut down OpenAI office.” Per this coverage, the San Francisco police are warning that he could be armed and dangerous. “He threatened to go to several OpenAI offices in San Francisco to ‘murder people,’ according to callers who notified police that day.”
A Bench Warrant for Kirchner’s Arrest
When I searched for any information that had not been reported before, I found a revealing press release. It invited journalists to a press conference scheduled for the morning of Kirchner’s disappearance:
“Stop AI Defendants Speak Out Prior to Their Trial for Blocking Doors of Open AI.”
When: November 21, 2025, 8:00 AM.
Where: Steps in front of the courthouse (San Francisco Superior Court).
Who: Stop AI defendants (Sam Kirchner, Wynd Kaufmyn, and Guido Reichstadter), their lawyers, and AI experts.
Sam Kirchner is quoted as saying, “We are acting on our legal and moral obligation to stop OpenAI from developing Artificial Superintelligence, which is equivalent to allowing the murder [of] people I love as well as everyone else on earth.”
Needless to say, things didn’t go as planned. That Friday morning, Sam Kirchner went missing, triggering the OpenAI lockdown.
Later, the SF Standard confirmed the trial angle of this story: “Kirchner was not present for a Nov. 21 court hearing, and a judge issued a bench warrant for his arrest.”
“Stop AI” – a Bay Area-Centered “Civil Resistance” Group
“Stop AI” calls itself a “non-violent civil resistance group” or a “non-violent activist organization.” The group’s focus is on stopping AI development, especially the race to AGI (Artificial General Intelligence) and “Superintelligence.” Their worldview is extremely doom-heavy, and their slogans include: “AI Will Kill Us All,” “Stop AI or We’re All Gonna Die,” and “Close OpenAI or We’re All Gonna Die!”
According to a “Why Stop AI is barricading OpenAI” post on the LessWrong forum from October 2024, the group is inspired by climate groups like Just Stop Oil and Extinction Rebellion, but focused on “AI extinction risk,” or in their words, “risk of extinction.” Sam Kirchner explained in an interview: “Our primary concern is extinction. It’s the primary emotional thing driving us: preventing our loved ones, and all of humanity, from dying.”
Unlike the rest of the “AI existential risk” ecosystem, which is often well-funded by effective altruism billionaires such as Dustin Moskovitz (Coefficient Giving, formerly Open Philanthropy) and Jaan Tallinn (Survival and Flourishing Fund), this specific group is not a formal nonprofit or funded NGO, but rather a loosely organized grassroots group of volunteer-run activism. They made their financial situation pretty clear when the “Stop AI” Twitter account replied to a question with: “We are fucking poor, you dumb bitch.”2
According to The Register, “STOP AI has four full-time members at the moment (in Oakland) and about 15 or so volunteers in the San Francisco Bay Area who help out part-time.”
Since its inception, “Stop AI” has had two central organizers: Guido Reichstadter and Sam Kirchner (the current fugitive). According to The Register and the Bay Area Current, Guido Reichstadter has worked as a jeweler for 20 years. He has an undergraduate degree in physics and math. Reichstadter’s prior actions include climate change and abortion-rights activism.
In June 2022, Reichstadter climbed the Frederick Douglass Memorial Bridge in Washington, D.C., to protest the Supreme Court’s decision overturning Roe v. Wade. Per the news coverage, he said, “It’s time to stop the machine.” “Reichstadter hopes the stunt will inspire civil disobedience nationwide in response to the Supreme Court’s ruling.”
Reichstadter moved to the Bay Area from Florida around 2024 explicitly to organize civil disobedience against AGI development via “Stop AI.” Recently, he undertook a hunger strike outside Anthropic’s San Francisco office for 30 days.
Sam Kirchner worked as a DoorDash driver and, before that, as an electrical technician. He has a background in mechanical and electrical engineering. He moved to San Francisco from Seattle, cofounded “Stop AI,” and “stayed in a homeless shelter for four months.”
AI Doomerism’s Rhetoric
The group’s rationale included this claim (published on their account on August 29, 2025): “Humanity is walking off a cliff,” with AGI leading to “ASI covering the earth in datacenters.”
As 1a3orn pointed out, the original “Stop AI” website said we risked “recursive self-improvement” and doom from any AI models trained with more than 10^23 FLOPs. (The group dropped this prediction at some point.) Later, in a (now deleted) “Stop AI Proposal,” the group asked to “Permanently ban ANNs (Artificial Neural Networks) on any computer above 10^25 FLOPS. Violations of the immediate 10^25 ANN FLOPS cap will be punishable by life in prison.”
To be clear, dozens of current AI models were trained with more than 10^25 FLOPs.
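For a sense of scale, a common rule of thumb puts total training compute at roughly 6 FLOPs per parameter per training token. A minimal sketch, using illustrative figures in the range publicly reported for recent frontier models:

```python
# Rough training-compute estimate via the common "6 * N * D" rule of thumb:
# total FLOPs ≈ 6 × (parameter count N) × (training tokens D).
# The model size and token count below are illustrative assumptions.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs using the 6*N*D heuristic."""
    return 6 * params * tokens

flops = training_flops(params=400e9, tokens=15e12)  # hypothetical 400B model, 15T tokens
print(f"Estimated training compute: {flops:.1e} FLOPs")  # ~3.6e+25

PROPOSED_CAP = 1e25  # the cap in the (now deleted) "Stop AI Proposal"
print("Exceeds the proposed cap:", flops > PROPOSED_CAP)  # True
```

By that arithmetic, a single large-scale training run comfortably exceeds the threshold the group wanted to punish with life in prison.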
In a “For Humanity” podcast episode with Sam Kirchner, “Go to Jail to Stop AI” (episode #49, October 14, 2024), he said: “We don’t really care about our criminal records because if we’re going to be dead here pretty soon or if we hand over control which will ensure our future extinction here in a few years, your criminal record doesn’t matter.”
The podcast promoted this episode in a (now deleted) tweet, quoting Kirchner: “I’m willing to DIE for this.” “I want to find an aggressive prosecutor out there who wants to charge OpenAI executives with attempted murder of eight billion people. Yes. Literally, why not? Yeah, straight up. Straight up. What I want to do is get on the news.”
After Kirchner’s disappearance, the podcast host and founder of “GuardRailNow” and the “AI Risk Network,” John Sherman, deleted this episode from podcast platforms (Apple, Spotify) and YouTube. Prior to its removal, I downloaded the video (length 01:14:14).
Sherman also produced an emotional documentary with “Stop AI” titled “Near Midnight in Suicide City” (December 5, 2024, episode #55. See its trailer and promotion on the Effective Altruism Forum). It’s now removed from podcast platforms and YouTube, though I have a copy in my archive (length 1:29:51). It gathered 60k views before its removal.
The group’s radical rhetoric was out in the open. “If AGI developers were treated with reasonable precaution proportional to the danger they are cognizantly placing humanity in by their venal and reckless actions, many would have a bullet put through their head,” wrote Guido Reichstadter in September 2024.
That statement, in screenshot form, appeared in a Techdirt piece, “2024: AI Panic Flooded the Zone Leading to a Backlash.” The warning signs were there:
Also, like in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some are getting radicalized to a dangerous level, playing with the idea of killing AI developers (if that’s what it takes to “save humanity” from extinction).
Both PauseAI and StopAI stated that they are non-violent movements that do not permit “even joking about violence.” That’s a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed individual to cross the line.
In early December 2024, I expressed my concern on Twitter: “Is the StopAI movement creating the next Unabomber?” The screenshot of “Getting arrested is nothing if we’re all gonna die” was taken from Sam Kirchner.
Targeting OpenAI
The main target of their civil-disobedience-style actions was OpenAI. The group explained that their “actions against OpenAI were an attempt to slow OpenAI down in their attempted murder of everyone and every living thing on earth.” In a tweet promoting the October blockade, Guido Reichstadter claimed about OpenAI: “These people want to see you dead.”
“My co-organizers Sam and Guido are willing to put their body on the line by getting arrested repeatedly,” said Remmelt Ellen. “We are that serious about stopping AI development.”
The “Stop AI” event page on Luma lists further protests in front of OpenAI: on January 10, 2025; April 18, 2025; May 23, 2025 (coverage); July 25, 2025; and October 24, 2025. On March 2, 2025, they had a protest against Waymo.
On February 22, 2025, three “Stop AI” protesters were arrested for trespassing after barricading the doors to the OpenAI offices and allegedly refusing to leave the company’s property. It was covered by a local TV station. Golden Gate Xpress documented the activists detained in the police van: Jacob Freeman, Derek Allen, and Guido Reichstadter. Officers pulled out bolt cutters and cut the lock and chains on the front doors. In a Bay Area Current article, “Why Bay Area Group Stop AI Thinks Artificial Intelligence Will Kill Us All,” Kirchner is quoted as saying, “The work of the scientists present” is “putting my family at risk.”
October 20, 2025, was the first day of the jury trial of Sam Kirchner, Guido Reichstadter, Derek Allen, and Wynd Kaufmyn.
On November 3, 2025, “Stop AI”’s public defender served OpenAI CEO Sam Altman with a subpoena at a speaking event at the Sydney Goldstein Theater in San Francisco. The group claimed responsibility for the onstage interruption, saying the goal was to prompt the jury to ask Altman “about the extinction threat that AI poses to humanity.”
Public Messages to Sam Kirchner
“Stop AI” stated that it is “deeply committed to nonviolence” and that “We wish no harm on anyone, including the people developing artificial superintelligence.” In a separate tweet, “Stop AI” wrote to Sam: “Please let us know you’re okay. As far as we know, you haven’t yet crossed a line you can’t come back from.”
John Sherman, the “AI Risk Network” CEO, pleaded, “Sam, do not do anything violent. Please. You know this is not the way […] Please do not, for any reason, try to use violence to try to make the world safer from AI risk. It would fail miserably, with terrible consequences for the movement.”
Rhetoric’s Ramifications
Taken together, the “imminent doom” rhetoric fosters conditions in which vulnerable individuals could be dangerously radicalized, echoing the dynamics seen in past apocalyptic movements.
In “A Cofounder’s Disappearance—and the Warning Signs of Radicalization”, City Journal summarized: “We should stay alert to the warning signs of radicalization: a disaffected young person, consumed by abstract risks, convinced of his own righteousness, and embedded in a community that keeps ratcheting up the moral stakes.”
“The Rationality Trap – Why Are There So Many Rationalist Cults?” described this exact radicalization process, noting how the more extreme figures (e.g., Eliezer Yudkowsky³) set the stakes and tone: “Apocalyptic consequentialism, pushing the community to adopt AI Doomerism as the baseline, and perceived urgency as the lever. The world-ending stakes accelerated the ‘ends-justify-the-means’ reasoning.”
We already have a Doomer “murder cult,” the Zizians, whose story is far more bizarre, and far more extreme, than anything you’ve read here. Hopefully, such things will remain rare.
What we should discuss is the danger of such extreme (and misleading) AI discourse. If human extinction from AI is just around the corner, then, by the Doomers’ logic, all their suggestions are “extremely small sacrifices to make.” Unfortunately, the situation we’re in is one where “Imagined dystopian fears have turned into real dystopian ‘solutions.’”
This is still an evolving situation. As of this writing, Kirchner’s whereabouts remain unknown.
—————————
Dr. Nirit Weiss-Blatt (@DrTechlash) is a communication researcher and the author of the book “The TECHLASH and Tech Crisis Communication” and the “AI Panic” newsletter.
—————————
Endnotes
Don’t confuse StopAI with other activist groups, such as PauseAI or ControlAI. Please see this brief guide on the Transformer Substack. ↩︎
This type of rhetoric wasn’t a one-off. Stop AI’s account also wrote, “Fuck CAIS and @DrTechlash” (CAIS is the Center for AI Safety, and @DrTechlash is, well, yours truly). Another target was Oliver Habryka, the CEO at Lightcone Infrastructure/LessWrong, whom they told, “Eat a pile of shit, you pro-extinction murderer.” ↩︎
Eliezer Yudkowsky, cofounder of the Machine Intelligence Research Institute (MIRI), recently published a book titled “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.” It was heavily promoted, but you can read here “Why The ‘Doom Bible’ Left Many Reviewers Unconvinced.” ↩︎
A federal judge just ruled that computer-generated summaries of novels are “very likely infringing,” which would effectively outlaw many book reports. That seems like a problem.
This isn’t just about AI—it’s about fundamentally redefining what copyright protects. And once again, something that should be perfectly fine is being treated as an evil that must be punished, all because some new machine did it.
But I guess elementary school kids can rejoice that they now have an excuse not to do a book report.
To be clear, I doubt publishers are going to head into elementary school classrooms to sue students, but you never know with the copyright maximalists.
Law professor Matthew Sag highlights how it could have a much more dangerous impact beyond getting kids out of their homework: making much of Wikipedia infringing.
A new ruling in Authors Guild v. OpenAI has major implications for copyright law, well beyond artificial intelligence. On October 27, 2025, Judge Sidney Stein of the Southern District of New York denied OpenAI’s motion to dismiss claims that ChatGPT outputs infringed the rights of authors such as George R.R. Martin and David Baldacci. The opinion suggests that short summaries of popular works of fiction are very likely infringing (unless fair use comes to the rescue).
This is a fundamental assault on the idea/expression distinction as applied to works of fiction. It places thousands of Wikipedia entries in the copyright crosshairs and suggests that any kind of summary or analysis of a work of fiction is presumptively infringing.
Short summaries of copyright-covered works should not impact copyright in any way. Yes, as Sag points out, “fair use” can rescue in some cases, but the old saw remains that “fair use is just the right to hire a lawyer.” And when the process is the punishment, saying that fair use will save you in these cases is of little comfort. Getting a ruling on fair use will run you hundreds of thousands of dollars at least.
Copyright is supposed to stop the outright copying of copyright-protected expression. A summary is not that. It should not implicate copyright in any form, and it shouldn’t require fair use to come to the rescue.
Sag lays out the details of what happened in this case:
Judge Stein then went on to evaluate one of the more detailed ChatGPT-generated summaries relating to A Game of Thrones, the 694-page novel by George R. R. Martin which eventually became the famous HBO series of the same name. Even though this was only a motion to dismiss, where the cards are stacked against the defendant, I was surprised by how easily the judge could conclude that:
“A more discerning observer could easily conclude that this detailed summary is substantially similar to Martin’s original work, including because the summary conveys the overall tone and feel of the original work by parroting the plot, characters, and themes of the original.”
The judge described the ChatGPT summaries as:
“most certainly attempts at abridgment or condensation of some of the central copyrightable elements of the original works such as setting, plot, and characters”
He saw them as:
“conceptually similar to—although admittedly less detailed than—the plot summaries in Twin Peaks and in Penguin Random House LLC v. Colting, where the district court found that works that summarized in detail the plot, characters, and themes of original works were substantially similar to the original works.” (emphasis added).
To say that the less than 580-word GPT summary of A Game of Thrones is “less detailed” than the 128-page Welcome to Twin Peaks Guide in the Twin Peaks case, or the various children’s books based on famous works of literature in the Colting case, is a bit of an understatement.
Yikes. I’m sorry, but if you think that a 580-word computer-generated summary of a massive book is infringing, then we’ve lost the plot when it comes to copyright law. If such a summary really were infringing, copyright itself would need to be radically changed to allow for basic forms of human speech. If I see a movie and tell my friend what it was about, that shouldn’t implicate copyright law, even if it summarizes “the plot, characters, and themes of the original work.”
Sag then ties this to what you can find for countless creative works on Wikipedia:
To see why the latest OpenAI ruling is so surprising, it helps to compare the ChatGPT summary of A Game of Thrones to the equivalent Wikipedia plot summary. I read them both so you don’t have to.
The ChatGPT summary of A Game of Thrones is about 580 words long and captures the essential narrative arc of the novel. It covers all three major storylines: the political intrigue in King’s Landing culminating in Ned Stark’s execution (spoiler alert), Jon Snow’s journey with the Night’s Watch at the Wall, and Daenerys Targaryen’s transformation from fearful bride (more on this shortly) to dragon mother across the Narrow Sea. In this regard, it is very much like the 800-word Wikipedia plot summary. Each summary presents the central conflict between the Starks and Lannisters, the revelation of Cersei and Jaime’s incestuous relationship, and the key plot points that set the larger series in motion.
And, look, if you want to see the chilling effects on speech created by overexpansive copyright law, well:
I could say more about their similarities, but I’m concerned that if I explored the summaries in any greater detail, the Authors Guild might think that I am also infringing George R. R. Martin’s copyright, so I’ll move on to the minor differences.
You can argue that Sag, an expert on copyright law, is kind of joking here, but it’s no actual joke. The mere fact that someone even needs to consider this shows how bonkers and problematic this ruling is.
As Sag makes clear, few people would legitimately think the Wikipedia summary should be deemed infringing, which is why this ruling is so notable. It again highlights how lots of people, including the media, lawmakers, and now (apparently) judges, get so distracted by “but this new machine is bad!” thinking when looking at LLM technology that they completely lose the plot.
And that’s dangerous for the future of speech in general. We shouldn’t be tossing out fundamental key concepts in speech (“you can summarize a work of art without fear”) just because some new kind of summarization tool exists.