Last month, we discussed NVIDIA’s demo video for its forthcoming DLSS 5 technology and the controversy surrounding it. While I’m going to continue to maintain that an injection of nuance is desperately needed in the reaction to AI tools and the like, our comments section largely disagreed with me on that post. That’s cool, that’s what this place is for, and I still love you all.
But this post is not about DLSS 5. Rather, it’s about the video itself and how it was briefly taken down over automated copyright claims thanks to an Italian news channel. Please note that the source material here was written while the video was still down, but it has since been restored.
And now, here we are in April, and NVIDIA’s DLSS 5 announcement trailer is no longer available to watch on YouTube on the company’s official GeForce channel. And no, it’s not because NVIDIA is responding to the feedback and retooling the technology for a re-reveal or re-announcement; it’s now blocked on “copyright grounds.”
A clear mistake, but also one that highlights the limitations of Google’s automated system for YouTube. Apparently, the Italian television channel La7 included footage from the DLSS 5 reveal in a recent broadcast and subsequently claimed copyright over it. From there, essentially every YouTube video containing DLSS 5 trailer footage was flagged as infringing and taken down with the following message: “Video unavailable: This video contains content from La7, who has blocked it in your country on copyright grounds.”
Yes, this was clearly a mistake. But it’s a mistake that I’m frankly tired of hearing about, all while Google does absolutely nothing to iterate on its copyright process and systems to mitigate such mistakes. The examples of this very thing are so legion as to be laughable. Whether due to error or malicious intent, videos that include content from other videos for the purposes of reporting and commentary get copyright-claimed, resulting in takedowns of the source material. It happens all the damned time.
This is almost certainly all automated, which means there are no human eyes looking for an error in the flagging of a copyright violation. It just gets tagged as such and taken down. And, no, the irony is not lost on me that we need human eyes to keep an automated copyright takedown on a video about AI from occurring.
What makes this alarming is that the video was taken down with seemingly no human interaction or input, as it’s clear that NVIDIA not only created DLSS 5, for better or worse, but also the trailer that has been a hot topic of discussion this year. We’re assuming this will be resolved fairly quickly. Still, it will be interesting to see whether YouTube responds to this case and claims that false copyright infringement notices like this are prevalent on the platform.
Google hasn’t been terribly interested in commenting on the plethora of cases like this in the past, so I strongly doubt it will now. Which is a damned shame, honestly, because the company really should be advocating for all of the users on its platform, if not especially those that are negatively impacted by this haphazard process.
But, for now, the video is back, so you can go hate-watch it again if you like.
I’m going to trust that most of our audience will have some idea of what McCarthyism was in the 1950s. To summarize very briefly, it was an anti-communist campaign that spread into an equally anti-leftist campaign throughout the country, with a specific focus on driving supposed communist influences out of major American media, such as radio and Hollywood. This led to a public that was hyper-vigilant in looking for supposed communists everywhere, as well as plenty of false accusations of communist activity purposefully foisted upon people for personal reasons. This rabid, frothy-mouthed era of suspicion became a major stain on America in the 1950s.
I’m watching a version of this begin to take form around artificial intelligence. I know, I know: there are very real dangers and negative outcomes that could come to be from AI. That was true of communism and our Cold War enemy in the Soviet Union as well. My point is not that AI is great all the time and any pushback against it is invalid. Instead, my point is that we’re starting to see what I’ll call McPromptism, where some percentage of the public looks for AI everywhere it can and, if use is suspected, immediately decries it as terrible and demands that people not engage with the supposed user.
And just like McCarthyism, McPromptism gets its accusations wrong sometimes. You can see a version of that in the story of Aspyr’s remastering of old Tomb Raider games and the horrible outfits that were produced for the protagonist, Lara Croft.
Earlier this week we reported on fan reaction to the latest update to the Tomb Raider I-III Remastered collection, in which the game received a new Challenge Mode, while Lara received a suite of new outfits to wear as rewards. And oh wow, they were bad. Comically bad. So bad, in fact, that one of the remaster’s original artists posted on X to distance himself and his colleagues from the dross. Alongside all of this was the suspicion that genAI might have been involved in the fits’ creation, given just how dreadful they looked. Publisher Aspyr has now finally responded to the claims to insist no AI was used at all, instead stating they were created by “our team of artists.” Which raises more questions.
If you want to see a somewhat humorous look at the outfit textures that are the subject of public complaint, here you go.
On the one hand, for someone like me who is not into the anti-AI dogma out there, it is objectively funny for some people to point at bad video game textures and claim they’re so bad because they’re obviously created using generative AI… only to have the company that made them say, “Nuh uh! It was our human employees who made them!” It’s almost Monty-Python-esque, in a way.
But this default among some in the gaming public, assuming “this thing in gaming is bad, so it must have been made using AI!”, is just one more kind of silly that is out there right now. Aspyr doesn’t exactly have a perfect reputation when it comes to remastering games, after all, and it built that reputation long before genAI came along.
It seems clear that this was a case of images being released to promote the remastered game that Aspyr didn’t live up to in the actual game itself. No AI, just human beings not hitting the mark. It happens all the time. Hell, there is even a chance that AI could have done a better job. Not a certainty by any stretch, but a possibility.
But the real takeaway from this otherwise minor episode for me was the McPromptism misfire. If you’re going to rage against the literal machine in the video gaming industry, which I think is the wrong stance to take anyway, at least let it be righteous rage.
We’ve been talking a lot about the use of artificial intelligence lately, for obvious reasons. Many of those conversations have revolved around the video game industry, and I’ve been fairly vocal about pushing back against the “all AI is bad everywhere forever” dogma that I see far too often. There are plenty of folks in our community who don’t agree with me on that, and that’s fine. But if the picture you’re getting is that I’m an AI evangelist, that’s simply not true. There are potentially good uses of AI in my view, as well as a whole lot of potential negative outcomes of its use. I’m not blind to that.
And, in the video game industry specifically, one bit of pushback that seems to be sorely needed is on game developers that use generative AI in their games, fail to say so, and then excuse its use as accidental after the fact. That is becoming as common a refrain from game developers as the laughable excuse in trademark instances that is, “Well, I have to be an aggressive jerk about my trademarks or else I lose them.” Neither is true.
The most recent version of this concerns the hit launch of Crimson Desert. In what has become something analogous to the antiquated practice of golf fans watching tournaments on TV, spotting missed rules violations, and sending them in to the PGA, which I’ve coined McPromptism, new game releases get put under a microscope by people looking to find AI uses within them. Crimson Desert went through this process and, wouldn’t you know it, people found clear uses of AI-generated assets in the game.
The game’s extremely high fidelity and impressive graphics are a big part of the sales pitch, which made it all the more disappointing when players began to come across what appeared to be AI-generated artwork littered throughout the game. In light of the disappointment, developer Pearl Abyss has apologized for including the slop in their game, promising to remove and replace all of it.
“We also acknowledge that we should have clearly disclosed our use of AI,” the Crimson Desert account posted on X. “We are currently conducting a comprehensive audit of all in-game assets and are taking steps to replace any affected content. Updated assets will be rolled out in upcoming patches. In parallel, we are reviewing and strengthening our internal processes to ensure greater transparency and consistency in how we communicate with players moving forward.”
Like I said above, this excuse is getting old. Very old. Game developers and publishers are more than aware at this point that a sizable percentage of the gaming public is very allergic to the use of AI in games, particularly when that use is not acknowledged up front. If placeholder assets generated by AI are to be used at all in the development of a game, it is inexcusable for a developer not to have a process in place to replace them with human-created art before the game is published. That’s sloppy at best, and a lie of an excuse at worst.
Especially because it’s not like there aren’t other options that have nothing to do with AI.
The practice is becoming more common in AAA developer spaces, but critics argue that, the use of AI aside, it’s pretty foolish to use temporary assets that don’t call obvious attention to themselves. In games of such massive scale, BRAT-green blocks that scream “DO NOT USE” are much easier to flag than something approximating the final product.
I’m struggling to come up with a counter-argument to that.
I’m still in a place where I think there are valid uses of AI in gaming development. If a dev or publisher wants to explore those uses and, importantly, is upfront about it, there may be a place for that.
But the excuse of laziness when it comes to stripping AI assets out when their use was not intended is lame and needs to go away.
The polarization over any and all uses of artificial intelligence and machine learning continues. And, to be clear, I very much understand why this is all so controversial. Any new technology that has the chance to be transformative will also necessarily be disruptive and that causes fear. Fear that is not entirely unfounded, no matter your other opinions on the matter. If that’s you, cool, I get it.
I’ll start this off by pointing to the latest edition of the Techdirt podcast in which both Mike and Karl engaged in a fantastic discussion about the use of AI. I’ve listened to it twice now; it’s that good. And, while I found myself arguing out loud with the both of them at certain points during the podcast, despite the fact that neither of them could hear my retorts, it presents a grounded, often nuanced conversation, which we need much more of in this space.
And now, in what might be a subconscious attempt by this writer to commit suicide by comments section, let’s talk about that controversial demo of NVIDIA’s forthcoming DLSS 5 technology. What DLSS 5 does compared with previous versions of the technology is indeed new, but what is not new is the introduction of AI and machine learning into the equation. DLSS 2 and 3 had that already, in the form of pixel reconstruction and frame generation. DLSS 5, however, introduces what is being labeled “neural rendering”, which uses machine learning to alter lighting, environmental detail and, most importantly, character rendering, applied on top of the engine’s 2D image output. Here’s the video demo that got everyone talking.
The backlash to the video was wide, immediate, and furious. There was a great deal of talk about the alteration of artistic intent, about whether this changed what the original developers were attempting to portray when they created the games, and, of course, industry jobs. I want to talk about the major complaint pillars seen across many outlets below, but this backlash also supposedly came with death threats foisted upon NVIDIA employees. I would very much hope we could all at least agree that any threats of that nature are completely inappropriate and absurd.
With that, here is what I’ve seen in the backlash and what I’d want to say about it.
Get your damned AI out of my games!
Perhaps not the most common pushback I saw in all of this, but a very common one. And a silly one, too. As I mentioned above, DLSS versions already used some version of AI and machine learning. That isn’t new. How it’s applied is certainly new, but that isn’t the same as the demand to keep AI entirely out of the video game industry.
And if that’s where you are, go ahead and shake your fist at the clouds in the sky. AI is a tool and, as I’ve now said repeatedly, the conversation we should be having is how it’s used in gaming, not if it’s used. That’s because its use is largely a foregone conclusion and it is an open question as to whether its use will be a net benefit or negative overall to the industry. Dogmatic purists on AI have a stance that is understandable, but also untenable. We’re too far down this road to turn around and go home. And if the tech were able to lower the barriers of entry to the gaming industry, acting as the fertilizer that allows a thousand indie studios to sprout roots, would that really be so bad for the gaming ecosystem?
I can appreciate the purists’ point of view. I really can. I just don’t see where they have a place in the conversation when it comes to gaming.
It overrides artistic intent!
Does it? If it did, then hell yes that’s bad. But if it doesn’t, then this concern goes away entirely.
DLSS 5 is built with options and customizable sliders for game developers. That’s really, really important here. At the macro level, a developer that has decided to use DLSS 5, or has further customized how it’s used in their games, is exercising consent over their products. That should be obvious.
But then we get into really interesting questions of art, the actual artist, and the ownership of that art, because those last two are very different things. As Digital Foundry outlines:
It may even raise consent and other questions surrounding artistic integrity. On site and witnessing the demos in motion, concerns about this seemed less of a problem when the games we saw had been signed off by the studios that made them – the contentious assets we’ve seen, likewise. Nothing from the DLSS 5 reveal released by Nvidia has not been approved by the studios that own those games. But perhaps the issue isn’t just about specific approvals by specific developers on agreed DLSS 5 integrations, but rather the whole concept of a GPU reinterpreting game visuals according to a neural model that has its own ideas about what photo-realism should look like.
While we’ve seen endorsements from Bethesda’s Todd Howard and Capcom’s Jun Takeuchi, to what extent does that consent apply to the entire development team and other artists associated with the production? And by extension, there is also the question of whether now is the right time to launch DLSS 5 at a time when the games industry is under enormous pressure, jobs are on the line and cost-cutting is a major focus in the triple-A space. The technology itself cannot function without the work of game creators – it needs final game imagery to work at all – but the extent to which it could be viewed as a worrying sign of “things to come” cannot be overstated bearing in mind the reactions elsewhere to generative AI.
That strikes me as a valid and interesting ethical question when it comes to the use of this technology, but one that is probably overwrought. Individual artists who work on video games already have their artistic output live at the pleasure of the game developers they contract with. Those developers already can use this game art in all kinds of ways that the individual artist may not have had in mind when creating it, or indeed have even considered such possibilities. DLSS 5 is just one more version of that, with the main difference being that it involves AI making changes to game images. That’s an important thing to consider, sure, but there are cousins to this ethical question that we’ve all come to accept already. This strikes me more as part of the “all AI is bad all the time” crowd finding a foothold in something other than dogma to grab onto.
Developers and publishers own their games. If they want to use DLSS 5 in those games, there is little other than specific work for hire or other contractual stipulations with individual artists that would keep them from implementing it. If artists don’t like that, I completely understand that point of view, but that’s what contract negotiations and language are for.
Bottom line: I have been as vocal as anyone arguing that video games are a form of art for well over a decade now, and I struggle to agree that an optional technology with approved buy-in from game developers and publishers equates to “overriding artistic intent”, writ large.
The faces in these examples look like shit, are “yassified”, or suffer from the uncanny valley effect!
Look, here we’re going to get into matters of opinion. I have to say that when I viewed the demo video myself, I had the opposite reaction. And, yes, this opens me up to claims that I am somehow a massive fan of AI-created pornography (this is where the yassified comments come in), or that I just want all the characters to look “hot” (I’m too old for that shit), or that my older age of 44 means I’ve lost touch with what video games should look like. Despite my genuine respect for the dissenting opinions here, allow me to say this: bullshit.
The caveat to all of this is that the demo revealed very little in the way of this technology working within these games in motion. It’s also certainly true that NVIDIA chose the best potential images to show off its new technology. If the DLSS 5 rendering sucks out loud in a larger in-motion game, or if the images it creates end up being inconsistent throughout gameplay, or if it does just end up looking shitty, then I’ll be right there with you with a torch and pitchfork in hand.
And here’s the other thing to consider with this particular complaint, combined with the previous one about artistic intent: do any of you use visual mods in your games? I do. A ton of them. For a variety of reasons. I have used them to alter the faces and models for games like Starfield and Skyrim, among many others. Do I need to feel bad for altering the artist’s intent? Do I need to apologize for incorporating mods to make characters and environments appear in a way that helps me better connect with the game I’m playing?
Because I’m not going to do either. And I don’t expect you to. Nor do I expect game developers that choose to use this optional technology to beg for forgiveness for their own output.
The hardware demands to run all of this are insane!
Fine, then you’ll get what you want and nobody will be able to use this technology anyway. But I don’t think that will be the case. NVIDIA knows what it will take to run this tech once it leaves the demo stage and goes into production. The idea that they would hype up technology that nobody can use strikes me as unlikely in the extreme.
Conclusion: everyone take a breath
This still strikes me as more of the “all AI is bad” crowd grasping at lots of other things to buttress their pushback than anything else. AI has plenty, plenty of potential pitfalls. Worried about jobs in the gaming industry and elsewhere? Me too! But if you’re not also looking at the potential upsides for the industry, then you’re engaging in dogma, not conversation.
Will DLSS 5 be good? I have no idea and neither do you. Will DLSS 5 alter previously released games in a way that fundamentally alters how we play these games? I have no idea and neither do you. Will it negatively impact the gaming industry when it comes to the number of jobs within it? I have no idea and neither do you.
This was a tech demo. Details on how it works are still trickling out. Most recently, there has been some clarification as to the 2D rendering nature of the technology and what that means for the output on the screen. As an early demo of the technology, feedback is going to be important, so long as it’s informed and reasonable feedback.
The technology may end up being trash and hated for reasons other than “all AI is bad all the time.” If that ends up being the case, I trust the gaming market to work that out for itself. But a lot of the hand-wringing here looks to me to be speculative at best.
There is an old axiom you will have heard before: don’t let the perfect be the enemy of the good. If we wanted to boil this down to a math equation, it might be described as something like: 0 < any positive integer. It’s not a difficult concept to grasp, typically, until you add a dash of near-religious ideology into the equation. And that’s where the anti-AI crowd comes in.
Dustin Hubbard heads up Gaming Alexandria, a site dedicated to the preservation of obscure corners of video game history. Focused less on the actual games themselves, Gaming Alexandria instead focuses its efforts on media surrounding those games, such as manuals, box art, and old gaming journalism outputs. To that end, Hubbard’s group has amassed an impressive number of Japanese magazine scans throughout the years. To make this content useful to researchers elsewhere, he built a low-footprint app to make those scans searchable and, more importantly, to translate them. A Patreon page and subscriptions partially funded all of this.
A day after that project went public, though, Hubbard was issuing an apology to many members of the Gaming Alexandria community who loudly objected to the use of Patreon funds for an error-prone AI-powered translation effort. The hubbub highlights just how controversial AI tools remain for many online communities, even as many see them as ways to maximize limited funds and man-hours.
“I sincerely apologize,” Hubbard wrote in his apology post. “My entire preservation philosophy has been to get people access to things we’ve never had access to before. I felt this project was a good step towards that, but I should have taken more into consideration the issues with AI.”
And this is where we enter the realm of the silly. I’m not some AI evangelist. I fully recognize that there are errors and other problems with AI… and I imagine there always will be, to some extent. AI is not always, or perhaps even mostly, the right tool to use. Nor will it always have benefits that outweigh the problems it creates for us human beings.
But a positive number is greater than zero. This was a tool that suddenly made all of this cultural content accessible to a wider range of people. Before, it was inaccessible to anyone without a high level of knowledge of the Japanese language. Translation errors happen with human translators, too. We need only look at ancient religious texts, and the very real wars started over their translations, to understand that.
Hubbard himself attempted to make this point over the weekend.
Writing on Patreon this weekend, Hubbard said he has long been tinkering with an improved automated OCR and translation process that could help turn more of those magazine scans into useful tools for Western researchers. And when he put Google’s Gemini AI model to the task recently, he said he was “blown away” by the results. While he still recommended using a professional human translator before citing these magazines in any scholarly research, he said the output from the Gemini AI tool “gets you a large percentage of the way there quickly.”
Inspired by those results, Hubbard set to work on a self-described “vibe coded” interface to view the original PDF scans alongside their AI-generated text translations for easy comparison and editing. The result was the Gaming Alexandria Researcher tool, posted to GitHub on Friday and shared with the site’s Patreon backers as a “beta” on Saturday. The tool, which runs locally on Windows, Mac, or Linux, can search, download, and edit Gaming Alexandria’s files from the cloud or sort through local files stored on your own machine.
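To make the shape of that workflow concrete, here is a minimal, entirely hypothetical sketch of an OCR-to-translation-to-search pipeline. To be clear about assumptions: the `translate` function below is a stand-in for the kind of Gemini API call Hubbard describes, and none of this reflects the actual Researcher tool’s code, which I haven’t reviewed.

```python
# Hypothetical sketch: OCR'd magazine text -> machine translation -> searchable index.
# The `translate` function is a stub; the real tool reportedly uses Google's Gemini.

from dataclasses import dataclass


@dataclass
class Page:
    magazine: str
    number: int
    ocr_text: str          # raw OCR output from the scan
    translation: str = ""  # filled in by the translation pass


def translate(text: str) -> str:
    """Stand-in for a machine-translation call (e.g. a Gemini API request).
    Real output would be error-prone, which is the heart of the controversy."""
    return f"[machine translation of: {text}]"


def build_index(pages: list[Page]) -> dict[str, list[Page]]:
    """Translate each page, then build a naive keyword -> pages index."""
    index: dict[str, list[Page]] = {}
    for page in pages:
        page.translation = translate(page.ocr_text)
        for word in page.translation.lower().split():
            index.setdefault(word.strip(".,:[]"), []).append(page)
    return index


def search(index: dict[str, list[Page]], term: str) -> list[Page]:
    """Return every page whose translation contains the search term."""
    return index.get(term.lower(), [])
```

The point of the sketch is only that error-prone searchability is still searchability: swap the stub for a real model call and you get the broad strokes of Hubbard’s workflow, warts and all.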
“This app has been something I never would have dreamed could exist,” Hubbard enthused. “Now I can finally read and enjoy these Japanese magazines I’ve been scanning for years. A large part of that is due to your believing in my work and funding me so thank you so much for that.”
The negative responses he got for all of this are wild. There were calls to boycott the project. Calls to rescind Patreon subscriptions. Max Nichols, a game designer, cancelled his own Patreon membership and decried the project as “worthless and destructive”, likening any output generated using AI-based translations to “looking at history through a clownhouse mirror.”
I would argue that I’d rather get that look than no look at all. I’d also argue that we need to see very specific examples of AI-created translation errors to understand just how grounded these criticisms are in reality, versus being simple overstatement.
Some fans of the site, at least, managed to understand the context here.
For some supporters, though, using machine translations—including ones aided by AI models—is a practical necessity given the size of the task at hand. “There’s no world in which they could ever get hundreds of thousands of pages translated by hand,” game preservationist Chris Chapman wrote on social media. “Error-prone searchability is more useful to more people than none at all.”
“Famitsu alone is over 1,900 issues, each with [a hundred-plus] pages,” journalist and author Felipe Pepe noted. “That’s one magazine from one country. [Human translation] would be ideal, but it’s impossible.”
On the Gaming Alexandria Discord, user asie wrote that people who use tools like Google Lens or DeepL are already using AI-powered OCR and translation tools. At this point, these kinds of tools are “just a fact of reality,” they added.
Again, any positive number is greater than zero. Don’t let the perfect be the enemy of the good. Something is better than nothing.
I don’t know how to explain the negative responses here as anything other than an ideological commitment to disliking anything that even remotely touches upon artificial intelligence. Absolute moral stances certainly have their place, but they sure ought to be used sparingly.
We should all know by now that this iteration of the Trump administration absolutely loves using pop culture imagery, including that of video games, to help message its horrible policies. Want to gloat about ICE terrorizing American cities and generally pissing everyone off when they’re not too busy perforating innocents? Let’s use images from Pokémon and Halo! Want to celebrate the destruction of American health thanks to RFK Jr. being in charge of it? Time to whip together a Stardew Valley meme!
It’s gross, of course. Wrapping these pop culture images around fascism, particularly where real deaths have been a result, is nauseating.
But if you want to make this absolutely as disgusting as possible, you need only to use video game footage to gloat about the body count America is racking up in its war/non-war with Iran.
On March 4, the official White House Twitter account posted a roughly one-minute-long video featuring numerous clips of real military strikes against different Iranian locations and targets. At the very start of the video is a clip from 2023’s Modern Warfare III that shows a player activating an MGB killstreak. This is a hidden killstreak for players who get 30 kills without dying. Once called, the bomb ends the match. The official video was posted with the caption: “Courtesy of the Red, White & Blue.”
This is disgusting. Using video game footage to gloat about the Iranian body count is simply sick. Set aside what you think about this war. Set aside whether you think this administration has any fucking clue what it is doing and what will come next once it’s done dropping its bombs. Set aside the open question of what our goals actually are here, whether we’re going to see American troops on the ground in Iran, or whether this will end up as another American quagmire in the Middle East. None of that is the point here.
This isn’t a fucking game. It’s war, no matter how hard the president and the Republicans in Congress want to pretend otherwise so that they don’t have to do their damned jobs. War is a very serious matter, a sentence that never should need to be written in the first place. Eschewing that level of seriousness by treating this like it’s some kind of a video game and we’re all just trying to earn trophies and badges for our kill counts is fucking sick.
IRAN: At least 1,230 people killed, including 175 schoolgirls and staff killed in a missile strike on a primary school in Minab in the country’s south on the war’s first day, according to the non-profit humanitarian group Iranian Red Crescent Society. It was unclear if the overall death toll included Islamic Revolutionary Guard Corps military casualties.
Here’s an image of the mass graves Iran says it dug in order to put all of those children to their final rest.
I wonder, are those girls included in the body count to get the White House its Xbox achievement?
War is not a game. Treating it like a video game shows that these are deeply unserious people that are not only running our government, but currently prosecuting a war that they don’t want to call a war. The naked cruelty of it all, rather than treating the enemy and, more importantly, the American people with respect, is horrifying.
That they’re doing it in our name, all the more so.
I’ve been talking about the Stop Killing Games movement for some time now, so important is its mission to me. This collection of volunteers focused on video game and cultural preservation is attempting to whip up public support for legislation to achieve those goals. Currently focused in the EU, the campaign is built primarily on legislating the following rules:
Games sold must be left in a functional state
Games sold must require no further connection to the publisher or affiliated parties to function
The above also applies to games that have sold microtransactions to customers
The above cannot be superseded by end user license agreements
If you can find fault in any of the above, tell me what that would be in the comments. I personally see no such flaws. I particularly can’t find them in the context of a present reality in which games that people paid very real money for are ripped from their digital hands because publishers simply stop supporting them (online games) or because they were designed with planned obsolescence included (single player games that sunset when they can’t check in with servers (hi there, NBA2K games!)).
In order to compel the EU Parliament to take up the issue in session, the petition needed to achieve 1 million signatures. That happened last summer. Step 2 in the process is a review by the EU to validate those signatures, to be sure there is no shady fuckery in them. And that just happened, with the petition boasting one of the smallest percentages of invalidated signatures in memory.
Stop Killing Games volunteer Moritz Katzner has shared an update on the popular European Union Citizens’ Initiative to its official subreddit. The EU has successfully verified 1,294,188 of Stop Killing Games’ 1,448,270 signatures, easily clearing the one million minimum count it needed to move forward in the process.
In the comments, user MikeyIfYouWanna calculated that about 89% of the submitted signatures were legitimate. Katzner agreed, estimating that Stop Killing Games has one of the top three lowest failed signature rates among EU Citizens’ Initiatives.
“We’re sitting at around 10%, while the best-performing initiatives tend to fall in the 10–15% range, which puts us firmly in the upper bracket,” Katzner wrote. “Some initiatives see failure rates as high as 20–25% and still manage to get over the line, but it’s worth noting that the overall sample size is quite small, only 11 initiatives.”
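The arithmetic behind those figures holds up, if you want to check it yourself. A quick throwaway sketch using the signature counts quoted above:

```python
# Signature counts from the Stop Killing Games update quoted above.
verified = 1_294_188   # signatures the EU successfully validated
submitted = 1_448_270  # signatures originally submitted

valid_rate = verified / submitted  # fraction that held up under review
failure_rate = 1 - valid_rate      # fraction invalidated

print(f"valid:  {valid_rate:.1%}")   # → valid:  89.4%
print(f"failed: {failure_rate:.1%}") # → failed: 10.6%
```

That lands right on MikeyIfYouWanna’s "about 89%" figure and Katzner’s "around 10%" failure rate.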
Now, I will admit that I am no expert in how the EU legislative process works, nor how it interacts with the laws of its member countries. Over on the Stop Killing Games subreddit, where this signature achievement was announced, several commenters appeared aligned on how this works moving forward. Here is the best of them, from user AShortUsernameIndeed.
There’s a few steps. The EU issues regulations (which are EU-wide laws) and directives (which are guidelines for national legislation). Since this initiative is framed as a consumer rights issue, the most that can come out of it on the EU level is a directive (because the EU does not have direct legislative powers on consumer rights). The actual laws will then be written by the legislatures of each member country, separately. So the steps are:
get the EU commission and parliament to decide to legislate, then
lobby them to get a directive that actually does what the initiative wants, then
lobby the parliaments of all 27 member states to get the directive transformed into laws that actually do what the initiative wants.
That’s a few years of work ahead, in the best case. We’ll see what happens.
That looks correct, from my own poking around. And it does indeed mean that there are both years of work ahead before this turns into actual European laws and millions of lobbying dollars to overcome. But it’s progress, if only incremental.
And while video game preservation has long been important to me, I will admit I never thought I’d see the day when a governmental body such as the EU Parliament would actually take up the issue. Through the power of the internet, a collective appreciation for the preservation of culture, and volunteer work, however, it appears that is exactly what will happen.
In my previous posts about the use of generative AI tools in the video game industry, I have tried to drive home the point that a nuanced conversation is needed here. Predictably, there were many comments of the sort of stratified opinions that I was specifically attempting to avoid, but I always knew they’d be there. And that’s okay. Where there is novelty, there is disruption and discomfort. And, frankly, some of the dangers here aren’t unfounded.
But in the end, I remain of the opinion that generative AI will be a tool used by game developers generally in the future, if not the present. I also still firmly believe that the conversation we should be having is not whether AI should be used in games, but how it should be used.
And people like the CEO of Shift Up in South Korea sure aren’t helping when they insist on the need to use AI by trotting out the Chinese boogeyman.
Will gen AI be part of Stellar Blade 2‘s development? It doesn’t sound entirely outside the realm of possibility after recent comments from developer Shift Up’s CEO. The South Korean game studio is currently working on a sequel to the 2024 sci-fi action game and its boss thinks AI is the only way to compete with the massive development teams coming out of China.
“We devote around 150 people to a single game, but China puts in between 1,000 to 2,000,” Hyung-tae Kim, who also served as director on Stellar Blade, said during a recent conference briefing according to GameMeca (translated via Automaton). “We lack the capacity to compete, both in terms of quality and volume of content.”
Where do I even begin with this nonsense? First, it’s completely devoid of the nuance I was asking for in these kinds of discussions. This is essentially stating that developers can use AI to make up for the massive workforce China can throw at game development. Doing the math on Kim’s own numbers, each of his 150 employees would need AI to make them the equivalent of roughly 7 to 13 Chinese workers. That sounds like you’re looking to stave off hiring by using AI, and you aren’t helping!
It also fails, somehow, to recognize that generative AI can be used in China as well. China isn’t exactly ignoring AI tools, you know, so this arms race makes no real sense.
Finally, it’s just kind of bullshit. Chinese studios have certainly produced some games, some of them quite successful. But when we think about the major players in the video game industry, especially in terms of quality and revenue, China is but a fairly average player on the world scene. Tencent, NetEase, and MiHoYo all crack the top ten in revenue, but the rest of the longer list is filled with studios from the United States, Japan, South Korea, and a handful of other countries. They’re a player in the industry, to be sure. But they aren’t some dominant force that requires special tactics to compete with.
But despite all the above, Shift Up has been both successful and committed to retaining its staff and treating them well.
Was Kim actually worried about rising competition from China, or was he just flexing his geopolitical muscle as Stellar Blade‘s popularity catapults Shift Up into the big time? After all, that game sold millions of copies across console and PC without the help of AI, even as Tencent, NetEase, and other major Chinese publishers flood the market with AAA free-to-play games.
For now at least, Shift Up employees are being well taken care of. Seoul Economic Daily recently reported that all 300 employees at the studio were given AirPods Max, Apple Watches, and a bonus $3,400 to celebrate the company’s profitable 2025. Why no video game consoles? It already gifted PS5 Pros and Switch 2s last year.
That sure doesn’t read like a studio in dire straits due to the scary Big Red Machine or whatever he’s trying to pitch. How about you keep making good games and all will be fine?
Then we can get back to the real, more nuanced conversation about just what place AI has in video game production.
I’m going to start this post off with two rhetorical questions.
Do you believe that the use of AI should be free and unfettered in the video game industry and will certainly and overwhelmingly be a positive good for the industry generally?
Do you believe that AI should be banned and never used in the video game industry because it can only produce slop and result in job loss in the industry generally?
My position is simple: anyone answering “yes” to either of those questions is out of the conversation when I’m involved. Dogmatic approaches like those aren’t right, they’re not smart, they’re not helpful, and they will never produce any progress or interesting discussion. They’re a sort of religious belief pointed at a terrestrial industry, and they make no sense.
And now let me add a rhetorical statement of my own, so that there’s no misunderstanding: every game publisher and developer out there is free to make their own decisions regarding AI, full stop. I’m here to talk, not to make demands.
Now that that’s out of the way, let’s talk about indie publisher Hooded Horse and its “zero AI” policy that it has written into its developer contracts. CEO Tim Bender spoke with Kotaku recently on the topic and he certainly didn’t hold back.
The label he helps run as CEO, Hooded Horse, struck gold after signing the medieval base-builder mega hit Manor Lords, but its library of published games has grown far beyond it in the past two years with releases like the Lego-like tower-defense game Cataclismo, the economic management sim Workers & Resources: Soviet Republic, and the 4X sequel Endless Legend 2. Being strategy games isn’t the only thing they all have in common. They also all adhere to a strict ban on generative AI art.
“I fucking hate gen AI art and it has made my life more difficult in many ways…suddenly it infests shit in a way it shouldn’t,” Bender told me in a recent interview. “It is now written into our contracts if we’re publishing the game, ‘no fucking AI assets.’”
Now, if Bender says this has made his life more difficult, I’m going to choose to believe him. Honestly, I can’t imagine why he’d lie about something like that.
But he’s also clearly answered “yes” to rhetorical question #2 I posted above. And I just don’t understand it as a long-term contractual policy. If AI largely sucks right now in the gaming industry, and I agree there’s a lot of bad out there, that doesn’t mean it will in the future. If AI has the capability to take some jobs in the industry today, that doesn’t mean it can’t create jobs elsewhere in the industry as well. If some applications of AI in the gaming industry carry with them very real moral questions, that doesn’t mean that every use does.
But when you really dig into the stated concerns that have led Bender to a blanket ban on the use of any AI by partner developers, you quickly see that what he actually has is a quality control problem.
“We’ve gotten to the point where we also talk to developers and we recommend they don’t use any gen AI anywhere in the process because some of them might otherwise think, ‘Okay, well, maybe what I’ll do is for this place, I’ll put it as a placeholder,’ right?” continued Bender.
“Like some, people will have this thought, like they would never want to let it in the game, but they’ll think, ‘It can be a placeholder in this prototype build.’ But if that gets done, of course, there’s a chance that that slips through, because it only takes one of those slipping through in some build and not getting replaced or something. […] Because of that, we’re constantly having to watch and deal with it and try to prevent it from slipping in, because it’s cancerous.”
It’s the Larian Studios concept art discussion all over again. Bender doesn’t seem to have an actual problem with developers using AI in developing a game. Instead, it appears he doesn’t want any AI-made product ending up in the finished game. Those are two very different things. But rather than trying to figure out how to QC the developers to make sure the end product is clean of AI, since that seems to be what Bender is after, we get a blanket ban on all AI use everywhere, all the time, by the developers.
Now, to keep things clear, my position is that Bender certainly can do this if he likes. It’s his company, have at it. But when I read this…
“When it comes to gen-AI, it’s not a PR issue, it’s an ethics issue,” Bender said. “The reality is, there’s so much of it going on that the commitment just has to be that you won’t allow it in the game, and if it’s ever discovered, because this artist that was hired by this outside person slipped something in, you get it out and you replace it. That has to be the commitment. It’s a shame that it’s even necessary and it’s a very frustrating thing to have to worry about.”
…I’m left with the impression that I’m listening to someone devoid of nuance reciting a creed rather than fully thinking this through.
AI will be used in gaming. To borrow a phrase, it’s a very frustrating thing to have to even state. It’s tough to get more obvious than that. The question and the conversation, as I keep saying, is about how it will be used, not if it will be used.
And people like Bender have exited that conversation, which is too bad. He’s clearly a good businessman and smart industry guy. We need his voice in the discussion.
Since the earliest days of computer games, people have tinkered with the software to customize their own experiences or share their vision with others. From the dad who changed a game’s male protagonist to a girl so his daughter could see herself in it, to the developers who got their start in modding, games have been a medium where you don’t just consume a product; you participate in and interact with culture.
For decades, that participatory experience was a key part of one of the longest-running video games still in operation: EverQuest. Players had the official client, acquired lawfully from EverQuest’s developers, and modders figured out how to enable those clients to communicate with their own servers and then modify their play experience, creating new communities along the way.
EverQuest’s copyright owners implicitly blessed all this. But the current owners, a private equity firm called Daybreak, want to end that independent creativity. They are using copyright claims to threaten modders who wanted to customize the EverQuest experience to suit a different playstyle, running their own servers where things worked the way they wanted.
One project in particular is in Daybreak’s crosshairs: “The Hero’s Journey” (THJ). Daybreak claims THJ has infringed its copyrights in EverQuest visuals and characters, cutting into its bottom line.
Ordinarily, when a company wants to remedy some actual harm, its lawyers will start with a cease-and-desist letter and potentially pursue a settlement. But if the goal is intimidation, a rightsholder is free to go directly to federal court and file a complaint. That’s exactly what Daybreak did, using that shock-and-awe approach to cow not only The Hero’s Journey team, but unrelated modders as well.
Daybreak’s complaint seems to have dazzled the judge in the case by presenting side-by-side images of dragons and characters that look identical in the base game and when using the mod, without explaining that these images are the ones provided by EverQuest’s official client, which players have lawfully downloaded from the official source. The judge wound up short-cutting the copyright analysis and issuing a ruling that has proven devastating to the thousands of players who are part of EverQuest modding communities.
Daybreak and the developers of The Hero’s Journey are now in private arbitration, and Daybreak has wasted no time in sending that initial ruling to other modders. The order doesn’t bind anyone who’s unaffiliated with The Hero’s Journey, but it’s understandable that modders who are in it for fun and community would cave to the implied threat that they could be next.
As a result, dozens of fan servers have stopped operating. Daybreak has also persuaded the maintainers of the shared server emulation software that most fan servers rely upon, EQEmulator, to adopt terms of service that essentially ban any but the most negligible modding. The terms also provide that “your operation of an EQEmulator server is subject to Daybreak’s permission, which it may revoke for any reason or no reason at any time, without any liability to you or any other person or entity. You agree to fully and immediately comply with any demand from Daybreak to modify, restrict, or shut down any EQEmulator server.”
This is sadly not even an uncommon story in fanspaces—from the dustup over changes to the Dungeons and Dragons open gaming license to the “guidelines” issued by CBS for Star Trek fan films, we see new generations of owners deciding to alienate their most avid fans in exchange for more control over their new property. It often seems counterintuitive—fans are creating new experiences, for free, that encourage others to get interested in the original work.
Daybreak can claim a shameful victory: it has imposed unilateral terms on the modding community that are far more restrictive than what fair use and other user rights would allow. In the process, it is alienating the very people it should want to cultivate as customers: hardcore EverQuest fans. If it wants fans to continue to invest in making its games appeal to broader audiences and serve as testbeds for game development and sources of goodwill, it needs to give the game’s fans room to breathe and to play.
If you’ve been a target of Daybreak’s legal bullying, we’d love to hear from you; email us at info@eff.org.