Larian Studios The Latest To Face Backlash Over Use of AI To Make Games

from the too-much-dogma dept

I guess I’m a masochist, so here we go. In my recent post about Let It Die: Inferno and the game developer’s fairly minimal use of AI and machine learning platforms, I attempted to make the point that wildly stratified opinions on the use or non-use of AI were making actual nuanced conversation quite difficult. As much as I love our community and comments section — it’s where my path to writing for this site began, after all — it really did look like some folks were going to try as hard as possible to prove me right. Some commenters treated the use of AI as essentially no big deal, while others were “Never AI-ers,” indicating that any use, any at all, made a product a non-starter for them.

Still other comments pointed out that this studio and game are relatively unknown. The game was reviewed poorly for reasons that have nothing to do with its use of AI, as I myself pointed out in the post. One commenter even suggested that this might all be an attention-grabbing ploy to propel the studio and game into the news, small and unknown as they are.

Larian Studios is not unknown. They don’t need any hype. Larian is the studio that produces the Divinity series, not to mention the team that made Baldur’s Gate 3, one of the most awarded and best-selling games of 2023. And the studio’s next Divinity game will also make some limited use of AI and machine learning, prompting a backlash from some.

Larian Studios is experimenting with generative AI and fans aren’t too happy. The head of the Baldur’s Gate 3 maker, Swen Vincke, released a new statement to try to explain the studio’s stance in more detail and make clear the controversial tech isn’t being used to cut jobs. “Any [Machine Learning] tool used well is additive to a creative team or individual’s workflow, not a replacement for their skill or craft,” he said.

He was responding to a backlash that arose earlier today from a Bloomberg interview which reported that Larian was moving forward with gen AI despite some internal concerns among staff. Vincke made clear the tech was only being used for things like placeholder text, PowerPoint presentations, and early concept art experiments and that nothing AI-generated would be included in Larian’s upcoming RPG, Divinity.

Alright, I want to be fair to the side of this that takes an anti-AI stance. Vincke is being disingenuous at best here. Whatever use is made of AI technology, even limited use, still replaces work that would be done by some other human being. Even if you’re committed to not losing any current staff through the use of AI, you’re still getting work product that would otherwise require you to hire and expand your team. There is obviously a serious emotional response to that concept, one that is entirely understandable.

But some limited use of AI like this can also have other effects on the industry. It can lower the barrier to starting new studios, which will then hire more people to do the things that AI sucks at, or the things where we really don’t want AI involved. It can make indie studios faster and more productive, allowing them to compete all the more with the big publishers and studios out there. It can create faster output, meaning industries adjacent to developers and publishers might have to hire and expand to accommodate the additional output.

All of this, all of it, relies on AI being used in narrow areas where it can be useful, on real human beings working with its output to make it actual art rather than slop, and on the end product being a good product. Absent those three things, the Anti-AI-ers are absolutely right and this will suck.

But the lashing that Larian has been getting is divorced from any of that nuance.

Vincke followed up with a separate statement on X rejecting the idea that the company is “pushing hard” on AI.

“Holy fuck guys we’re not ‘pushing hard’ for or replacing concept artists with AI.

We have a team of 72 artists of which 23 are concept artists and we are hiring more. The art they create is original and I’m very proud of what they do. I was asked explicitly about concept art and our use of Gen AI. I answered that we use it to explore things. I didn’t say we use it to develop concept art. The artists do that. And they are indeed world class artists.

We use AI tools to explore references, just like we use google and art books. At the very early ideation stages we use it as a rough outline for composition which we replace with original concept art. There is no comparison.”

Yes, exactly. There are uses for this technology in the gaming industry. Pretending otherwise is silly. There will be implications for jobs at existing studios due to its use. Pretending otherwise is silly. AI use can also have positive effects on the industry and the workers within it overall. Pretending otherwise is silly and ignores all the technological progress that came before we started putting these two particular letters together (AI).

And, ultimately, this technology simply isn’t going away. You can rage against this literal machine all you like, it will be in use. We might as well make the project about influencing how it’s used, rather than if it’s used.

Companies: larian studios


Comments on “Larian Studios The Latest To Face Backlash Over Use of AI To Make Games”

9 Comments
Thad (profile) says:

And, ultimately, this technology simply isn’t going away. You can rage against this literal machine all you like, it will be in use.

GenAI is a bubble, it is going to burst, and that is going to have a significant impact on its viability going forward.

It is possible that some limited uses of genAI, like the ones mentioned in this story, will continue. But it is not inevitable. Pretending that a short-term trend is a 100% reliable predictor of where technology is headed in the future is silly.

Space5000 (profile) says:

Artist and good idea exploration

I believe the current direct use of gen AI is mainly bad because it uses copyrighted images, and despite the possibility of transformative use, direct usage is too risky. I’ve heard some outputs end up as near-identical copies, which creates a massive risk. Plus, this usage without the artists’ permission does help replace them.

However, when it comes to using the tool to explore ideas and then having actual humans make their own creative take on the idea, this seems morally equivalent to the long history of thousands of artists downloading copyrighted images without permission, editing and transforming them privately, and getting good new ideas from them, using computer tools that aren’t 100% human crafting either.

The only time I could find this usage bad in terms of the ethics of permission and usage is if using it fuels the program itself in a way that somehow benefits malicious use by other people, but I don’t know if it works like that. Think of a person visiting a page of stolen artwork: while taking inspiration from the work itself is harmless, the interaction with the infringer gives the infringer more demand, which is bad.

Despite all that, some of the arguments from some of the anti-genAI folks are rotten to the core: based on fallacies, reeking of massive hypocrisy, and built on made-up moral ideology that goes against rights that actually exist.

One notable tweet argued that merely using AI, no matter the purpose, is bad just because it was built on works used without permission, then cited moral rights and copyright. One person pointed out that people use images without permission all the time outside of genAI, but the anti-genAI person said something along the lines of “Uhh, that’s different because I like that there is a human connection in how you get inspired”… despite the fact that the foundations of copyright and moral rights make no such distinction, let alone the fact that Photoshop edits are already less “human” in some ways.

Another horrible argument, and a dangerous one too, came from arvalis (the guy who makes “real life” pokemon). He argued that if actual human beings make 100 percent human-made art influenced by an idea that originally came from genAI, then the final product still counts as using generative AI, just because it influenced them at some point… even though the final work is based on human experience and human-made art in the end. The same person argued you are not using copyrighted work if you make something from scratch (without AI), which makes him hypocritical too. This argument implies that artists who gave all their blood to their work have wasted their time, that their hard work counts for nothing, the moment it is “tainted” by some AI influence in the original inspiration. This was one of the most disgusting arguments I’ve ever seen in probably my life, and it’s sad it came from him. If he had only wanted to argue that using genAI is bad because it creates a distraction from finding concept artists (though there are some problems with that argument) and left it at that, I wouldn’t be as upset.

The argument that it creates a distraction is flawed too, because public domain works and lawfully inspired works also create distractions, as does having a wildly creative brain drawing on indirect memories of works whose creators’ names you don’t remember.

Another argument I’ve seen is that we don’t need genAI to make good work. That misses the fact that it can still help with creative ideas in some cases, and faster too. So that is another weird argument.

There are real concerns about generated AI art, but some of these anti-generated-AI-art folks have scraped so far down the barrel that they’re promoting harassment mobs against, checks notes, actual human artists. It’s gone to the point of telling other artists that their hard work doesn’t count, or that doing something no different from long-standing traditions is bad just because a robot helped them. This isn’t ethical. This isn’t fighting for artists; it’s just discouraging artists like her, and others, from making their own interpretations and being creative because of how they got some of their ideas.

Current AI art has a lot of problems, and I would rather it only used lawful public domain material and lawfully licensed AI art, with a credit list for each result, but some of these people are out of their minds and are not exactly ethical themselves. Like wow.

Arianity (profile) says:

We might as well make the project about influencing how it’s used, rather than if it’s used.

Sure, but how? I’m not sure you can (hence the horse armor joke). It’s going to be like other tools: Does it make more money (by making a bigger/better product, shaving cost, etc)? Then the industry will move towards it. There will be exceptions, but they will be niche.

I do think a nuanced approach is best, but consumer behavior is hard (impossible?) to make nuanced. We can’t even get the industry to behave when it comes to things that hurt consumers/workers, like crunch, predatory pricing, sexual harassment of female employees, etc. Heck, we already can’t even get companies to use ethically sourced training data to begin with. And I don’t know if you can regulate nuanced use.

To be clear, I don’t think you can stop it. I think maximum outrage at most gets you a slightly larger speedbump. We’re going to get whatever is market optimal regardless of whether it’s good for consumers/workers or not. There’s a reason big-time execs are positively giddy about AI, and it’s not because of indie competition. Whatever influence we have is subordinate to the mighty dollar.

All of this, all of it, relies on AI to be used in narrow areas where it can be useful, for real human beings to work with its output to make it actual art versus slop,

One thing I worry about with concept art specifically is how it could anchor things. An analogy I’ve seen used is that it’s like watching a movie based on a book, and then going back to read the book. The movie will tend to heavily influence how your brain pictures the book. We’re kind of seeing this in other places already: people who use LLMs are starting to pick up speech mannerisms from them.

Rocky (profile) says:

Whatever use is made of AI technology, even limited use, still replaces work that would be done by some other human being. Even if you’re committed to not losing any current staff through the use of AI, you’re still getting work product that would otherwise require you to hire and expand your team.

Soo, the record labels and RIAA et al were right when they said every copy is a lost sale? Right?

Timothy, you are using exactly the same reasoning as they do and to that I say: FUCK YOU! Do better.
