Game Publisher Bans Working With Devs That Use Any AI, Rather Than Banning Bad Uses Of AI

from the misfire dept

I’m going to start this post off with two rhetorical questions.

  1. Do you believe that the use of AI should be free and unfettered in the video game industry and will certainly and overwhelmingly be a positive good for the industry generally?
  2. Do you believe that AI should be banned and never used in the video game industry because it can only produce slop and result in job loss in the industry generally?

My position is simple: anyone answering “yes” to either of those questions is out of the conversation when I’m involved. Dogmatic approaches like those aren’t right, they aren’t smart, they aren’t helpful, and they will never produce any progress or interesting discussion. They’re a sort of religious belief pointed at a terrestrial industry, and they make no sense.

And now let me add a rhetorical statement of my own, so that there’s no misunderstanding: every game publisher and developer out there is free to make their own decisions regarding AI, full stop. I’m here to talk, not to make demands.

Now that that’s out of the way, let’s talk about indie publisher Hooded Horse and its “zero AI” policy that it has written into its developer contracts. CEO Tim Bender spoke with Kotaku recently on the topic and he certainly didn’t hold back.

The label he helps run as CEO, Hooded Horse, struck gold after signing the medieval base-builder mega hit Manor Lords, but its library of published games has grown far beyond it in the past two years with releases like the Lego-like tower-defense game Cataclismo, the economic management sim Workers & Resources: Soviet Republic, and the 4X sequel Endless Legend 2. Being strategy games isn’t the only thing they all have in common. They also all adhere to a strict ban on generative AI art.

“I fucking hate gen AI art and it has made my life more difficult in many ways…suddenly it infests shit in a way it shouldn’t,” Bender told me in a recent interview. “It is now written into our contracts if we’re publishing the game, ‘no fucking AI assets.’”

Now, if Bender says this has made his life more difficult, I’m going to choose to believe him. Honestly, I can’t imagine why he’d lie about something like that.

But he’s also clearly answered “yes” to rhetorical question #2 I posted above. And I just don’t understand it as a long-term contractual policy. If AI largely sucks right now in the gaming industry, and I agree there’s a lot of bad out there, that doesn’t mean it will in the future. If AI has the capability to take some jobs in the industry today, that doesn’t mean it can’t create jobs elsewhere in the industry as well. If some applications of AI in the gaming industry carry with them very real moral questions, that doesn’t mean that every use does.

But when you really dig into the stated concerns that have led Bender to a blanket ban on the use of any AI by partner developers, you quickly understand that his actual concern is quality control.

“We’ve gotten to the point where we also talk to developers and we recommend they don’t use any gen AI anywhere in the process because some of them might otherwise think, ‘Okay, well, maybe what I’ll do is for this place, I’ll put it as a placeholder,’ right?” continued Bender.

“Like some, people will have this thought, like they would never want to let it in the game, but they’ll think, ‘It can be a placeholder in this prototype build.’ But if that gets done, of course, there’s a chance that that slips through, because it only takes one of those slipping through in some build and not getting replaced or something. […] Because of that, we’re constantly having to watch and deal with it and try to prevent it from slipping in, because it’s cancerous.” 

It’s the Larian Studios concept art discussion all over again. Bender doesn’t seem to have an actual problem with developers using AI in developing a game. Instead, it appears he doesn’t want any AI-made product ending up in the finished game. Those are two very different things. But rather than trying to figure out how to QC the developers to make sure the end product is clean of AI, since that seems to be what Bender is after, we get a blanket ban on all AI use everywhere, all the time, by the developers.

Now, to keep things clear, my position is that Bender certainly can do this if he likes. It’s his company, have at it. But when I read this…

“When it comes to gen-AI, it’s not a PR issue, it’s an ethics issue,” Bender said. “The reality is, there’s so much of it going on that the commitment just has to be that you won’t allow it in the game, and if it’s ever discovered, because this artist that was hired by this outside person slipped something in, you get it out and you replace it. That has to be the commitment. It’s a shame that it’s even necessary and it’s a very frustrating thing to have to worry about.”

…I’m left with the impression that I’m listening to someone devoid of nuance reciting a creed rather than fully thinking this through.

AI will be used in gaming. To borrow a phrase, it’s a very frustrating thing to have to even state. It’s tough to get more obvious than that. The question and the conversation, as I keep saying, is about how it will be used, not if it will be used.

And people like Bender have exited that conversation, which is too bad. He’s clearly a good businessman and smart industry guy. We need his voice in the discussion.

Companies: hooded horse


Comments on “Game Publisher Bans Working With Devs That Use Any AI, Rather Than Banning Bad Uses Of AI”

Anonymous Coward says:

Re:

Because people don’t want it. The difference between a random app and a 30-60 buck game is that you paid way more for one, and it is way, way, way more visible. A game is more like your bank account: you will notice very easily and very quickly if the money doesn’t add up.

https://www.pcgamer.com/games/fighting/fatal-fury-fans-accuse-new-city-of-the-wolves-trailer-of-being-filled-with-ai-slop-all-that-saudi-money-and-cant-pay-someone-a-couple-bucks-to-make-real-s-t/

This comment has been deemed insightful by the community.
Strawb (profile) says:

Re: Re:

People don’t want slop. Like OP says, if it’s good and not slop, I doubt they would care that much.

To use an analogy, a lot of gamers don’t like Unreal Engine 5 because many developers use it in such a way that the game runs like shit.
But two of the biggest indie hits last year, ARC Raiders and Clair Obscur: Expedition 33, were made in Unreal Engine 5.

Everything comes down to where and how a tool is used.

Anonymous Coward says:

Re: Re:

Because people don’t want it.

Okay. Why’s that our problem?

Computer history is full of developers building stuff that “nobody” wanted. Some of them were okay playing around with technology for the amusement and education of themselves and maybe a few friends. Some became accidentally rich; others wanted to be rich, and hired businesspeople to figure out what people would pay for.

Companies come and go, and with games, there have always been some pushing shitty games. Unless you’re an investor in such a company, I don’t see why it’s worth any more of your attention than “Color a Dinosaur”.

This comment has been deemed insightful by the community.
Anonymous Coward says:

How about this.

Stop writing puff pieces on an issue you don’t understand. Instead go fucking interview these people or go and experience working with AI at scale.

What you are fucking missing is that you don’t have 20 people sending you AI garbage you have to waste time sifting through.

What you are missing is that the buyers of video games DON’T WANT AI SHIT and they have made it very clear.

What you don’t get is that for a small company, dealing with AI junk, possible copyright lawsuits, and bad PR isn’t worth it, because the costs far outweigh any possible benefits.

We are years in at this point and every single claim of 100x, 10x, or even 2x has resulted in… nothing. Companies aren’t receiving a return on investment from AI, and at best in most cases it has just resulted in worse products (microslop) or in the burden being shifted.

“But rather than trying to figure out how to QC the developers to make sure the end product is clean of AI, since that seems to be what Bender is after, we get a blanket ban on all AI use everywhere, all the time, by the developers.”

For the love of god. Go work at a company. Or, go ask for 100 submissions, and see if you can come up with a foolproof way to root out all AI usage. Even when you have the money this would be a near-impossible task and, more importantly, it would COST MORE THAN ANY BENEFITS.

Good freaking lord. You might as well be telling a woman she isn’t handling her period correctly for all you understand of the realities of the issues.

Anonymous Coward says:

Re:

The problem is, not even the TD authors are immune to the main problem with these systems: they are fundamentally extremely impressive whenever they are catering to someone who does not know much about the subject to which they are being applied, but fall apart when observed by an expert on the topic. (Which frankly suggests what may be the ever-elusive use case for them: filtering out candidates who have falsified their credentials. But I digress.)

They fall apart as image generators when their output is presented to someone who understands composition or colour theory. They fall apart as information retrieval tools when the human user knows the subject matter. They fall apart as stenographers when the output is simply read back alongside listening to the original input audio.

They really impress people whose job is not to be experts in the fields to which they are being applied, but rather to have enough of an outside, cursory understanding of the principles to be able to talk about them. Middle managers, many kinds of journalists, and the like.

Anonymous Coward says:

Re: Re:

The main issue with AI is that “it can theoretically be used for anything”. And that sets off very questionable claims of increased efficiency among leaders of companies.

But the salesmen of AI (not generally the developers!)… Doing a deal with many modern AI sellers is often like doing a deal with the devil: a number of unforeseen consequences crop up, and the use case originally envisioned has to be shrunk from “theoretically anything” to whatever training data, procedures, laws, and morals allow. Plus it will take years of iterations before an appreciable quality can be achieved, let alone human-made quality!

AI is coming, and denying it forever and in all cases is a bad idea. But the “get rich quick” schemes being sold and the current quality in most applications are killing many reasonable uses in the future, because of the blowback created by the lies it was sold on!

Stephen T. Stone (profile) says:

Re: Re: Re:

AI is coming, and denying it forever and in all cases is a bad idea. But the “get rich quick” schemes being sold and the current quality in most applications are killing many reasonable uses in the future, because of the blowback created by the lies it was sold on!

Then maybe the evangelists who love to keep trying to manufacture widespread acceptance of generative AI should’ve kept their fucking mouths shut a little more than they did. They have no one to blame for the bubble bursting except themselves.

Anonymous Coward says:

Re: Re:

“I can’t refute any of the statements so I will just dismiss you entirely” – bo knows, known pedophile and grok user

Funny thing is, I know the subject well, given that I manage and deploy AI and ML systems for an enterprise, and configure and set up AI tooling.

None of it has even resulted in actual profit.

Anonymous Coward says:

Re: Re: Re:2

Irrelevant.

If the corporations aren’t seeing productivity increases from the tech over a long enough period, as determined by managers like this guy, they’ll start asking or demanding that vendors stop raising prices to bundle in this tech to hide that it is a red-line revenue item for both the ML devs and the customers.

Anonymous Coward says:

Re:

Frankly, that would be a better use of Tim’s time than writing these articles as an aloof, pedantic herald of the technology.

He should take his skills for 100 days and see the reality of the dev cycle under non-disclosure agreements, doing nothing but talking to the managers experiencing slop creeping in and the social media consequences for indies and medium-sized AA devs.

THEN he can do informed writing about a tech that is conceptually fine but is consistently misused, and whose math doesn’t add up for what the backend infrastructure costs versus how they can make revenue.

OpenAI is about to enshittify ads into ChatGPT, which should tell you all you need to know about the unholy market crash coming when institutional investors get sick of the math not adding up, of more data centers and hardware than they need sitting in warehouses not fully built, and of customers being more confused and angry about generative AI than considering it an asset built into consumer and dev workflows.

Anonymous Coward says:

bad argument

There’s a reason the “all AI is shit” response is often an overwhelming 90%: a lot of parts of AI are basically shit. You don’t hear as much “parts of AI are good” because those people don’t rely much on Copilot or the shitty, limited versions of chat that exist to harvest everyone’s input and make public any information you have it analyze, to the detriment of everyone.

There are uses for AI, but ChatGPT, Copilot, and Gemini are not it. Not because they’re AI, but because they are marketing and a hamstrung set of functionality. Probably known VC garbage being peddled, intuitively.

The best case is a self-hosted chat instance, which has hyper-inflated pricing lately. Which shouldn’t be shocking as to why it doesn’t happen often.

This comment has been deemed insightful by the community.
VJGoh says:

As a game dev

He’s 100% correct.

There’s a saying in our industry (and probably a few others): if you want to make sure something ships, mark it as temporary. I have shipped SO MANY temporary fixes. So many things that weren’t supposed to be released were released. Bioware (a company I used to work for) once shipped a text file with the text “ass plugging cum bubble” in it, because it was random text that was in a text file that got shipped out and nobody thought to check.

If your processes aren’t good enough to ensure that temporary assets never make it out—and to be 100% clear: nobody’s are—then you should avoid them and the hassle they come with. Particularly when it comes to art assets.

Indeed, YOU’RE the one with the binary position. He didn’t say that NO AI was allowed, he said that no AI ART was allowed. He didn’t say AI coding agents weren’t allowed, or that AI anything else was excluded.

You’ve rounded up a perfectly reasonable position to “I say all AI is bad”. That’s on you.

You (We, Us) says:

Re:

Quote from above comment
“He didn’t say that NO AI was allowed, he said that no AI ART was allowed. He didn’t say AI coding agents weren’t allowed, or that AI anything else was excluded.”

Quote from article
““It is now written into our contracts if we’re publishing the game, ‘no fucking AI assets.’””

Did you think they spelled “art” wrong when they typed “assets”?

Anonymous Coward says:

Re: As a gamer

I am 100% certain that someone (probably Paradox) is leveraging the knowledge behind current AI tech to build a better enemy AI for their 4X or grand strategy games, and I welcome this. Shit, I’d happily upload all of my games to the company if it would help; last-gen enemy AI is atrocious in every game I’ve tried. If the CPU can beat me without an 800% economy bonus, I would be ecstatic.

Anonymous Coward says:

Re: Re:

They are likely trying. IIRC, they are already using some old-school AI trickery during development (self-learning techniques, etc.), but as with everything AI currently, developing a well-functioning AI for a specific reactive purpose will take years, and it will suffer some degree of setback with every new feature. It is absolutely possible to develop AI-generated opponents, but whether it is feasible to get to a sufficient state in a modern game development environment like Paradox’s is the question. AI can easily become unsustainable, economically and in terms of the time needed for development.

Candescence (profile) says:

It’s a multifaceted problem, really – not only do you need to actually be sure that something is AI-generated or not, which can be hard to do sometimes, but if you just wanna use AI-generated materials as placeholders, you really want to be absolutely sure they’re not in the final product. (Personally for the latter I would just have a big fat “PLACEHOLDER” folder that all AI-generated materials should go into that should be completely deleted by the time the game goes gold.) There’s also the potential ethical issues at play.

In that vein, I can see why Tim Bender would rather not deal with any of that shit and be able to keep things relatively simple if something generated does somehow end up in a project.

Drew Wilson (user link) says:

Re:

I think there is a far easier way to handle placeholders. Back in the day, game developers would use a flat bright pink texture on anything that didn’t have final art. If a bright pink texture was found, it was very easy to say it was not meant to be there and that the developer needed to replace it. It’s ugly as heck, but being pretty isn’t the job of a placeholder; its job is to get the developer’s attention and remind them to replace it with something proper.

As for text, it’s just a case of thinking of something that can be searched for in the code that would not otherwise be found. I think Super Mario RPG did that in development by using a phrase something along the lines of “go world!”

Either way, I honestly do not see why any developer would want to use AI to create placeholder assets in a game. All you are doing is making it much harder to detect and replace before the final product is released. You’re just begging to get caught using AI at that point.

Anonymous Coward says:

Re: Re:

They want to use it because they don’t actually care about AI going out the door, and they want a placeholder that looks more finished to impress their managers.

Which tbh, i can’t entirely blame them, but if the goal is not to ship AI assets, the only guaranteed solution is definitely just not to use them as placeholders in the first place. I just get why that’s not the goal for a lot of studios.

Arianity (profile) says:

Re:

(Personally for the latter I would just have a big fat “PLACEHOLDER” folder that all AI-generated materials should go into that should be completely deleted by the time the game goes gold.)

While that is a start, from what I’ve heard, many companies already do this, and stuff still slips through. All it takes is one contractor, or someone working at home late on a deadline, etc.

It’s part of why people use standardized stuff like Lorem Ipsum, which you can at least CTRL+F. And even that gets missed sometimes.

Candescence (profile) says:

Re: Re:

Diligent commit checking helps with that, at least, and depending on the engine it’s way easier to know what’s still being used than it used to be. Unreal Engine lets you see what each thing is referencing or being referenced by, and if you delete something the engine will warn you if it’s being referenced by anything else in the project. You’ll know if something’s actually being used if you try to delete the entire placeholder project and if that warning pop-up occurs.

But yeah, it is one of the biggest pains in the ass when it comes to project management.

Seamus Waldron says:

Its how you use the tool, not the tools fault

AI coding is happening; it won’t be stopped. For one-man bands and small teams, it allows you to punch above your weight and focus on the product, not the plumbing.

Anti-AI is the modern Luddite. The sweep of new AI tools is the new loom. Embrace it or be left behind, that is the reality we are in and there is no turning back.

Anonymous Coward says:

Re:

Let me start with this crap about the innocence of tools.
Guns are tools: a specific subset called ‘weapons.’ The purpose of these specific tools is to hurt and kill. It’s not a matter of who uses them or how, that doesn’t change that killing is what they’re designed for, and so sane countries regulate them appropriately.
So.
With that in mind.
What, exactly, is generative AI for?
From my perspective, it was made not to help, but to devalue human artists and their work.
But feel free to disagree.
Tell me: what do you really think it was made to do?

Anonymous Coward says:

Re: Re:

The cynic in me says, like MOST of these technologies, it was made WITHOUT any functional use cases in mind, and now as the companies find they need to make trillions of dollars to break even and have no path to liquidity, they are falling over themselves in search of a problem to which the “solution” they have built can be applied…

Anonymous Coward says:

Re:

The luddites were right: they saw machines destroying their standard of living, and then the machines did indeed destroy their standard of living. The decisions they made in the context of the world they were living in were perfectly rational.

It’s only with hindsight that it’s clear industrialization was a benefit (or was it? Climate change may yet prove us wrong)

AI is also not yet a loom: it comes nowhere close to the quality and consistency of a professional, whereas a loom did (or even exceeded). And we certainly don’t yet have hindsight to say that it was a great benefit to society, either.

Stephen T. Stone (profile) says:

Re: Re: Re:

The Luddites rioted on the side of people. Their problem wasn’t with technology, but with capitalists⁠—i.e., bosses and executives who control the means of production⁠—using technology to replace people entirely. To put it in modern terms: They were fine with technology that helped them do a job better, but not with being iced out of a job because their boss thought replacing them with a robot would be more “cost efficient” than paying people a living wage.

And if I had to wager a guess, I’d bet on Luddites being okay(-ish) with technology meant to replace humans in dangerous, life-threatening tasks where such technology would be better suited for those purposes. Generative AI is…not that.

This comment has been deemed insightful by the community.
Drew Wilson (user link) says:

Re:

Anti-AI is the modern Luddite. The sweep of new AI tools is the new loom. Embrace it or be left behind, that is the reality we are in and there is no turning back.

If AI didn’t suck so bad, you might have a point. The problem is, AI sucks so hard, few people want it.

AI companies have gotten to the point of desperately trying to give it away for free to encourage people to use it, by cramming it into literally everything that is useful. WordPress? Shove AI into it. Microsoft Windows? Shove AI into users’ faces. Vehicles? Shove AI into those too.

AI companies have been pressured for over a year by their investors to show that developing AI is worth the investment dollars. The problem is that it isn’t, and companies are freaking out that those investment dollars are going to disappear at a faster rate if they don’t show that adoption rates are as advertised in their lofty, ambitious announcements.

This comment has been deemed insightful by the community.
Arianity (profile) says:

They’re a sort of religious beliefs

So are the in-between answers. Ultimately they all go back to values of how you balance different priorities. If you value the human expression part above all else, you’re going to get a very different answer than if you see it as more of a tool. No view is inherently right; they’re fundamentally subjective, but they do involve things like ethics/morality, aesthetics, etc. Same calculus, different weights.

But he’s also clearly answered “yes” to rhetorical question #2 I posted above.

Eh, not really? There’s a bunch of different ways to get to his position without answering “yes”. Even reading through the whole article, he doesn’t seem to say anything at all about the industry as a whole, just how he deals with it.

But rather than trying to figure out how to QC the developers to make sure the end product is clean of AI, since that seems to be what Bender is after, we get a blanket ban on all AI use everywhere, all the time, by the developers.

Reading the article does not suggest it’s just the end product that is the concern here, e.g. the moral issues surrounding tools trained on plagiarizing other people’s work.

That said, the experience for a lot of people (AI enjoyers and haters alike) is that there is fundamentally no way to guarantee QC. The nature of it is that you risk something slipping through, because there are no foolproof post-facto AI detection methods. The difference is how willing people are to tolerate inevitable slip-ups, not whether they tried to find a QC method. Even with a ban (which ends up being a form of QC, mind you), it’s not 100%.

AI will be used in gaming.

And paintings will be mass-produced by robots. Doesn’t mean a painter can’t still choose to hone his craft because he appreciates it as an art form for expression. To this day, there are artists who do not use any electronic assistance in their art. Part of the “how it will be used in the industry” conversation is how/when/why some opt out.

This comment has been deemed insightful by the community.
Bloof (profile) says:

The war on consent by AI enthusiasts continues. Consumers don’t want genAI in games, a publisher actually listens and declines to publish games made using it, and that’s bad, apparently, despite there being plenty of other routes open to those who want to produce slop-based games.

You don’t want the market to be able to decide, you don’t want people to make an informed decision when platforms require GenAI usage be listed, you want people to have to use the environmentally disastrous plagiarism engine in everything no matter what. Ultimately, AI pushers care more about having access to, or profiting from, a new toy than they do about people.

Anonymous Coward says:

Re:

You don’t want the market to be able to decide

The market will decide.

you don’t want people to make an informed decision when platforms require GenAI usage be listed

That seems like a weird requirement. You call it “slop”, and then suggest people won’t notice unless they’re told? I’m also not told whether any developer is on Russia’s side in their war with Ukraine; maybe I’d want to know, but if so, I’m gonna have to do research. How about they tell us if they’re releasing it with known major bugs? Or if the game will require us to agree to license terms, can they be made to tell us those in advance, on the box or download page?

“This game is not fun” and “the developer was really half-assing it”, though, are things that the market is already optimized to tell us. Pretty much every day, I see some “scene” group release a movie that doesn’t even merit an IMDb entry, and maybe once a week I see something with a rating around 3/10. 90% of everything is crap, but most people never even realize how deep “the bottom of the barrel” goes.

Anonymous Coward says:

Re: Re: Re:

People have found AI assets in games they have already paid for, so yes, they are in fact noticing these things and they wouldn’t have bought them with advance warning.

The way the market would solve that is with game reviews. I have no doubt that a handful of people have noticed such things and been annoyed. But if the concerns become widespread, game reviewers will have significant incentive to warn people about it.

One should, of course, be looking at reviews. There are any number of things that might annoy people, like if levelling up is made exceptionally difficult just so a company can sell character upgrades. Or there was that infamous water level in the NES Ninja Turtles game (someone reverse-engineered the game to figure out how it could’ve been programmed better). People learned with the E.T. game, in 1982, not to just fork over money and hope for the best.

Strawb (profile) says:

Re:

Consumers don’t want genAI in games

Correction: people don’t want bad GenAI in games. ARC Raiders uses GenAI for certain voice lines based on internal training material made from hired VAs, and that game is a massive success.

You don’t want the market to be able to decide

Criticising someone for making a decision based on perceived bad reasoning is not the same as saying the market shouldn’t decide.

Moreover, Geigner literally writes that Bender “certainly can do this if he likes”.

you don’t want people to make an informed decision when platforms require GenAI usage be listed

The author never expressed that opinion.

you want people to have to use the environmentally disastrous plagiarism engine in everything no matter what.

The author never expressed that opinion, either.

This comment has been deemed insightful by the community.
n00bdragon (profile) says:

This is going to go the same place as those companies’ promises that they “won’t do crunchtime”: it’s still there, just not talked about, or given another name.

“We don’t use AI. We use Adaptive Gaming Enhancement Technologies and then we don’t mention them to anyone.”

Stephen T. Stone (profile) says:

Re:

To draw a distasteful and unfair sociopolitical parallel: It’s like how the people who endorse and promote the torturous practice of “conversion ‘therapy’ ” have changed what they call it (“sexual orientation change efforts”, or SOCE, is one example) to avoid all the negative connotations of the more well-known term.

Anonymous Coward says:

Re:

“We don’t use AI. We use Adaptive Gaming Enhancement Technologies and then we don’t mention them to anyone.”

I’ve already seen this, basically, with internet-based discussions around people who say they want a web-browser switch to disable all “A.I.” features. Then, several comments later, some are saying “of course I don’t want it to disable automatic translation; that’s actually useful, and not what I meant by A.I.”

Bloof (profile) says:

Re: Re:

If you’re talking about the Firefox debacle, the voices who wanted that particular feature left on only said so after being given a social media poll where “No AI” and “Off by default” were not options, and the vast majority of replies were angry about that. The social media person admitted on Mastodon that they were trying to shepherd people into giving approval for what they had already decided to do, so this wasn’t a case of mixed messages from the userbase; people were pretty clear about it but were ignored entirely.

Anonymous Coward says:

Re: Re: Re:

I wasn’t aware of a poll or that it had turned into a “debacle”, but I suppose we can always trust Mozilla to make something into a debacle. Like, it might’ve been Pocket where they added a non-removable toolbar button, people said they didn’t want it, and the response was basically “trust us, you will want it, and it’ll be amazing”… until it disappeared with no fanfare. Us long-time users have basically developed Stockholm syndrome (“it’s not so terrible [once you’ve added a hundred custom user.js settings to de-shittify it, by disabling almost all the features of the last decade]”). How about they fix that file-download dialog that’s been blatant shit since Firefox 1? (Ever had 10 of those things open on top of each other?)

Anyway, I’m not even sure Firefox was the subject of that conversation. Every browser vendor’s talking about adding such features, and I don’t really care as long as they’re inert until invoked by me (and I’ve intentionally consented to any data transmission that may result, not just hit a key by accident). But I don’t want them running in the background wasting my CPU and RAM or storing tracking data on my computer without telling me, and I certainly don’t want them casually talking to external services not associated with the sites I was trying to view.

It’s kind of a similar situation with games. Better computer-controlled opponents could potentially be a great use of this technology. Auto-generating levels or other content, well, I guess I’d have to see how they turn out (“frog cop” could be fun!). In that sense, I’m with Timothy. I’ve played enough bad games, or games with bad parts, that this is just another way to make bad games, or maybe not.

Bo Knows says:

Bo Knows PR nonsense

Morality ploys like this won’t last 5 years… but maybe they will generate a few extra sales for their games in the meantime. Then they will fall behind their peers in production costs and development time, and they can decide if they want to keep riding a horse or upgrade to a car.

Absolutely no one is going to be able to tell the difference by the end of 2026.

Drew Wilson (user link) says:

Re:

Absolutely no one is going to be able to tell the difference by the end of 2026.

AI was going to eliminate the need to hire artists by the end of 2023. Didn’t happen.

AI was going to eliminate a vast majority of jobs by the end of 2024. Didn’t happen.

AI was going to put almost everyone out of work by the end of 2025 and no one was going to distinguish reality from AI by that point in time. Didn’t happen.

AI predictions like that, which go down in flames, are a dime a dozen.

Thad (profile) says:

And people like Bender have exited that conversation, which is too bad. He’s clearly a good businessman and smart industry guy. We need his voice in the discussion.

Sure seems like he’s in the discussion to me. He’s discussed his position at length and in detail. He’s made it extremely clear where he stands; you just want him to stand somewhere else.

Anonymous Coward says:

If you believe him that it’s made his life more difficult, it seems incredibly disingenuous to assume his position is a dogmatic rejection of AI rather than just him trying to make his life easier, especially when he never actually says what you propose he’s thinking.

Maybe he knows his customers, and knows they’ll make things harder for him if they discover AI assets in his games? That seems to explain everything he said without accusing him of being short-sighted and dogmatic.

Stephen T. Stone (profile) says:

Re: Re: Re:

I don’t have that power and I don’t believe I deserve that power. Alls I’m doing here is pointing out that the “this is happening whether you like it or not” line that AI evangelists trot out about the inevitability of AI “taking over” sounds a hell of a lot like a rapist who tells their victim “this is going to happen so lie back and accept it” before the rape to try to make their victim more compliant. It is an attempt to manufacture consent where little-to-none exists, and I do have the right⁠—and I deserve the right⁠—to say as much. If’n you don’t like when I do that, tell me how I’m wrong. But don’t presume to say I’m trying to “dictate” how (or even if) people use the Emptiness Machine. Use it or don’t; that’s your call⁠—I can criticize you if you choose to use it, but I can’t actually stop you from using it.

Anonymous Coward says:

Your privilege is showing.

Never thought i would be so at odds with Techdirt writers, but here we are.

All AI usage supports, condones, and normalizes all AI usage.

Any decent “AI” has been used for years, properly, in research environments (science, whatever). It is designed and trained for specific types of tasks by smart people with proper training input, if any. It all works as intended in its proper silo, and the results are interrogated sanely.

All other use of AI is a growing environmental, economic, and supply chain disaster. We already lost component manufacturing to fucking blockchain and AI. Glad you don’t mind the component unavailability and increased costs.

Glad you aren’t affected by increased energy costs.

Glad you aren’t affected by new data centers wrecking your neighborhood.

Glad you don’t mind the operation, even illegal operation, of on-site pollution-vomiting generators.

Glad you don’t mind the internet flooded with even more, dumber shit, at scale.

Glad you don’t mind all the guaranteed awful usage of AI, at a scale far larger than any “good” use, some of which techdirt has reported on, and some not.

Glad you don’t mind the infinite security problems and implications, which can never be fixed.

Glad you don’t mind AI software and hardware being crammed into every consumer product with zero choice.

Must be nice.

But note that AI is the extraction class’ tech version of Trumpian authoritarian nonsense. Eventually, it will come for you.

Sorry this post is a non-starter in your book.

But best wishes.
And not that it matters, but I still trust and respect everyone at TD on nearly all other subject matter. On this, I just can’t wrap my head around how y’all can’t see it for what it is.

This comment has been deemed insightful by the community.
Eldakka (profile) says:

While I mostly agree with the author that it should be a nuanced take, I think the author is themselves ignoring a nuance:

And I just don’t understand it as a long term contractual policy. If AI largely sucks right now in the gaming industry, and I agree there’s a lot of bad out there, that doesn’t mean it will in the future.

The nuance the author is missing is that contracts can be changed. So if, in the future, AI ever becomes useful and less of the QC problem the CEO seems concerned about, then future contracts could remove or adjust the language regarding AI use.

The fact that this company doesn’t want AI in its products right now, because of the QC issues AI’s current state causes, doesn’t mean that its stance, and its contracts, won’t change in the future.

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Repeating history

AI will be used in gaming. To borrow a phrase, it’s a very frustrating thing to have to even state. It’s tough to get more obvious than that. The question and the conversation, as I keep saying, is about how it will be used, not if it will be used.

That argument reminds me of something but I can’t quite place it… ah yes:

Micro-transactions will be used in gaming. To borrow a phrase, it’s a very frustrating thing to have to even state. It’s tough to get more obvious than that. The question and the conversation, as I keep saying, is about how micro-transactions will be used, not if micro-transactions will be used.

Just because execs might want to force AI into everything possible doesn’t mean people have to be happy about it or feel obligated to just shrug it off ‘because it’s going to happen anyway whether people like it or not’.

If you have a service or product that people actually want then you don’t need to try to shove it down their throats because people will be happy to see and use it, so if those pushing AI want to get a positive response rather than the negative sort they have been then it’s on them to make a case that’s not just ‘It’s being added/used whether you like it or not so deal with it.’

Scott Larson (user link) says:

RE: reciting a creed rather than fully thinking this through.

…I’m left with the impression that I’m listening to someone devoid of nuance reciting a creed rather than fully thinking this through.

AI will be used in gaming. To borrow a phrase, it’s a very frustrating thing to have to even state. It’s tough to get more obvious than that. The question and the conversation, as I keep saying, is about how it will be used, not if it will be used.

And people like Bender have exited that conversation, which is too bad. He’s clearly a good businessman and smart industry guy. We need his voice in the discussion.

I disagree. I do think it’s great that AI is used in many situations where it improves things, especially in the medical industry. But the way it’s being developed (for profit with no transparency, with data center power loads that raise consumer electricity costs, and in many more areas I don’t care to list right now) creates a perfect storm of harms that I will not be a part of. The technology is being developed by organizations that are furthering authoritarian agendas. If people are using locally run AI tech that doesn’t encourage the cloud-based models the authoritarian tech industry is a part of, then yeah, I’m more open to that.

I can see his point: better to not use it at all until things change, IMHO.

Anonymous Coward says:

Tim Bender isn't even agreeing with rhetorical question #2,

and I’m not sure how you’re reading his statements to pretend that he does. With the current state of generative AI, his position is entirely reasonable.
Currently, gen-AI comes with risks and ethical concerns that far outweigh its real-world benefits in many situations. You can’t expect people to behave as if an imagined future, in which this may no longer be the case, were already present.

M. Rosenbloom (profile) says:

"AI Is Inevitable" is false framing

Commenting because I’ve really hit my limit on the AI boosting that’s happening on Techdirt. I understand that I might be in the minority here, but I’ve gone from mild discomfort and dislike of genAI to absolute refusal to use it or use products derived from it. It’s too long a story to get into here, but suffice it to say that if genAI is going to be in all games from now on, then I simply won’t be into any new games.

But here’s the thing, I don’t believe that genAI will be in all games – in fact I believe that social refusal and economic realities will ensure that it won’t. The use of a specific technology is not inevitable; there are many examples of technologies that failed to become entrenched even if they were better than alternatives. And I don’t believe that genAI technologies are in fact better than alternatives, for workers, for the environment, for anyone except a few tech CEOs and bosses looking to discipline labor.

And even if the tech sticks around for many games, there is a large movement of people and devs seeking games created without these things! And one of the people who will seek games made without these things is me.

I’ve really valued the commentary and reporting I’ve found on this website over the years even and especially on areas where I’ve disagreed with the ideas, but I think here you’re making an especially bad set of decisions.
