If You’re Going To Defend AI And Whine About Its Critics, You Should Probably Be Honest About Its Actual Harms
from the first-do-no-harm dept
I think this recent post by AI industry CEO Matt Shumer is worth a read. In it, he basically explains how quickly LLMs (large language models) are evolving to supplant many developers and programmers, and how that disruption is coming to other industries quickly. He also warns critics of AI to adjust their priors and realize that the AI tools they mocked just six months ago aren’t the ones in use today:
“I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.”
While the post is interesting (with the understanding this is somebody making and selling automation software), you might notice something: absolutely nowhere in the blog post does he meaningfully acknowledge the widespread problems with existing AI use. Either because his financial self-interest doesn’t allow for honest acknowledgment of them, or because he simply doesn’t find those aspects all that interesting.
Maybe both.
There’s no mention of how these tools are causing corporations to blow past their already tepid climate goals; no mention of how the affluent, surveillance-obsessed execs dictating its trajectory enthusiastically cozied up to fascists; no mention of how Elon Musk and Mark Zuckerberg’s data centers are funneling pollution directly into black neighborhoods; zero mention of the technofascist plan to leverage AI to decimate unions; no mention of the weird and precarious financial shell games powering the sector.
This New York Times article from a couple weeks ago is probably a better example of this art form. It’s an article, ostensibly about why the public has been so hostile to AI, that takes until the THIRTY-EIGHTH paragraph to actually try and explain some of the reasons. And even then it’s kind of a throwaway paragraph that doesn’t wrestle seriously with any of the criticism:
“The tech executives who are betting their companies’ futures on the triumph of A.I. have many resources to make sure it happens. They can spend even more money to build even more data centers. On the other hand, data centers around the country are increasingly a target of opposition for local residents who dislike the noise, the disruption, the secrecy and the lack of community benefits like jobs.”
Distilling the animosity against AI as just some random grumbling about “noise” and ambiguous “disruption” is a very weird and conscious choice, and I’d argue that this minimization, a reflection of the establishment press’ need to appease and protect access to corporate power, is itself a major contributor to growing hostility toward AI.
The fact that much of the public animosity to AI may be linked to the fact that its salesmen have overtly and enthusiastically enabled fascism just isn’t mentioned. The Times doesn’t think that’s relevant.
The fact that many U.S. billionaires see AI largely as a way to lazily cut corners and obliterate unions (see: its rushed adoption in journalism outlets like the LA Times or Politico) isn’t mentioned either. That the goal for most AI executives is to power this latest technological revolution completely free of any corporate oversight whatsoever? Again, somehow not deemed relevant.
Stories like this cling to a narrative that vaguely implies people are generally angry about AI due to some ambiguous flaw in their “perception,” likely caused by the way AI is being portrayed to the public on the tee vee:
“The A.I. companies seem increasingly alert to a perception problem. This year’s Super Bowl featured A.I.-themed ads that were defensive or just odd. Amazon’s ad showed A.I. proposing ways to kill Chris Hemsworth. The twist at the end: A.I. disarms him with a promised massage.”
And while there certainly are people who are intractably hostile to all aspects of automation and simply refuse to engage with it on any level (including understanding it), a huge swath of the animosity is being driven by historic and justified anger at the extraction class.
That anger and energy is good, and just, and will likely serve us well in the months and years to come. I’d argue it deserves a wide berth; including by tech industry insiders and AI advocates who don’t want to live under permanent kakistocracy staffed by weird zealots who operate at a third-grade reading level, openly enthusiastic about their grand visions for a permanent mass-surveillance murder autocracy.
Stories like this Times piece will often fixate on the AI “doomer narrative” (SkyNet will kill us all), but downplay that this specific strain of doomerism (very often pushed by wealthy industry insiders) often exists both to misrepresent what LLMs are capable of and to direct attention away from more reality-based criticism the industry doesn’t really want to talk about.
That’s not to say people can’t or shouldn’t be excited by evolutions in automation. But it is to say if you’re an AI advocate and you’re not also talking seriously about the very valid reasons so many people are pissed off, you’re not really talking seriously about the subject at all. You’re in marketing.
Filed Under: ai, automation, chatgpt, climate, data centers, denialism, fascism, grok, journalism, llm, pollution


Comments on “If You’re Going To Defend AI And Whine About Its Critics, You Should Probably Be Honest About Its Actual Harms”
AI will change the world
And running unchecked AI systems will bankrupt companies.
Glad to see more pushback, but it’s no surprise that for-profit news organizations will intentionally ignore or downplay the issues with forced mass-adoption of “AI” into every conceivable aspect of our lives. They’re bought and paid for by the same groups pushing the trend in the first place.
There’s not much of a “free press” when it’s so easy to buy.
I am inherently distrustful of anyone whose answer to an argument is “my position has changed since the argument started”.
Also, let’s be really clear-eyed about who Matt Shumer is. “AI Industry CEO” is doing a lot of heavy lifting here. His company, HyperWriteAI, has dozens of employees (38), and his “product” appears to be repackaging a single old GLM version into hundreds of different “tools” and then selling them to anyone doofy enough to not realize ChatGPT does all that better for free. A quick Google search of their products reveals that most users seem to think they are outdated trash and that this company is more of a fly-by-night investment-bro trap than an actual service which hopes to make money from satisfied users.
Putting “CEO” at the end of your title doesn’t make your opinions worth reading. It means you spent a couple hundred bucks to form an LLC. This guy is a salesman. His arguments are not worth the time it takes to rebut them. Just ignore him and let him scam some more venture capital fools so that the house of cards falls down faster.
Re:
TANSTAAFAI.
AI companies spend many times more in building and running their products than they get back in revenue. At some point either the price will go up (by a lot), or the company goes bankrupt. In both cases, the users will have to face vendor lock-in.
Re:
Which is why it’s important to say “AI Industry CEO”. 20 years ago, I’m sure we had similar opinions from outsourcing industry CEOs. Like “I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the [Indian programmers] what I want, walk away from my computer for four [weeks], and come back to find the work done.”
It was puffery back then, as anyone who worked with the results learned. Not because the code was written by entities who were bad at it, but because—as I stated recently in another thread—”describe what I want built” is the hard part, and what programming actually is (with coding being the translation of that to a computer language). I suspect the same is true of what this person is hyping. A bad programmer ends up with a thing that technically matches everything they asked for, and finds it isn’t what they really wanted.
I’ve worked under good and bad project managers. Either way, the young me was left wondering “what does this person do?” But it turns out that a good manager quickly realizes when things are just starting to go astray, and communicates as necessary to very subtly move things back in the direction of what should’ve been asked for; the project just seems to go smoothly, and we’re done before we know it. A bad manager hopes to walk away and come back to find the thing magically finished—which doesn’t happen; our goals change drastically at irregular intervals, and we get grumpy about being given improper direction and having to throw away work.
This comment has been flagged by the community.
This sounds familiar. Hmm let me see…
The world: Wildfires are out of control and burning everything in sight!
Mike Masnick: Well actually, fire can be useful too. For example, I use it to light scented candles for my nightly bubble bath.
Re:
Hey c’mon, that was just a gentle jab.
The whole AI industry is doing all it can, at a much faster pace than any other industry in human history, to earn our distrust.
A lot of lies about accuracy, a lot of empty promises about automating human work, a lot of hallucinations about AGI, a lot of nonsense about AI being smarter than all the Nobel Prize winners who ever lived, a lot of immoral claims about being able to cure cancer, a lot of bullshit from these CEOs…
And above all, hundreds of billions invested in data centers (including GPUs, electricity and staff) turned into massive debt, because there is actually very little revenue, and because OpenAI and Anthropic hide the costs to keep offering $20/month subscriptions.
And I’m not even talking about how this industry is accelerating global warming when it claims to be able to fix it.
So when I hear someone tell me how much AI will replace humans, I always ask: “So where is the money you’re making from it?” And the answer is always: “Well, so far, it’s costing me hundreds of bucks a month, but soon…”
After all, OpenAI may be worth about $1T, and Anthropic about $500B. They must be making an indecent amount of cash every day, right?
Re:
If one defines a “decent” amount of cash as “enough money to pay the rent and keep the lights on”, then yes, they would definitionally be making an indecent amount.
Re: Re:
Actually, if they were making enough money to pay rent and electricity that would still be an indecent amount. Instead they make just enough to survive until the next cash injection, which will pay the rent and power bills until… repeat cycle.
This. This is my primary problem with it.
While I do have concerns about how expensive (energy and otherwise) it is, and about how societally disruptive it may become, my primary issue is simply that I cannot trust these technologies because I cannot trust the sociopathic ghouls making them, nor trust their reasons for forcing them onto us.
I wound up having to get a new phone and there were four different AI apps in it, as well as a dedicated physical button to summon them.
Why? For what, for whom? Not for me; none of this serves me. If I wanted an AI, I know where to look. But I simply don’t need these things. Half the shit or more that they can do I don’t need done, and the rest I’d just rather do for myself because it’s things I like doing.
And the increasingly obnoxious evangelizers blame me for not happily accepting that the unsatisfiable vampires bleeding the world dry are filling ever more of our lives with their scammy, pre-enshittified garbage that I never asked for and don’t need.
Use AI if you want; it serves some people’s needs. I can’t blame any random individual for using AI any more than I can blame them for still driving a gas vehicle.
Then just do me the courtesy of not giving me shit for not liking, wanting, or needing these things.
There’s too much money being splashed around for honesty to be a factor, and platforming CEOs and presenting the Roko’s Basilisk nutters as the opposing viewpoint, despite their being pro AI by and large, helps keep the ad money flowing, or to get them a pat on the head from Bezos.
Once the VC money dries up, look forward to the press pretending they always had grave concerns, which went largely ignored, or were stuffed at the tail end of puff pieces for the latest startup.
Re:
Yeah, it’s VERY telling that most of the headlines with anything scary or negative about “AI” are from CEOs and techbros hyping up how dangerous it’ll be because of how powerful/capable/cool they want you to think it is. We just can’t expect anyone with skin in the game to have an honest appraisal at this point.
Re:
Assuming that “the press” as we know it, that consistently engaged in this semi-covert boosterism, survives its own forays into AI, and is in any meaningful sense still around to pretend it was actually warning us all.
Sounds like we dont need CEOs anymore.
Re:
I feel like all the C-suite executive roles are the only jobs computers should take.
Re: Re: 'Sure it regularly hallucinates, but it's still better at business than them.'
AI would have a much better public reputation if the jobs it were replacing were the ones it could and should replace, that of the techbros and execs who keep pushing it like the second coming of digital Jesus.
“I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed…”
And this is exactly what a nefarious person wants you to do. Blindly believe that everything the computer says is accurate and trustworthy. We already are seeing people trying to subvert AI to their will.
It’s going to be a fun time for a while, while people relearn you have to still think for yourself and verify things. Especially when a large code base is eventually generated by AI, verified by AI for errors/malware, approved by AI, and deployed by AI.
A significant amount of software is pretty common stuff — database interactions, UI interactions, business rules and logic — so it really isn’t hard to see how AI could generate code like that, even quite “good” code. That’s the nugget of truth that underlies a lot of these … lies.
Can this dude get code with “no corrections needed”?? Probably?? It sounds like his code is pretty standard stuff (but if so, why would anyone need to buy his stuff?).
Does that mean AI can do all things? No. Can AI even do the easy things with “no corrections needed”? Depends.
(I get that nuance doesn’t sell and this dude is trying to sell, but FFS can we just get over the AI-hype already)
Re:
Because, yet again, copyright law is holding the world back. Someone has already written code like what we need, but is legally prohibiting people from using it without payment.
It’s doubtful any phone manufacturer will ever have to buy an operating system kernel or embeddable SQL database engine again. They just download free stuff off the internet. And in my view as a programmer, that’s good. We shouldn’t all be re-inventing the wheel because of legal restrictions, and shouldn’t be doing stuff computers can do just as well. We should be doing the stuff nobody/nothing else can, which is what makes it interesting and challenging.
I could say the same about television script-writing for example. A computer can randomly pick a dictionary word to name a show, then come up with a premise—let’s say “Gusty: Troubled detective Gusty Campbell solves homicides in rural Wyoming”. And then just crank out plots and motives, which are usually pretty predictable anyway. But we already have about a hundred shows like that, and I’d prefer something truly original.
I’m all for criticizing AI, but some of the articles linked here are being presented kind of misleadingly. I only started checking because the first article link I clicked on didn’t quite match the claim made in this post, so I checked a few more out of curiosity.
Like, the Cornell Chronicle link is reporting on an article in Nature Sustainability, but that research is on predictions (both high- and low-emission outcomes) for 2030, while the sentence it’s linked in makes it out to be about AI emissions today. There’s a similar Inside Climate News article linked later that is getting its climate claims from a WoodMac report, but if financial interest is enough to dismiss the pro-AI side here, it’s worth pointing out that WoodMac is a large consulting firm that works with chemical, energy, and mining companies, and their climate report accordingly hypes up the need for more investment in these industries.
Also, the claim that the NYT article linked doesn’t address specific reasons to oppose AI before paragraph 38 just doesn’t seem correct. Prior to that point, the article covers polling that shows public distrust and opposition to AI, copyright lawsuits against AI, the possibility of AI being an economic bubble, unwanted AI integration in browsers and apps, and dystopian comments from tech CEOs about the future of humanity. Paragraph 38 is just the first mention of opposition to data centers, and is followed by a section on concerns about the social and political influence of AI. As much as I dislike NYT tech coverage and think this is an awful article that gives way too much airtime to the opinions of tech CEOs, the way it is being presented here only works if you ignore everything outside of the two quoted paragraphs.
Defenses of AI always read as AI-generated because of how little they ever engage with reality.
His argument that AI tools have gotten better feels hollow when he’s an AI evangelist who is actively ignoring that LLMs still regularly hallucinate, and even coding LLMs are basically useless for anything more complex than boilerplate code.
The AI industry has a problem in that people just don’t want to pay for their products, either because what’s free is enough or they don’t see the value in the paid subscriptions. No wonder the evangelists are desperately trying to hype this tech up as much as possible, but it’s a fool’s errand, considering the ridiculous amount of investment going into the industry but with barely any actual revenue made from it.
I just want to see the bubble burst already.
Re:
Hell, OpenAI published a result saying that their hallucination rate is up to like, 40%, and is climbing as they make their model more sophisticated.
Every advance that has automated a task has been paid for through the loss of a worker’s income. Yes, yes, yes, I know all about statistics: new kinds of jobs replace the old ones. That’s great for the kids, but it does me no good: I am not a statistic, but a living, breathing person who no longer has a job. The reality is that the costs of the transition from one job to another are paid by the displaced workers, their families, and their communities, not by the people implementing the automated systems.
Every automation transfers wealth away from the workers to the employers.
I am not against automation, per se. I am against taking away livelihoods without fully compensating that loss. Whatever savings the machine enables should not include labour costs. The wage should still be paid. First directly to the displaced worker, then to a kind of universal basic income fund.
Climate goals are unachievable because India and China won’t actually reduce emissions until they catch up to Europe and North America economically. The emissions just shift from high-income economies to lower-income economies as production becomes cheaper in less advanced economies.
Re:
Hasn’t China already been working on reducing its emissions over the past few years, especially with the uptick in electric vehicle sales? Also, the responsibility of reducing global emissions falls on Europe and North America (and the United States specifically) as well, so let’s not act like China and India are the only countries with that responsibility.
Re: Re:
China claims to be working to curb growth in emissions. Not actual cuts.
If there is a technological shift that makes fossil fuel emissions less profitable it could solve the problem, but in the mean time capitalism will just shift where the emissions come from.
Re: Re:
Do you have any real numbers that actually prove that China and India are reducing emissions?
Mostly, worldwide, it seems to be theatre. They shout from the rooftops that they plan to reduce X by 10% in the future, and X is 0.1% of emissions; in the meantime they build more, but hey, real soon now each will have 10% lower emissions.
In practical terms, China has been flat since 2023: perhaps 12.3 gigatons a year in 2023 and 12.1 gigatons in 2025.
On AI: yes, AI is much better than six months ago in terms of programming. You do have to be really careful about how you define what you want it to do; you can’t just leave something out because you plan to figure it out as you go.
Sometimes it makes dumb mistakes, and it takes a few iterations of “no, don’t do what I told you not to do” to get it right.
Am I worried? No. I am 62; I am not going to be doing this for that many more decades.
As per the discussion on “isn’t most code pretty much the same?”: yep. In the last 40 years I could count the innovative things I have done that nobody else has done previously on two hands, and more than half only beat the market by a year or two.
Please tell me which AI agent he is using where he can tell the AI what he wants, and a few hours later, come back to a perfect program? I have spent days trying to get AI agents to write some VBA code for me, and it is NEVER usable without days of fussing with the AI to get it running properly. For more complex code, it usually forgets what it is doing halfway through. It will switch to a different programming language within a module!!!
Say without saying?
Say “Essentially, I’m just a trained monkey,” without saying “Essentially, I’m just a trained monkey”?
I think you’re either missing a transition or using “wide berth” incorrectly. Giving something a wide berth means staying away from it.
I hate to break it to you, but claims that AI has improved have been around for a year or two at least. Yet, the research continues to show either marginal improvement or a complete lack of improvement in the technology itself.
Actual research into AI as a software developer shows it’s horrendously bad… like, only managing to complete software development tasks 15% of the time bad https://www.freezenet.ca/ai-cant-even-replace-software-engineers-so-why-the-hype/
In fact, more recently, AI was quietly blamed for the Amazon outage back in December: https://www.freezenet.ca/amazon-employees-blames-ai-for-aws-outage-in-december/
Saying AI has rapidly improved is just one of the many tired lines I hear from AI BS artists frequently.
Wanna bet?
I would be a lot less skeptical about “AI” if it weren’t for the fact that, every time I hear one of these CEOs touting its “benefits” (said benefits going mostly to CEOs) I get distracted totaling up the odds of seeing them on trial for fraud within a decade…
Fuel shortage means no wasting fuel on data centres
Turn them all off and watch Nvidia stock plummet!
There are plenty of people who well understand it and are still intractably hostile. Because they understand it.
No “gains” from “automation” can fix the soaring energy costs dumped on everyone but the “AI” companies, the environmental damage of illegal generators and coal-fired power plants forced back into operation, the destroyed hardware market that is now insanely costly for normal people who want normal compute in their homes, and everything else mentioned in the article, and more.
People who understand “AI” know it is just Enshittification As A Service.
I've actually studied AI
and ML and NLP and the rest, in graduate school – although in fairness I spent more time on statistical pattern recognition. But I did have the good fortune to study under some of the people who wrote the textbooks, I paid attention, and I’m still paying attention.
There are valid use cases for this technology; for example, screening medical images for anomalies that should be closely examined by a specialist, or analyzing a centuries-old manuscript to determine whether or not the putative author is the actual author. But these require careful, judicious application by people with domain-specific expertise; the kind of slop being thrown around (e.g. “code an application that does X”) is completely inadequate.
Also worth noting is that these use cases are limited to relatively small domains: they have to be in order to fine-tune the models to do one thing extremely well. This is not at all what these AI companies are doing: they want their models to be everything to everyone because $$, and in doing so they’re building and hyping erratic mediocrity…that will always be erratic mediocrity because that’s how it works.
I wish this wasn’t happening, because the empty promises of these CEOs are obscuring the real promise of this technology. But I’m not going to get my wish; the hype cycle around virtual reality (a failure) and blockchain (a failure) is being repeated and when the bubble bursts — as it will — the damage will be incredible.
Hopefully Masnick actually read this post.
If AI does your job for you then it sounds like you shouldn't have it
“I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.”
Assuming for the sake of the argument/humor that everything he just said is correct it certainly sounds like he just made a compelling case for why AI has made him redundant or at the very least vastly overpaid, so well done on arguing for the loss of your job I guess?
So now that you've read Matt Shumer's piece
Take a look at Ed Zitron’s annotated rebuttal to it. Zitron has been debunking AI bullshit for years now, and he goes through Shumer’s piece line by line and pretty much tells us why it’s all nonsense. Basically: don’t believe an AI evangelist, whose job it is to sell you a monthly subscription to AI, when they say AI is going to revolutionize the world.
But don’t believe me, read it for yourself:
https://www.dropbox.com/scl/fi/qw6k5c3m575cq21p7jjac/Something-Big-Is-Coming-Annotated.pdf?e=3&noscript=1&rlkey=qlr0mgnlpjifo5xkon2crhrhw&dl=0
If You’re Going To Defend AI And Whine About Its Critics, You Should Probably Be Honest About Its Actual Harms, Mike
Re:
He’s deleting comments calling him out on this, watch out.
Re: Re:
I’ll take things that have never happened for $1,000.
Re: Re:
Speaking as a long-time Techdirt commenter: I have never seen any comment get deleted because its content disagreed with a Techdirt author’s writing. The only comments I ever see get deleted are spam messages, and even that’s a rarity. “Every accusation, a confession” is a saying for a reason, though…
Re: Re: Re:
Saw two different posts calling out Mike’s hypocrisy last night that were both gone in the morning. I suppose it is jumping the gun to say he’s the one who removed them, but someone did.
Re: Re: Re:2
No comments were removed. None. Don’t lie.
This comment has been flagged by the community.
Re: Re: Re:3
“I plead ‘Nuh-uh!’ your honour.”
Re: Re: Re:4
Don’t know what to tell you. Every comment that was made on this post remains there today and not a single one was deleted. You can go and look and there are plenty that criticize me. I don’t give a shit.
What I find funny is that you’re so desperate to believe things that aren’t true that you just keep lying.
What a weird, pathetic existence.
Grow the fuck up dipshit.
This comment has been flagged by the community.
Re: Re: Re:5
You’re not mad. Got it, I won’t put in the newspaper that you were mad.
Re: Re: Re:2
I have had issues with the topics and some of the sources used for guest columns and have pointed it out, usually right at the start of the comments section, and I have never had any cause to worry about those comments being deleted. Whatever you may think of Mike, he’s not thin-skinned or pro-censorship, and he certainly isn’t afraid of people who say he’s wrong.
This isn’t reddit, there’s no overzealous agenda moderation here.
Conversely
If you are going to criticize AI and whine about its proponents, you should also be honest about the positives AI technology brings, because it is no more all negative than it is all positive.
There are extremists on both sides of the AI debate who, frankly, need to grow up and learn some nuance.
Re:
By all means, tell us about the positives of AI technology, especially generative AI. Oh, and make sure they’re actual positives instead of negatives framed as positives by the capitalist vultures who want all the money at the cost of permanently pushing the lower class out of entire career paths and into poverty (if not bankruptcy and homelessness). Remember: Machines can’t buy goods and services!
Re:
Those positives are, for regurgitation engines specifically, that people think they’re getting more done faster.
Although, on study, it tends to turn out that this perception is false.
Re:
If genAI had any genuine benefits other than making money for the worst people in society and making the most obnoxious coders even more so, we would be hearing about them constantly. You would be shouting them from the rooftops; Google, MS and OpenAI would be running billion-dollar ad campaigns to make people think the environmental and societal costs are worth it, and it would wash away all the ethical issues… Instead we have the worst people in tech bragging about how they’re submitting code to open source projects without understanding how it works, we have Elon running gas turbines in poor neighbourhoods so he can create Nazipedia and child porn, Google fighting to avoid having to disclose how much water and power they use, and Nvidia, OpenAI and Oracle engaged in a game of theoretical money-swapping so scammy it would make Tether blush.
Kind of surreal to see this on Techdirt, of all places. This nails exactly why the pro-AI articles get so much pushback, and it’d go a long way if other writers incorporated it into their writing beyond just a throwaway sentence.
Really appreciate your writing on this, Karl.
This comment has been flagged by the community.
AI is a helpful tool