The Policy Risk Of Closing Off New Paths To Value Too Early
from the historical-analogies dept
Artificial intelligence promises to change not just how Americans work, but how societies decide which kinds of work are worthwhile in the first place. When technological change outpaces social judgment, a major capacity of a sophisticated society comes under pressure: the ability to sustain forms of work whose value is not obvious in advance and cannot be justified by necessity alone.
As AI systems diffuse rapidly across the economy, questions about how societies legitimate such work, and how these activities can serve as a supplement to market-based job creation, have taken on a policy relevance that deserves serious attention.
From Prayer to Platforms
That capacity for legitimating work has historically depended in part on how societies deploy economic surplus: the share of resources that can be devoted to activities not strictly required for material survival. In late medieval England, for example, many in the orbit of the Church made at least part of their living performing spiritual labor such as saying prayers for the dead and requesting intercessions for patrons. In a society where salvation was a widely shared concern, such activities were broadly accepted as legitimate ways to make a living.
William Langland was one such prayer-sayer. He is known to history only because, unlike nearly all others who did similar work, he left behind a long allegorical religious poem, Piers Plowman, which he composed and repeatedly revised alongside the devotional labor that sustained him. It emerged from the same moral and institutional world in which paid prayer could legitimately absorb time, effort, and resources.
In 21st-century America, Jenny Nicholson earns a sizeable income sitting alone in front of a camera, producing long-form video essays on theme parks, films, and internet subcultures. Yet her audience supports it willingly, and few doubt that it creates value of a kind. Where Langland’s livelihood depended on shared theological and moral authority emanating from a Church that was the dominant institution of its day, Nicholson’s depends on a different but equally real form of judgment expressed by individual market participants. And she is just one example of a broader class of creators—streamers, influencers, and professional gamers—whose work would have been unintelligible as a profession until recently.
What links Langland and Nicholson is not the substance of their work or any claim of moral equivalence, but the shared social judgment that certain activities are legitimate uses of economic surplus. Such judgments do more than reflect cultural taste. Historically, they have also shaped how societies adjust to technological change, by determining which forms of work can plausibly claim support when productivity rises faster than what is considered a “necessity” by society.
How Change Gets Absorbed
Technological change has long been understood to generate economic adjustment through familiar mechanisms: by creating new tasks within firms, expanding demand for improved goods and services, and recombining labor in complementary ways. Often, these mechanisms alone can explain how economies create new jobs when technology renders others obsolete. Their operation is well documented, and policies that reduce frictions in these processes—encouraging retraining or easing the entry of innovative firms—remain important in any period of change.
That said, there is no general law guaranteeing that new technologies will create more jobs than they destroy through these mechanisms alone. Alongside labor-market adjustment, societies have also adapted by legitimating new forms of value—activities like those undertaken by Langland and Nicholson—that came to be supported as worthwhile uses of the surplus generated by rising productivity.
This process has typically been examined not as a mechanism of economic adjustment, but through a critical or moralizing lens. From Thorstein Veblen’s account of conspicuous consumption, which treats surplus-supported activity primarily as a vehicle for status competition, to Max Weber’s analysis of how moral and religious worldviews legitimate economic behavior, scholars have often emphasized the symbolic and ideological dimensions of non-essential work. Herbert Marcuse pushed this line of thinking further, arguing that capitalist societies manufacture “false needs” to absorb surplus and assure the continuation of power imbalances. These perspectives offer real insight: uses of surplus are not morally neutral, and new forms of value can be entangled with power, hierarchy, and exclusion.
What they often exclude, however, is the way that legitimating new forms of value can also allow societies to absorb technological change without requiring that increases in productivity be translated immediately into conventional employment or consumption. New and expanded ways of using surplus are, in this sense, a critical economic safety valve during periods of rapid change.
Skilled Labor Has Been Here Before
Fears that artificial intelligence is uniquely threatening simply because it reaches into professional or cognitive domains rest on a mistaken historical premise. Episodes of large-scale technological displacement have rarely spared skilled or high-paid forms of labor; often, such work has been among the first affected. The mechanization of craft production in the nineteenth century displaced skilled cobblers, coopers, and blacksmiths, replacing independent artisans with factory systems that required fewer skills, paid lower wages, and offered less autonomy even as new skilled jobs arose elsewhere. These changes were disruptive, but they were absorbed largely through falling prices, rising consumption, and new patterns of employment. They did not require societies to reconsider what kinds of activity were worthy uses of surplus: the same things were still produced, just at scale.
Other episodes are more revealing for present purposes. Sometimes, social change has unsettled not just particular occupations but entire regimes through which uses of surplus become legitimate. In medieval Europe, where the Church was one of the largest economic institutions just about everywhere, clerical and quasi-clerical roles like Langland’s offered recognized paths to education, security, status, and even wealth. When the shared beliefs sustaining those roles fractured, the Church’s economic role contracted sharply—not because productivity gains ceased but because its claim on so large a share of surplus lost legitimacy.
To date, artificial intelligence has not produced large-scale job displacement, and the limited disruptions that have occurred have largely been absorbed through familiar adjustment mechanisms. But if AI systems begin to substitute for work whose value is justified less by necessity than by judgment or cultural recognition, the more relevant historical analogue may be less the mechanization of craft than the narrowing or collapse of earlier surplus regimes. The central question such technologies raise is not whether skilled labor can be displaced or whether large-scale displacement is possible—both have occurred repeatedly in the historical record—but how quickly societies can renegotiate which activities they are prepared to treat as legitimate uses of surplus when change arrives at unusual speed.
Time Compression and Its Stakes
In this respect, artificial intelligence does appear unusual. Generative AI tools such as ChatGPT have diffused through society far faster than most earlier general-purpose technologies. ChatGPT was widely reported to have reached roughly 100 million users within two months of its public release, and similar tools have shown comparably rapid uptake.
That compression matters. Much surplus has historically flowed through familiar institutions—universities, churches, museums, and other cultural bodies—that legitimate activities whose value lies in learning, spiritual rewards, or meaning rather than immediate output. Yet such institutions are not fixed. Periods of rapid technological change often place them under strain, as is evident today, exposing disagreements about purpose and authority. Under these conditions, experimentation with new uses of surplus becomes more important, not less. Most proposed new forms of value fail, and attempts to predict which will succeed have a poor historical record—from the South Sea Bubble to more recent efforts to anoint digital assets like NFTs as durable sources of wealth. Experimentation is not a guarantee of success; it is a hedge. Not all claims on surplus are benign, and waste is not harmless. But when technological change moves faster than institutional consensus, the greater danger often lies not in tolerating too many experiments, but in foreclosing them too quickly.
Artificial intelligence does not require discarding all existing theories of change. What sets modern times apart is the speed with which new capabilities become widespread, shortening the interval in which judgments about legitimate uses of surplus are formed. In this context, surplus that once supported meaningful, if unconventional, work may instead be captured by grifters, legally barred from legitimacy (by, say, outlawing a new art form), or funneled into bubbles. The risk is not waste alone, but the erosion of the cultural and institutional buffers that make adaptation possible.
The challenge for policymakers is not to pre-ordain which new forms of value deserve support but to protect the space in which judgment can evolve. They need to recognize that they simply cannot make the world entirely safe, legible, and predictable: whether they fear technology overall or simply seek to shape it in the “right” way, they will not be able to predict the future. That means tolerating ambiguity and accepting that many experiments will fail, sometimes with negative consequences. In this context, broader social barriers that prevent innovation in any field, such as professional licensing, limits on free expression, overly zealous IP laws, and regulatory bars on the entry of small firms, deserve a great deal of scrutiny. Even if the particular barriers in question have nothing to do with AI itself, they may retard the development of the surplus sinks necessary for economic adjustment. In a period of compressed adjustment, the capacity to let surplus breathe and value be contested may well determine whether economies bend or break.
Eli Lehrer is the President of the R Street Institute.
Filed Under: ai, business models, jobs, labor


Comments on “The Policy Risk Of Closing Off New Paths To Value Too Early”
How do we flag an article for being trolling/spam?
History and reality have shown that regardless, they will fuck over the poor to line their own pockets.
Oh, an essay from the head of a think tank; let’s look at what they stand for…
https://www.influencewatch.org/non-profit/r-street-institute/
They seem like lovely people who should be listened to and not another front group for the same Heartland Institute/Heritage Foundation/Federalist Society ghouls, screaming ‘DEREGULATE!’ while driving the world to the brink.
Re:
I get the skepticism, but having worked with them on various projects going back many years, I think it’s wrong to characterize them the way you have. Yes, they effectively splintered out of Heartland, but specifically because they very much disagreed with Heartland on a bunch of issues, and even the link you presented shows the many ways in which they stray pretty far from the standard Federalist Society/Heritage positions on things many of us find worthwhile, like climate and mail-in voting.
Also, it’s cool to call stuff out like this, but I do wish people would engage with the actual material in the article? Damning people solely for their association (while misrepresenting that association) is pretty weak sauce.
Re: Re:
No need. The stench of certain associations never comes off, particularly if you remain in the ‘center-right’ space.
This is a debate that was already won against the right wing very early on and your vouching for him is a further attempt to steamroll and reverse that verdict.
The issue isn’t his argument on its face; as is already being noted below, the issue is “What is he omitting or not telling us?”
His associations scream he has an agenda that is going to land somewhere different than his advocacy for continued experimentation with a highly destructive technology that, so long as its application is over-broad or monetized, has no redeeming value.
Re: Re:
Speaking personally, I only call it out when it’s not obvious to a layperson, and they won’t take the time to google what R street is (and yeah… they should, the info is right there. But they won’t. It’s human behavior). If it were more of a household name like Rand Paul or whoever I wouldn’t bother.
It’s worth engaging with the article as it is, but it’s also useful context to know if someone is talking their book from the past decade pre-AI, or if they’re consistent/trustworthy. Especially with professional think tanks, where the writing tends to be slick.
The compromise I’ve kind of settled on is a quick comment on association, and then a separate comment on the actual merits later, after actually digesting it. I think it kind of works out?
Re: Re: Sick of the defense
You have been shilling for AI deregulators since that Cindy Cockburn thing or whatever. It was a series of 8 posts about how “AI is actually really good guise.”
Explain to me how AI even keeps going at the burn rate of money being put into it. Is it wise to shill and cape for a technology that literally benefits no one except scammers and talentless people (and electricity companies I guess)? In addition, explain to me, Mike Masnick (I usually like your articles), why warming the earth at this shocking rate is worth it for a bunch of tech CEOs to goose money out of other Business Idiots? I am poor. As shit. And an artist. AI has only made it harder to do what I love, because the Business Idiots will not see the value of doing something once and having it work. They would rather burn billions in the hope that someday they will never have to pay humans at all. This is gross. Also your answer is basically “well, they are from heartland but tHiS Is dIfFeRenT.”
More disappointing chaff from techdirt.
Please everyone go read Ed Zitron’s wheresyoured.at
And you should know who he is, Mike. I think you do, and it is why I find Techdirt’s defense of these types of articles and AI in general disgusting.
Re: Re: Re:
This is a misrepresentation of what she wrote.
I agree there is a bubble. It is likely to burst and a lot of people will lose a lot of money. But that won’t make the underlying technology go away.
I mean, that’s literally false. The technology is absolutely overhyped and oversold, but it has been providing tremendous benefits to many people, myself included.
I mean, the whole point of this article is to suggest why that’s probably not going to be the case.
As I explained in my article yesterday: https://www.techdirt.com/2026/02/10/how-to-think-about-ai-is-it-the-tool-or-are-you/ it’s true that some people think that’s how AI will be used, but I really don’t think it will do very much replacing of people. It’s not good at that.
Of course. I’ve been on his podcast. I think he’s fundamentally wrong about almost everything related to AI. I do not think he understands how the technology is used at all. But he’s pot committed. He’s raised his profile entirely based on his prediction that it has no positive uses and he can’t change course now, even as more people are seeing positive uses of it.
It’s right to be skeptical of the hype. It’s correct that many people are using the tech poorly. But that was also true of the internet itself.
More and more people are seeing real value out of it, especially with the release of the recent coding agents. I know you will think that’s just hype and nonsense, and that’s cool for you. But it is legitimately helpful to me every damn day. So when you say no one can get any value out of it, I know that’s not true.
Re: Re: Re:2
You are willfully ignoring the point.
‘in the hope’ is the key. These CEOs and shareholders are hoping to replace people, and trying to force that desire to become truth as fast as possible by throwing resources at it. They’re clinging to magical thinking for as long as each individual has the resources to burn. You’re right, LLMs† are not good at replacing people.
The people in charge of multiple large companies refuse to accept that.
They’re causing cascading damage in pursuit of their delusion, in real time. People have been laid off from jobs that LLMs can’t actually replace them in, and there’s no work in their trained field until the delusion shatters. (This kills people.) Homes and infrastructure have been torn down to build data warehouses and digital mines – that might not even finish getting built. (This kills people.) Clean water sources are being poisoned, in places that cannot afford to filter or clean it up, affecting people who didn’t choose this risk. (This kills people, Mike.)
The examples of damage are plentiful. The victims are too numerous to list by name anymore.
But sure, I bet that generated code is totally fun and worth it.
†your inconsistent forced conflating of LLMs/‘Generative AI’/ChatGPT-style programs with the rest of the Machine Learning categories no longer grants LLMs false legitimacy – it tarnishes those fields instead. You know which the commenters here are talking about.
Stop playing at discourse, or fall silent already.
Re: Re: Re:3
I have been quite clear that CEOs who think it does more than it does are wrong and the market will take care of that when they fail.
No. This is legitimately interesting and important technology and I will not stop talking about the implications (both good and bad) of it. Or how to think about improving the good while mitigating the bad.
I get that some of you wish to stick your head in the sand and pretend the tech will magically go away. I think that is a very silly position.
Techdirt has always been about discussing the implications of technological change and innovation. And I’m not going to stop doing that just because you don’t like the fact that people want to actually discuss both the good and bad aspects of it.
Re: Re: Re:4
I don’t think it will magically go away. I think thermodynamics will take care of that in the long run, but only after ruinous quantities of our planet’s dwindling resources have been squandered. I’m done hearing predictions that do not account for entropy and declining energy supplies.
Re: Re: Re:4
And what will the market do, right now, to mitigate the suffering currently inflicted by CEOs who have not yet failed?
Acknowledging that these CEOs are wrong does not address the point people are trying to get through to you. This technology is not being used in a vacuum, so we won’t discuss it as if it is.
Why should we wait until they run out of resources? The CEOs are harming people, right now, because of their delusion. If someone delusionally believes they’re literally God and spends all their money on “I’m the real God” posters to prove it, whatever. If that person assaults their neighbors, breaks into others’ homes to put up the posters, and kicks puppies to prove it… We would not wait until they run out of money, or posters, or the delusion shatters on its own. We’d intervene, to minimize the destruction and suffering inflicted on others.
This guest post is verbose fluff without any relevant interventions or implications of the technology – and they aren’t even bothering to show up and defend their work. Your comments are littered with bad faith argument tactics. You are not engaging in discourse, you’re playing at it.
Re: Re: Re:5
These things do take time, and I’m sorry but the rest of the world not acting in the way you think is right is not a sign of bad things. It just means you’re not the dictator of the world. That’s probably good.
The thing that I am doing is trying to call out both the good and bad uses of AI to encourage more of the good and less of the bad. It doesn’t always succeed, but honestly people who insist that speaking out is a failure if any bad things continue are so boring and disconnected from reality they tend not to be worth anyone’s time.
Re: Re: Re:4
You should actually discuss the bad aspects of it once in a while instead of coming off as an increasingly desperate shill for AI, then.
Re: Re: Re:5
You should read Techdirt more often.
https://www.techdirt.com/2025/09/11/business-insider-pulls-40-essays-after-getting-conned-by-ai-using-scammers/
https://www.techdirt.com/2024/12/10/washington-post-ingeniously-leverages-ai-to-undermine-history-and-make-search-less-useful/
https://www.techdirt.com/2025/02/20/bbc-study-finds-ai-chatbots-routinely-incapable-of-basic-news-synopses/
https://www.techdirt.com/2025/11/04/fox-news-fell-for-ai-generated-rage-bait-rewrote-story-to-pretend-it-didnt/
https://www.techdirt.com/2023/10/27/the-ai-journalism-revolution-continues-to-go-poorly-as-gannett-accused-of-making-up-fake-humans-to-obscure-lazy-ai-use/
https://www.techdirt.com/2024/06/12/yet-another-company-caught-using-ai-to-quietly-create-fake-journalists-and-fake-journalism/
https://www.techdirt.com/2025/03/05/the-la-times-political-rating-ai-is-a-silly-joke-aimed-at-validating-wealthy-media-ownerships-inherent-bias/
https://www.techdirt.com/2025/01/29/apple-has-to-pull-its-ai-news-synopses-because-they-were-routinely-full-of-shit/
https://www.techdirt.com/2023/11/21/ai-is-supercharging-our-broken-healthcare-systems-worst-tendencies/
https://www.techdirt.com/2025/05/21/whoops-chicago-sun-times-publishes-ai-generated-summer-guide-full-of-made-up-recommended-books-nonexistent-people/
https://www.techdirt.com/2023/09/28/silicon-valley-starts-hiring-poets-to-fix-shitty-writing-by-undercooked-ai/
https://www.techdirt.com/2023/09/08/g-o-media-gives-another-crash-course-on-perils-of-replacing-human-journalists-with-half-baked-ai/
And that’s just the first page of results on a quick search.
Maybe, just maybe, we can publish about both the good and the bad of AI and it doesn’t make us “desperate shills for AI”?
Re: Re: Re:6
When are the articles on how companies are swapping their supply chains to make computer parts for data centers, driving up the cost of components that consumers would use to build their own PCs, and delaying and raising the prices of things that consumers actually want?
Valve has had to delay the release date and price announcement of their new hardware (Steam Machine, Steam Frame, etc.) just when a concrete launch seemed close, and their already-existing Steam Deck is completely sold out, with speculation that the ongoing RAM and component shortages from said funneling to data centers have effectively taken something people actually want, and could previously buy, off the market.
And as I’ve mentioned in prior articles: How do we get to that Resonant Computing future where people use their own machines to build their own bespoke software if they can’t afford their own computers? And the AI tools you’re telling people to use to take matters into their own hands and build their own bespoke software like you did, those tools are helping to fuel the crisis.
Re: Re: Re:7
See, if you were here discussing this in good faith, you would at least admit that you were full of shit in claiming we never cover the problems with AI instead of immediately moving the goalposts.
As for the specific requests, fuck off. We’re a small fucking team. We cover maybe 5% of what I’d love to be able to cover. If I don’t cover your pet topic… and that makes you claim I’m not serious about a topic… fuck off. Unless you’re willing to fund us to hire more writers… we do what we can.
I don’t care that the Steam Deck is sold out. Honestly not a big deal to me and I only write what I care about. If one of the other writers brings me a story on that topic then we’ll see, but honestly… just seems like a temporary supply chain issue that will get worked out.
Re: Re: Re:8
I’m not even the same AC you replied to with the string of links. I know y’all have covered issues with AI in the past. But the RAM price increases and component shortages have been happening for at least a couple of months, very visibly.
And it’s not just the Steam Deck, Steam Machine, and the like at issue, and not just toys & games. RAM and other computer components are getting jacked up in price thanks to companies swapping their supply chains to serve data centers for AI & more.
Yeah, maybe the “free market” will see this as a problem and work it out. But given the direction things have been going, with corporations hating that people actually own things, it feels to me like they’d be fine with all of us doing stuff via dumb terminals that have to connect to VMs and cloud machines.
Re: Re:
For every one issue they disagree on, they agree on a dozen more, so let’s not kid ourselves that these are good people with great takes; they recently attacked Donald Trump for even paying lip service to a social democrat policy position because it’s bad for the lender class, for crying out loud. They come off as a mouthpiece for the same old right-wing people with the same views, trying to hedge their bets on climate change so that when there’s a reckoning they can pump money into this swarming mass of ALEC-linked clowns instead.
Would you be platforming their take on any other issue but AI? Because it seems like on most issues these are the exact types of people who would be getting called out here for their bad takes, the people they’re connected to, or their lobbying efforts.
The R Street Institute is an American center-right think tank… The institute’s stated mission is to “engage in policy research and outreach to promote free markets and limited, effective government.” link
In case you’re wondering how this article purportedly about AI ended up at “any field, such as professional licensing, limits on free expression, overly zealous IP laws, and regulatory bars on the entry…particular barriers in question have nothing to do with AI itself”
ai;dr
Re:
I’m going to steal that. But I’m going to leave a funny vote too because I’m not a jerk.
Re:
I know this term made the rounds today… and beyond the amusing nature of the repurposed acronym, it honestly confuses me. Is it that you don’t want to read any articles that try to grapple with the impact of AI? Would you prefer a Techdirt that doesn’t touch on a topic that is impacting much of the tech world?
(Realizing this can be read in an accusatory manner, and it’s not meant to be… I’m genuinely curious what you are trying to get across with this comment).
Re: Re:
It’s that nobody wants to read slop generated by AI.
Re: Re: Re:
Of course no one wants to read slop.
But what makes you say this is slop?
Re: Re:
In this specific case, I just wanted to have a little fun with the term and help it circulate. 😁
But since I’m here and I don’t think I’ve made it as clear as possible, this is for the record: I’m in favor of uses of what we colloquially call “AI” that aren’t necessarily about replacing humans in important fields or “democratizing creativity” in terms of replacing human-made art. The use of “AI” in medical fields to catch diagnoses in ways that can help improve treatments and save lives, for example, is a use that I’m all for. All that “we can generate art, so why bother paying artists” type of shit is what I stand wholeheartedly against.
Re: Re: Re:
Sure. I get that, and my piece yesterday addressed many of those points exactly.
And while “ai;dr” is cute, it feeds into the narrative that many other commenters here push of “all ai evil, must die, anyone who talks about it seriously is a shill and evil and must die” and… that’s so fucking pointless. I mean this whole thread is full of pathetic non-replies that are just hating on AI, rather than addressing the substance of the article. So I was a bit disappointed that it felt like you were contributing to that.
Re: Re: Re:
I wasn’t really talking about all automated machine learning and I think Mike knows that. I am speaking specifically about generative AI used to replace humans, but Mike seems to think that won’t happen because… idk, he thinks that because it can’t actually replace humans, CEOs somehow wouldn’t collaborate with each other to lower expectations for the quality of the media we consume in order to further serf-ify their customers and wring every bit of ad money out of us as possible.
But… why would they not do that? I don’t live in a vacuum. CEOs all over the place are salivating at the idea of no workforce, burning billions of dollars in the hopes of destroying the value of human labor for most people. And I’m supposed to ignore the knowledge of the world I have and not assume they’ll continue to be disgusting ghouls who would LITERALLY DESTROY THE PLANET rather than pay their fair share or pay people for the labor that MAKES THEM MONEY? But whatever. You’re fine, Mr. Stone. I’m still upset about Mike.
Re: Re: Re:2
Oh, I totally think they’ll do that. Absolutely. But that’s actually why I want people to understand both what the tech can and cannot do in order to be able to fight back. And that’s why I think using the tech thoughtfully to fight back itself is useful. This is why I’ve been moving so much stuff from the cloud onto my own machine that I control with my own software. Claude Code lets me do that. It’s giving me back control. I’m getting rid of all sorts of cloud services I used to pay for, because I don’t need them any more. I’m giving myself more autonomy.
Re: Re: Re:3
What happens when all the hardware that people could run all that software on becomes debilitatingly expensive thanks to parts companies shifting all their manufacture to supply data centers? You can afford your own machine you control, but it is factually getting a lot pricier thanks to the data infrastructure that Claude Code and more requires.
Re: Re:
You know what they mean. You know, that’s why you set up a series of false equivalencies and strawmen in your comment.
Are you done playing yet?
Re:
I mean, if someone can’t be bothered to write it, why should I be bothered to read it?
Piss off.
My problem with AI is it’s another blatant fucking scam directly in line with crypto and NFTs.
Once again we’re getting techbroligarch half-baked bullshit shoved down all our throats, without the damnedest bit of regard for any actual needs or desires, let alone whether or not there is even enough legitimate utility in it.
Come back with something I actually want and could get use out of. It’s being pushed, not being sold. It’s not really a product or service.
AI isn’t actually for us.
It’s just their latest attempt to discover an infinite money trick.
And until we tax and regulate them into fucking oblivion like we should, they will continue to fuck everyone’s lives over in their attempt to find one.
Being billionaires isn’t enough for these broken, hollow creatures, and when they’re all trillionaires, that won’t be enough either.
Eating the rich is good for them, too.
It’s an intervention.
Re:
Microsoft shoved their regurgitation engine into fucking Notepad.
I tend to think that the enthusiasm with which someone pushes ‘AI’ correlates directly to how much time they spend bullshitting someone in their job, because bullshitting people is the one thing these machines seem to be able to do well.
Re: Re:
Notice how much these EpsteinAI Bros struggle with the concept of consent.
Re: Re:
I never thought I’d see the day when the words “zero-day RCE exploit in Notepad” would show up in my daily news.
Reducing the impact of AI to a set of abstract value propositions obscures the potential harms it could have in the real world.
At this point major insurers are looking to drop coverage of AI because they can’t measure the risk and they’ve already had to pay out some hefty dollars.
Given the lack of prudence we’ve demonstrated so far is it a surprise that people who professionally assess risk are ready to abandon this line of business?
Why Policymakers Must Allow New Value to Emerge
One of the most important insights here is the idea that societies need space to legitimize new forms of value rather than prematurely closing them off.
In highly regulated industries like aviation, logistics, procurement, and finance, we’re already seeing how AI is reshaping task structures faster than institutions can comfortably adapt. The real risk isn’t simply job displacement — it’s whether workforce systems, training pipelines, and regulatory frameworks allow new skill pathways to emerge before older ones contract.
History shows that when innovation outpaces institutional flexibility, adjustment becomes painful. That’s why continuous reskilling and adaptive professional education matter so much during periods of compressed technological change. We’re seeing this especially with AI adoption across operational sectors (for example, how structured AI skills training is becoming embedded into professional development: The Complete Guide to AI Prompt Courses – Wingsway Training Institute).
Policymakers can’t predict which new forms of work will endure — but protecting the ecosystem that allows experimentation, skill development, and institutional adaptation may ultimately be the most important safeguard.
Stop trying to make fetch happen.
Why is everyone upset? This is just new technology. It doesn’t destroy jobs like buggy whip manufacturers; it creates new jobs like car manufacturing plants.
New tech beats out the old, but new jobs are always created. I mean, who is going to program these new machines? Learn to code. Right?
Generally I wouldn’t care about AI at all, and I didn’t… until it started hoovering up billions and billions of dollars (every month?) for hardware, power, and resources and more resources. At this point AI is a cancer consuming everything it touches.
Why is techdirt publishing so many articles that boil down to: AI is good actually
Re:
I mean, we’re not. We’ve published plenty of articles criticizing AI and plenty of articles exploring the potential implications, both good and bad, of AI. If you want to count exploring the potential benefits of AI (always written with caveats that the tech is not perfect and has plenty of challenges) as “boiling down to AI is good actually,” then, well, that’s a you problem, I’m afraid.
But we discuss the implications of innovation and technological change. If you wish to put your head in the sand rather than deal with the actual real world implications, that’s on you.
AI is a tool. It has many uses that are good and many uses that are bad. Like many tools it has many externalities, some of which are likely to be positive, some of which are already clearly negative.
A serious person is able to discuss all of those. An unserious person insists that this tech (which many people already find value in) has no value at all.
I like to have serious conversations about the implications of tech. If you don’t want to, head over to Reddit or whatever.
Re: Re:
These serious people who are finding value seem to be ignoring subsidies and externalities.
That sure is a lot of words to make a bog-standard “buggy whips” analogy.
Re:
Like, I don’t really disagree with the overall thrust of the article — if we’re going to add restraints on AI (either governmental or contractual) we need to be wary of unintended consequences — but I can’t say as I’m impressed by selective historical comparisons.
AI need not be the enemy, but it currently is
I oppose AI, but that is because I do not trust our government to ensure the technology becomes anything more than another way for wealthy elites to strip the world for parts. If we are to embrace AI, the government must do whatever is necessary to prevent it from screwing over commoners. If I lose my job because of AI, the government must support me financially, help me get a new job, etc. I don’t think anyone who’s been paying attention these past few years expects the Trump regime to do that–God knows a lot of the companies trying to use AI aren’t. None of the advances AI promises will mean anything if only a small group of parasites at the top reap the benefits; indeed, if AI just makes the rich richer at the expense of everyone else, then I say, “Death to AI.”
Re:
Why is your job and yourself now so much more important than the buggy whip maker?
Re: Re:
I think you misunderstood me. I still need to eat. If my employer has decided to replace me with a machine, I need a new job. Finding a new job takes time, and not everyone has a reserve of money to live off of while they search. If I need new skills to be able to get a decent job, who’s going to pay for my training? If a particular line of work becomes obsolete, someone–be it the government or the company that laid off all those people–must support the newly-unemployed and help them adjust. Unemployed people aren’t going to just lie down and die for the good of the stock market. As things stand, I don’t see any rich AI bros making any sort of meaningful effort to help those they’re laying off. If a new technology only helps wealthy shareholders at the expense of everyone else, then I’m inclined to call it a detriment to society rather than a boon.
What value?
Is it the value of jacking up everyone’s electric rates to subsidize billionaire nonsense again?
Or is it the value of finding new ways to pump billions more tons of carbon dioxide into the atmosphere?
I’d love to read an article on Techdirt about how parts companies are catering to AI bros making data centers and driving up the cost of components like RAM and more for the average consumer. How do we get to that Resonant Computing future if nobody can afford to build PCs anymore and has to rent laptops or cloud computing from a corporation?
I’m spending thousands of dollars to get my bachelor’s degree and would very much like for there to be entry-level jobs available for me next year after I graduate. The CEOs and tech writers like Mike shoving AI down everyone’s throats and wanting the tech to be “here to stay” do not actually want me to get a job. When the bubble bursts, I will either 1) get laid off, if I’ve found a place to work with my degree, or 2) find it even more difficult to get a job in a major recession.
“Sure, your degree and job prospects with that degree may be ruined by AI, but have you thought of becoming a vlogger or streamer (which you may not have the skills or charisma for) and generating value that way?”
Re:
The world does not owe CS grads 6 figure salaries.
I liked the article.