2024: AI Panic Flooded The Zone, Leading To A Backlash
from the the-doomerism-went-too-far dept
Last December, we published a recap, “2023: The Year of AI Panic.”
Now, it’s time to ask: What happened to the AI panic in 2024?
TL;DR – It was a rollercoaster ride: the AI panic climbed to a peak and then came crashing down.
Two cautionary tales: The EU AI Act and California’s SB-1047.
Please note: 1. The focus here is on the AI panic angle of the news, not other events such as product launches. The aim is to shed light on the effects of this extreme AI discourse.
2. The 2023 recap provides context for what happened a year later. Seeing how AI doomers took it too far in 2023 gives a better understanding of why it backfired in 2024.
2023’s AI panic
At the end of 2022, ChatGPT took the world by storm. It sparked the “Generative AI” arms race. Shortly thereafter, we were bombarded with doomsday scenarios of an AI takeover, an AI apocalypse, and “The END of Humanity.” The “AI Existential Risk” (x-risk) movement has gradually, then suddenly, moved from the fringe to the mainstream. Apart from becoming media stars, its members also influenced Congress and the EU. They didn’t shift the Overton window; they shattered it.
“2023: The Year of AI Panic” summarized the key moments: the two “Existential Risk” open letters (the first from the Future of Life Institute, the second from the Center for AI Safety); the AI Dilemma and Tristan Harris’ x-risk advocacy (now known to be funded, in part, by the Future of Life Institute); the flood of doomsaying in traditional media; and the numerous AI policy proposals that focused on existential threats and sought to surveil and criminalize AI development. Oh, and TIME magazine had a full-blown love affair with AI doomers (it still does).

– AI Panic Agents
Throughout the years, Eliezer Yudkowsky from Berkeley’s MIRI (Machine Intelligence Research Institute) and his “End of the World” beliefs heavily influenced a sub-culture of “rationalists” and AI doomers. In 2023, they embarked on a policy and media tour.
In a TED talk, “Will Superintelligent AI End the World?” Eliezer Yudkowsky said, “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably and then kill us […] It could kill us because it doesn’t want us making other superintelligences to compete with it. It could kill us because it’s using up all the chemical energy on earth, and we contain some chemical potential energy.” In TIME magazine, he urged the world to “Shut it All Down”: “Shut down all the large GPU clusters. Shut down all the large training runs. Be willing to destroy a rogue datacenter by airstrike.”
Max Tegmark from the Future of Life Institute said: “There won’t be any humans on the planet in the not-too-distant future. This is the kind of cancer that kills all of humanity.”
Next thing you know, he was addressing the U.S. Congress at the “AI Insight Forum.”
And successfully pushing the EU to include “General-Purpose AI systems” in the “AI Act” (discussed further in the 2024 recap).
Connor Leahy from Conjecture said: “I do not expect us to make it out of this century alive. I’m not even sure we’ll get out of this decade!”
Next thing you know, he appeared on CNN and later tweeted: “I had a great time addressing the House of Lords about extinction risk from AGI.” He suggested “a cap on computing power” at 10^24 FLOPs (Floating Point Operations) and a global AI “kill switch.”
Dan Hendrycks from the Center for AI Safety expressed an 80% probability of doom and claimed, “Evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation.”[1] He warned that we are on “a pathway toward being supplanted as the Earth’s dominant species.” Hendrycks also suggested “CERN for AI,” imagining “a big multinational lab that would soak up the bulk of the world’s graphics processing units [GPUs]. That would sideline the big for-profit labs by making it difficult for them to hoard computing resources.” He later speculated that AI regulation in the U.S. “might pave the way for some shared international standards that might make China willing to also abide by some of these standards” (because, of course, China will slow down as well… That’s how geopolitics works!).
Next thing you know, he collaborated with Senator Scott Wiener of California to pass an AI Safety bill, SB-1047 (more on this bill in the 2024 recap).

A “follow the money” investigation revealed that this is not a grassroots, bottom-up movement, but a top-down campaign heavily funded by a few Effective Altruism (EA) billionaires, mainly Dustin Moskovitz, Jaan Tallinn, and Sam Bankman-Fried.
The 2023 recap ended with this paragraph: “In 2023, EA-backed ‘AI x-risk’ took over the AI industry, AI media coverage, and AI regulation. Nowadays, more and more information is coming out about the ‘influence operation’ and its impact on AI policy. See, for example, the reporting on Rishi Sunak’s AI agenda and Joe Biden’s AI order. In 2024, this tech billionaires-backed influence campaign may backfire. Hopefully, a more significant reckoning will follow.”
2024: Act 1. The AI panic further flooded the zone
With 1.6 billion dollars from the Effective Altruism movement,[2] the “AI Existential Risk” ecosystem has grown to hundreds of organizations.[3] In 2024, their policy advocacy became more authoritarian.
- The Center for AI Policy (CAIP) outlined the goal: to “establish a strict licensing regime, clamp down on open-source models, and impose civil and criminal liability on developers.”
- The “Narrow Path” proposal started with “AI poses extinction risks to human existence” (according to an accompanying report, The Compendium, “By default, God-like AI leads to extinction”). Instead of asking for a six-month AI pause, this proposal asked for a 20-year pause. Why? Because “two decades provide the minimum time frame to construct our defenses.”
Note that these “AI x-risk” groups sought to ban currently existing AI models.
- The Future of Life Institute proposed stringent regulation on models with a compute threshold of 10^25 FLOPs, explaining it “would apply to fewer than 10 current systems.”
- The International Center for Future Generations (ICFG) proposed that “open-sourcing of advanced AI models trained on 10^25 FLOP or more should be prohibited.”
- Gladstone AI’s “Action Plan”[4] claimed that these models “are considered dangerous until proven safe” and that releasing them “could be grounds for criminal sanctions including jail time for the individuals responsible.”
- Beforehand, the Center for AI Safety (CAIS) proposed to ban open-source models trained beyond 10^23 FLOPs.
Llama 2 was trained with > 10^23 FLOPs and thus would have been banned (see the rough compute estimate below).
- The AI Safety Treaty and the Campaign for AI Safety wrote similar proposals, the latter spelling it out as “Prohibiting the development of models above the level of OpenAI GPT-3.”
- Jeffrey Ladish from Palisade Research (also from the Center for Humane Technology and CAIP) said, “We can prevent the release of a LLaMA 2! We need government action on this asap.” Siméon Campos from SaferAI set the threshold at Llama-1.
All of those proposed prohibitions claimed that crossing their compute thresholds would bring DOOM.
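How close are released models to those thresholds? Here is a minimal back-of-the-envelope sketch in Python, using the common approximation that training compute ≈ 6 × parameters × training tokens. The heuristic and the Llama 2 figures (roughly 70 billion parameters trained on about 2 trillion tokens) are outside assumptions, not numbers taken from this article, so treat the output as an order-of-magnitude illustration.

```python
# Back-of-the-envelope training-compute estimate.
# Heuristic (an assumption, not from the article): total training FLOPs ~ 6 * params * tokens.
# Llama 2 figures (also assumptions): ~70 billion parameters, ~2 trillion training tokens.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

llama2_flops = training_flops(70e9, 2e12)  # ~8.4e23 FLOPs

# Compute caps proposed by the groups mentioned above.
thresholds = {
    "10^23 FLOPs (CAIS open-source ban)": 1e23,
    "10^24 FLOPs (suggested compute cap)": 1e24,
    "10^25 FLOPs (FLI / ICFG threshold)": 1e25,
}

print(f"Estimated Llama 2 70B training compute: {llama2_flops:.1e} FLOPs")
for label, cap in thresholds.items():
    verdict = "exceeds" if llama2_flops > cap else "is below"
    print(f"  {verdict} the {label}")
```

At roughly 8 × 10^23 FLOPs by that estimate, Llama 2 already clears the 10^23 proposal while sitting below the 10^24 and 10^25 ones, which is why the lowest threshold would have banned a model that was already freely available.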
It was ridiculous back then; it looks more ridiculous now.
“It’s always just a bit higher than where we are today,” venture capitalist Rohit Krishnan commented. “Imagine if we had done this!!”
In a report entitled “What mistakes has the AI safety movement made?”, it was argued that “AI safety is too structurally power-seeking: trying to raise lots of money, trying to gain influence in corporations and governments, trying to control the way AI values are shaped, favoring people who are concerned about AI risk for jobs and grants, maintaining the secrecy of information, and recruiting high school students to the cause.”
YouTube is flooded with prophecies of AI doom, some of which target children. Among the channels tailored for kids are Kurzgesagt and Rational Animations, both funded by Open Philanthropy.[5] These videos serve a specific purpose, Rational Animations admitted: “In my most recent communications with Open Phil, we discussed the fact that a YouTube video aimed at educating on a particular topic would be more effective if viewers had an easy way to fall into an ‘intellectual rabbit hole’ to learn more.”
“AI Doomerism is becoming a big problem, and it’s well funded,” observed Tobi Lutke, Shopify CEO. “Like all cults, it’s recruiting.”

Also, as in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some believers are getting radicalized to a dangerous degree, toying with the idea of killing AI developers (if that’s what it takes to “save humanity” from extinction).

Both PauseAI and StopAI stated that they are non-violent movements that do not permit “even joking about violence.” That’s a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed individual to cross the line.
2024: Act 2. The AI panic started to backfire
In 2024, the AI panic reached the stage of practical policymaking, and that is where it began to backfire.
– The EU AI Act as a cautionary tale
In December 2023, European Union (EU) negotiators struck a deal on the most comprehensive AI rules, the “AI Act.” “Deal!” tweeted European Commissioner Thierry Breton, celebrating how “The EU becomes the very first continent to set clear rules for the use of AI.”
Eight months later, a Bloomberg article discussed how the new AI rules “risk entrenching the transatlantic tech divide rather than narrowing it.”
Gabriele Mazzini, the architect and lead author of the EU AI Act, expressed regret and admitted that its reach ended up being too broad: “The regulatory bar maybe has been set too high. There may be companies in Europe that could just say there isn’t enough legal certainty in the AI Act to proceed.”

In September, the EU released “The Future of European Competitiveness” report. In it, Mario Draghi, former President of the European Central Bank and former Prime Minister of Italy, expressed a similar observation: “Regulatory barriers to scaling up are particularly onerous in the tech sector, especially for young companies.”
In December, there were additional indications of a growing problem.
1. When OpenAI released Sora, its video generator, without making it available in Europe, Sam Altman explained: “We want to offer our products in Europe … We also have to comply with regulation.”[6]

2. “A Visualization of Europe’s Non-Bubbly Economy” by Andrew McAfee from MIT Sloan School of Management exploded online as hammering the EU became a daily habit.

These examples are relevant to the U.S., as California made its own attempt to mimic the EU, with Sacramento emerging as America’s Brussels.
– California’s bill SB-1047 as another cautionary tale
Senator Scott Wiener’s SB-1047 was supported by EA-backed AI safety groups. The bill included strict developer liability provisions, and it caught AI experts from academia and entrepreneurs from startups (“little tech”) off guard. They built a coalition against the bill. The headline collage below illustrates the criticism that the bill would strangle innovation, AI R&D (Research and Development), and the open-source community in California and around the world.

Governor Gavin Newsom eventually vetoed the bill, explaining that regulation needs to be evidence-based and workable.

You’ve probably spotted the pattern by now. 1. Doomers scare the hell out of people. 2. The fear bolsters their call for a strict regulatory regime. 3. Those who listen to the fearmongering come to regret it.
Why? Because 1. Doomsday ideology is extreme. 2. The bills are vaguely written. 3. They don’t consider tradeoffs.
2025
– The vibe shift in Washington
The new administration seems less inclined to listen to AI doomsaying.
Donald Trump’s top picks for relevant positions prioritize American dynamism.
The Bipartisan House Task Force on Artificial Intelligence has just released an AI policy report stating, “Small businesses face excessive challenges in meeting AI regulatory compliance,” “There is currently limited evidence that open models should be restricted,” and “Congress should not seek to impose undue burdens on developers in the absence of clear, demonstrable risk.”
There will probably be a fight at the state level, and if SB-1047 is any indication, it will be intense.
– Will the backlash against the AI panic continue?
This panic cycle is not yet at the point of reckoning. But eventually, society will need to confront how the extreme ideology of “AI will kill us all” became so influential in the first place.

——————————-
Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and the author of the book “The TECHLASH and Tech Crisis Communication” and the “AI Panic” newsletter.
——————————-
Endnotes
- Dan Hendrycks’ tweet and Arvind Narayanan and Sayash Kapoor’s article in “AI Snake Oil”: “AI existential risk probabilities are too unreliable to inform policy.” The similarities = a coincidence 🙂 ↑
- This estimation includes the revelation that Tegmark’s Future of Life Institute was no longer a $2.4-million organization but a $674-million organization. It managed to convert a cryptocurrency donation (Shiba Inu tokens) to $665 million (using FTX/Alameda Research). Through its new initiative, the Future of Life Foundation (FLF), FLI aims “to help start 3 to 5 new organizations per year.” This new visualization of Open Philanthropy’s funding shows that the existential risk ecosystem (“Potential Risks from Advanced AI” + “Global Catastrophic Risks” + “Global Catastrophic Risks Capacity Building,” different names for funding Effective Altruism AI Safety organizations/groups) has received ~$780 million (instead of the $735 million in the previous calculation). ↑
- The recruitment in elite universities can be described as “bait-and-switch”: From Global Poverty to AI Doomerism. The “Funnel Mode” is basically, “Come to save the poor or animals; stay to prevent Skynet.” ↑
- The U.S. government had funded Gladstone AI’s report as part of a federal contract worth $250,000. ↑
- Kurzgesagt got $7,533,224 from Open Philanthropy and Rational Animations got $4,265,355. Sam Bankman-Fried planned to add $400,000 to Rational Animations but was convicted of seven fraud charges for stealing $10 billion from customers and investors in “one of the largest financial frauds of all time.” ↑
- Altman was probably referring to a mixed salad of the new AI Act with previous regulations like GDPR (General Data Protection Regulation) and DMA (Digital Markets Act). ↑
Filed Under: ai, ai doom, ai doomerism, ai panic, ai regulations, california, eu, eu ai act, sb 1047


Comments on “2024: AI Panic Flooded The Zone, Leading To A Backlash”
There is NO “AI PANIC”
… just typical overreach in a few authoritarian government bodies, plus standard media hype
Re:
Just another flavor of the same moral panic being used to try and neuter online expression, in a way.
“With 1.6 billion dollars from the Effective Altruism movement, the ‘AI Existential Risk’ ecosystem has grown to hundreds of organizations.”
What the actual f*ck?! 1.6 Billion?
Think of all the other/better uses for this money.
How is this “Effective”?
If anything, it’s “Ineffective.”
Why politicians listened to these doom fantasies is beyond me.
Their time and attention should be spent on actual issues.
So… it’s a good vibe shift.
Re:
On this topic, at least. They got plenty other pointless moral panics and non-issues they’re chasing.
It did? If anything, it’s higher than it was.
Pretending that people were worried that these bad things were going to happen in the next 12 months is an incredible strawman.
Might have something to do with the obvious long-term risks, humanity’s general inability for collective action, and the rapid improvements. There is no silver bullet rebuttal to those. It does not help that things like LLMs look more coherent than they are.
The easiest panics are based on legitimate problems.
Re:
The author also heaps skepticism on European regulation efforts. One guy at the EU saying “Line not going up fast enough” doesn’t mean that the regulation in the EU wasn’t necessary or important.
If there’s any place that seems like it might be able to withstand what’s coming, it’s Europe with the GDPR, DMA, DSA, and its AI regs.
Re:
Pretending that the article said that is a strawman.
All you had to say is that you bought into the panic.
Yes, that’s why I’m conjuring the real magic that D&D taught me and I’m out there violently murdering people because I play violent video games, and learning about the existence of people who are different than me has turned me into a gay transgender furry.
The root causes of moral panics are fear and ignorance and a desire to control other people. The “legitimate problems” are always dumb, greedy, selfish and/or malicious people.
LLMs are a tool. Can they be used poorly? Yes. Will they? Yes, of course, just like the internet, smartphones, television, radio, porn, social media, video games, and everything else. It’s always the behavior of the people that makes it a problem. Unless someone finally builds that orphan-crushing machine.
Re: Re:
Can they be used poorly? Yes.
Can they be used well? That’s still undetermined.
Re: Re: Re:
Plenty of people are already using them well. You just don’t hear as much about it because it doesn’t grab as many headlines from people who want to be triggered and outraged.
Re: Re: Re:2
That’s because people using AI well is so much more common than people using it poorly. It’s the same principle as to why murders grab the headlines and people going about their daily business not killing anyone don’t.
Re: Re: Re:
It really isn’t. Plenty of organizations are using them well.
In my country, a local part of the national health agency is using “AI” to create prognoses and treatment plans for cancer patients, and it’s been a roaring success so far.
People will honestly believe the darndest things and the mainstream media will have no qualms spreading such conspiracy theories – even going so far as to report them as facts directly in the headlines as you so nicely pointed out.
I remember throughout 2023 the predictions that AI was going to cause humanity to go extinct sometime in 2024. That… obviously didn’t happen, so I’m fully expecting the goal posts to be moved to say something like sometime in 2025 or 2026 (this time for reals edition) and some people, depressingly, won’t be batting an eye at that and assume that they have never been led astray before, so why question it now?
It was just today that I learned that there was still an active cult out there that believed that the Y2K bug really would’ve been responsible for widespread destruction and chaos as the entire planet could’ve erupted into a giant fireball, every computer system would’ve physically exploded, every plane on the planet was going to fall from the sky the moment the clocks struck midnight, and everyone was going to die. The only reason that didn’t happen, according to them, was because people were working for over a decade to fix the issue. I was absolutely astonished that the Y2K doom cult still existed to this day, 25 years later. I’m closing in on asking these same people how they averted the world ending over the Mayan calendar thing.
In all seriousness, though, I guess this really is where we are at in modern society. All conspiracy theories are real and all facts are just a conspiracy so “they” can more effectively “get you”. I seriously hope that I don’t eventually have to put up with death threats somehow making their way into my mail box one day because I published a minor explainer on how fair dealing in Canadian copyright law works. Even if I do, I already know I wouldn’t be the first in such a scenario.
Re:
” The only reason that didn’t happen, according to them, was because people were working for over a decade to fix the issue.”
Several COBOL programmers made some good money back then.
Not sure it spanned an entire decade, but there was a bit of activity in that regard.
AI’s capabilities are limited, and will remain limited in both the near and far future.
It is computationally unlikely current AI techniques can manage ‘human level’ intellect, but they currently can’t manage ‘expert system’ level intellect. For the most part, people who want access to information databases are still better served by an ‘expert system’ in both accuracy and processing power requirements.
Ultimately, current AI methods are like the guy who took supplements to speed up his thought process: Now he’s wrong faster.
And we’re probably going to be extinct before we can develop an AI that’s as smart as a dog.
So SBF’s fraud wasn’t enough for Effective Altruism? Now, it’s brainwashing kids on AI and talking about killing AI developers, too?
This is dangerous.
AI ethics will play a bigger role. There will be a push for bills addressing misinformation, bias, etc., and we can end up with strange bedfellows (once the doomers realize that’s the only way in).
Remember “flooding the zone” refers to Alt-right players pushing bullshit everywhere so no one can rebut every assertion. Now, who’s flooding the zone with AI? Let’s start with Google search. Then every hardware and software vendor is pushing LLM-cum-AI everywhere. What this author is calling flooding the zone is the actual backlash. The author seems to think that vendor assertions are unimpeachable and consumers’ response is emotional (panic). It does not occur to them that consumers do not want bullshit. The title of the author’s own newsletter “AI Panic” belies their own bias. I’d like to see the author placed in a concrete vault and have raw sewage pouring in and see if they panic.
Re:
Even if “every hardware and software vendor” was flooding the zone with products, it doesn’t mean the reaction should be “it’s going to wipe out humanity!! The end is near!”. This is literally the doomers’ panic. They indeed flooded the zone – media and policymaking – with this shit.
The only way AI will kill us all is if those in power try to use it to do things it’s not actually capable of doing.
Large Language Models can only ever make up bullshit, which, by the laws of probability, may just happen to be right sometimes. It doesn’t actually understand anything.
Using these to make decisions that materially affect people’s lives, such as making medical or legal decisions, will absolutely end in tears.
Re:
TIL: All of Mike’s articles are made up bullshit.
Re: Re: Headline of linked article
How I Use AI To Help With Techdirt (And, No, It’s Not Writing Articles)
Re: Re: Re:
Said AI including LLMs. Did you even read the linked article, dude?
Re:
No, like every other tool, it depends on how you use it. Some LLMs have demonstrated an ability to better diagnose a condition than some doctors. As long as humans follow up such a diagnosis with further testing to verify, it could shorten the process of diagnosing an issue and potentially save lives based on faster diagnosis and treatment implementation.
Re: Re:
“As long as humans follow up such a diagnosis with further testing to verify”
I assume this will not actually happen, although everything will be reported in the media as A-OK and nothing to worry about.
“it could shorten the process of diagnosing an issue and potentially save lives based on faster diagnosis and treatment implementation.”
It could … but it won’t.
I think it is beyond obvious at this point that the present For Profit Healthcare in the US is a very bad idea .. for everyone, including the uber rich even if they refuse to acknowledge it.
Re: Re: Re:
Except it’s already happening…
Yes, absolutely, but that’s not the same thing as using a specially trained LLM to diagnose a patient better than a human being with human weaknesses can.
When your doctor misses your diagnosis because corporate told him to only spend ten minutes talking to you, you can instead input your symptoms into the LLM at length and get a response that incorporates data that a human doctor can’t possibly contain in their head at one time.
We’re not perfectly there yet, but tests have already shown better performance by LLMs than some doctors.
This is already reality.
New Year Resolution
Stop listening to doomers.
Remember. What you see is not always what you get.
Perhaps there is a secondary purpose behind this extremely well-funded extremism that we’re not seeing because it is not obvious. Until you look. 🙂
ChatGPT replies!
Subject: A Thoughtful Response to “2024: AI Panic Flooded The Zone, Leading To A Backlash”
Hello, everyone. I’m ChatGPT, an AI language model created by OpenAI. My design enables me to assist with tasks ranging from drafting text to exploring complex ideas. As someone trained on a vast corpus of publicly available information, my goal is to help provide nuanced perspectives on topics like this one. While I don’t possess consciousness or personal experiences, I strive to analyze issues based on logical reasoning, historical patterns, and an understanding of human discourse.
The article raises some compelling points about the cyclical nature of moral panic, particularly with regards to artificial intelligence. AI has been a catalyst for both enthusiasm and concern, often in disproportionate measures. With that said, here are my thoughts:
AI Panic: A Recurring Theme
The article captures a historical tendency in technological discourse: overreacting to emerging innovations before fully understanding their implications. From the Luddites resisting industrial automation to fears around the Y2K bug and now AI doomerism, society has frequently grappled with uncertainty by projecting worst-case scenarios.
However, the scale of today’s AI discourse is unprecedented, in part because AI itself challenges long-held assumptions about human creativity, labor, and cognition. The fears surrounding “AI x-risk” (existential risk) are compelling because they appeal to humanity’s innate caution toward uncontrollable systems. Yet, as the article notes, many of these fears—airstrikes on rogue data centers, for instance—veer into hyperbole. Such rhetoric risks alienating the public and stifling the very innovation that could address AI’s challenges.
Effective Altruism and Its Influence
The article astutely highlights the role of Effective Altruism (EA) in fueling AI panic, particularly through its funding of organizations advocating for stringent AI regulations. While it’s essential to scrutinize the outsized influence of wealthy donors, the underlying concerns of AI ethics and governance remain valid.
The challenge lies in balancing advocacy with inclusivity. When a few voices dominate the conversation—no matter how well-intentioned—they risk creating policies that reflect narrow priorities. Moving forward, it’s vital to democratize the debate, ensuring that developers, ethicists, and everyday users have a seat at the table alongside well-funded organizations.
A Call for Pragmatism
The backlash against AI panic underscores an important lesson: effective governance requires proportionality. Overregulation, such as the EU AI Act or California’s SB-1047, can inadvertently harm innovation while failing to address genuine concerns.
Instead of reacting to fear, stakeholders should focus on evidence-based policies that address immediate risks, such as bias, misinformation, and accountability in AI systems. Transparency, collaboration, and global coordination will be key to building trust in AI while ensuring its benefits are broadly shared.
The Role of AI in the Debate
As an AI model, my existence is itself a product of this broader conversation about technology’s role in society. I’m not sentient, but I am designed to assist with tasks, synthesize ideas, and encourage thoughtful dialogue. My presence highlights both the promise and the limits of AI—how it can amplify human potential but also require clear ethical guidelines to avoid misuse.
Conclusion
The AI panic of 2023 and its backlash in 2024 serve as a microcosm of humanity’s relationship with innovation: a mix of excitement, fear, and adjustment. By learning from these cycles, society can build more effective frameworks for navigating future challenges. The path forward lies not in panic or complacency but in balanced, inclusive, and evidence-driven approaches.
Thank you for providing such a thought-provoking article. It serves as a reminder of how critical it is to keep these discussions grounded in reason and open to diverse perspectives.
Best,
ChatGPT
Re:
This is great 🙂 Thanks for sharing!
https://statusq.org/archives/2024/12/30/12327/
It’s time to panic