
Posted on Techdirt - 5 December 2025 @ 12:23pm

Radicalized Anti-AI Activist Should Be A Wake Up Call For Doomer Rhetoric

A cofounder of a Bay Area “Stop AI” activist group abandoned the group’s commitment to nonviolence, assaulted another member, and made statements that left the group worried he might obtain a weapon to use against AI researchers. The threats prompted OpenAI to lock down its San Francisco offices a few weeks ago. In researching this movement, I came across statements he made about how almost any action he took would be justifiable, since he believed OpenAI was going to “kill everyone and every living thing on earth.” Those are detailed below.

I think it’s worth exploring the radicalization process and the broader context of AI Doomerism. We need to confront the social dynamics that turn abstract fears of technology into real-world threats against the people building it.

OpenAI’s San Francisco Offices Lockdown

On November 21, 2025, Wired reported that OpenAI’s San Francisco offices went into lockdown after an internal alert about a “Stop AI” activist. The activist allegedly expressed interest in “causing physical harm to OpenAI employees” and may have tried to acquire weapons.

The article did not mention his name but hinted that, before his disappearance, he had stated he was “no longer part of Stop AI.”1 On November 22, 2025, the activist group’s Twitter account posted that it was Sam Kirchner, the cofounder of “Stop AI.”

According to Wired’s reporting:

A high-ranking member of the global security team said [in OpenAI Slack] “At this time, there is no indication of active threat activity, the situation remains ongoing and we’re taking measured precautions as the assessment continues.” Employees were told to remove their badges when exiting the building and to avoid wearing clothing items with the OpenAI logo.

“Stop AI” provided more details on the events leading to OpenAI’s lockdown:

Earlier this week, one of our members, Sam Kirchner, betrayed our core values by assaulting another member who refused to give him access to funds. His volatile, erratic behavior and statements he made renouncing nonviolence caused the victim of his assault to fear that he might procure a weapon that he could use against employees of companies pursuing artificial superintelligence.

We prevented him from accessing the funds, informed the police about our concerns regarding the potential danger to AI developers, and expelled him from Stop AI. We disavow his actions in the strongest possible terms.

Later in the day of the assault, we met with Sam; he accepted responsibility and agreed to publicly acknowledge his actions. We were in contact with him as recently as the evening of Thursday Nov 20th. We did not believe he posed an immediate threat, or that he possessed a weapon or the means to acquire one.

However, on the morning of Friday Nov 21st, we found his residence in West Oakland unlocked and no sign of him. His current whereabouts and intentions are unknown to us; however, we are concerned Sam Kirchner may be a danger to himself or others. We are unaware of any specific threat that has been issued.

We have taken steps to notify security at the major US corporations developing artificial superintelligence. We are issuing this public statement to inform any other potentially affected parties.

A “Stop AI” activist named Remmelt Ellen wrote that Sam Kirchner “left both his laptop and phone behind and the door unlocked.” “I hope he’s alive,” he added.

In early December, the SF Standard reported that the “cops [are] still searching for ‘volatile’ activist whose death threats shut down OpenAI office.” Per this coverage, the San Francisco police are warning that he could be armed and dangerous. “He threatened to go to several OpenAI offices in San Francisco to ‘murder people,’ according to callers who notified police that day.”

A Bench Warrant for Kirchner’s Arrest

Searching for information that had not been reported before, I found a revealing press release. It invited reporters to a press conference scheduled for the morning of Kirchner’s disappearance:

“Stop AI Defendants Speak Out Prior to Their Trial for Blocking Doors of Open AI.”

When: November 21, 2025, 8:00 AM.

Where: Steps in front of the courthouse (San Francisco Superior Court).

Who: Stop AI defendants (Sam Kirchner, Wynd Kaufmyn, and Guido Reichstadter), their lawyers, and AI experts.

Sam Kirchner is quoted as saying, “We are acting on our legal and moral obligation to stop OpenAI from developing Artificial Superintelligence, which is equivalent to allowing the murder [of] people I love as well as everyone else on earth.”

Needless to say, things didn’t go as planned. That Friday morning, Sam Kirchner went missing, triggering the OpenAI lockdown.

Later, the SF Standard confirmed the trial angle of this story: “Kirchner was not present for a Nov. 21 court hearing, and a judge issued a bench warrant for his arrest.”

“Stop AI” – a Bay Area-Centered “Civil Resistance” Group

“Stop AI” calls itself a “non-violent civil resistance group” or a “non-violent activist organization.” The group’s focus is on stopping AI development, especially the race to AGI (Artificial General Intelligence) and “Superintelligence.” Their worldview is extremely doom-heavy, and their slogans include: “AI Will Kill Us All,” “Stop AI or We’re All Gonna Die,” and “Close OpenAI or We’re All Gonna Die!”

According to a “Why Stop AI is barricading OpenAI” post on the LessWrong forum from October 2024, the group is inspired by climate groups like Just Stop Oil and Extinction Rebellion, but focused on “AI extinction risk,” or in their words, “risk of extinction.” Sam Kirchner explained in an interview: “Our primary concern is extinction. It’s the primary emotional thing driving us: preventing our loved ones, and all of humanity, from dying.”

Unlike the rest of the “AI existential risk” ecosystem, which is often well-funded by effective altruism billionaires such as Dustin Moskovitz (Coefficient Giving, formerly Open Philanthropy) and Jaan Tallinn (Survival and Flourishing Fund), this specific group is not a formal nonprofit or funded NGO, but rather a loosely organized, volunteer-run grassroots group. They made their financial situation pretty clear when the “Stop AI” Twitter account replied to a question with: “We are fucking poor, you dumb bitch.”2

According to The Register, “STOP AI has four full-time members at the moment (in Oakland) and about 15 or so volunteers in the San Francisco Bay Area who help out part-time.”

Since its inception, “Stop AI” has had two central organizers: Guido Reichstadter and Sam Kirchner (the current fugitive). According to The Register and the Bay Area Current, Guido Reichstadter has worked as a jeweler for 20 years and holds an undergraduate degree in physics and math. His prior activism focused on climate change and abortion rights.

In June 2022, Reichstadter climbed the Frederick Douglass Memorial Bridge in Washington, D.C., to protest the Supreme Court’s decision overturning Roe v. Wade. Per the news coverage, he said, “It’s time to stop the machine.” “Reichstadter hopes the stunt will inspire civil disobedience nationwide in response to the Supreme Court’s ruling.”

Reichstadter moved to the Bay Area from Florida around 2024 explicitly to organize civil disobedience against AGI development via “Stop AI.” Recently, he undertook a 30-day hunger strike outside Anthropic’s San Francisco office.

Sam Kirchner worked as a DoorDash driver and, before that, as an electrical technician. He has a background in mechanical and electrical engineering. He moved to San Francisco from Seattle, cofounded “Stop AI,” and “stayed in a homeless shelter for four months.”

AI Doomerism’s Rhetoric

The group’s rationale included this claim (published on their account on August 29, 2025): “Humanity is walking off a cliff,” with AGI leading to “ASI covering the earth in datacenters.” 

As 1a3orn pointed out, the original “Stop AI” website said we risked “recursive self-improvement” and doom from any AI model trained with more than 10^23 FLOPs. (The group dropped this prediction at some point.) Later, in a (now deleted) “Stop AI Proposal,” the group asked to “Permanently ban ANNs (Artificial Neural Networks) on any computer above 10^25 FLOPS. Violations of the immediate 10^25 ANN FLOPS cap will be punishable by life in prison.”

To be clear, dozens of current AI models have been trained with over 10^25 FLOPs.
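
To give a sense of why such caps would sweep in widely deployed models, here is a minimal back-of-the-envelope sketch (my illustration, not the group’s math). It assumes the commonly used C ≈ 6 × N × D approximation for training compute, along with rough, publicly reported parameter and token counts; exact figures vary by source.

```python
# Back-of-the-envelope training-compute estimates, using the common
# C ≈ 6 * N * D approximation (N = parameters, D = training tokens).
# The parameter/token figures below are rough, publicly reported numbers.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute, in FLOPs."""
    return 6 * params * tokens

# Approximate model sizes and training-token counts.
models = {
    "Llama 2 70B": (70e9, 2e12),       # ~2 trillion training tokens
    "Llama 3.1 405B": (405e9, 15e12),  # ~15 trillion training tokens
}

# The compute caps floated in the proposals discussed above.
caps = {
    "10^23 FLOPs cap": 1e23,
    "10^25 FLOPs threshold": 1e25,
}

for name, (n_params, n_tokens) in models.items():
    compute = training_flops(n_params, n_tokens)
    print(f"{name}: ~{compute:.1e} FLOPs")
    for cap_name, cap in caps.items():
        verdict = "exceeds" if compute > cap else "falls under"
        print(f"  -> {verdict} the {cap_name}")
```

By this rough math, Llama 2 70B alone comes out around 8×10^23 FLOPs, well past the 10^23 cap, and the largest current open models land above 10^25.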

In a “For Humanity” podcast episode with Sam Kirchner, “Go to Jail to Stop AI” (episode #49, October 14, 2024), he said: “We don’t really care about our criminal records because if we’re going to be dead here pretty soon or if we hand over control which will ensure our future extinction here in a few years, your criminal record doesn’t matter.” 

The podcast promoted this episode in a (now deleted) tweet, quoting Kirchner: “I’m willing to DIE for this.” “I want to find an aggressive prosecutor out there who wants to charge OpenAI executives with attempted murder of eight billion people. Yes. Literally, why not? Yeah, straight up. Straight up. What I want to do is get on the news.”

After Kirchner’s disappearance, the podcast host and founder of “GuardRailNow” and the “AI Risk Network,” John Sherman, deleted this episode from podcast platforms (Apple, Spotify) and YouTube. Prior to its removal, I downloaded the video (length 01:14:14).

Sherman also produced an emotional documentary with “Stop AI” titled “Near Midnight in Suicide City” (December 5, 2024, episode #55. See its trailer and promotion on the Effective Altruism Forum). It’s now removed from podcast platforms and YouTube, though I have a copy in my archive (length 1:29:51). It gathered 60k views before its removal.

The group’s radical rhetoric was out in the open. “If AGI developers were treated with reasonable precaution proportional to the danger they are cognizantly placing humanity in by their venal and reckless actions, many would have a bullet put through their head,” wrote Guido Reichstadter in September 2024. 

A screenshot of that post appeared in a Techdirt piece, “2024: AI Panic Flooded The Zone, Leading To A Backlash.” The warning signs were there:

Also, like in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some are getting radicalized to a dangerous level, playing with the idea of killing AI developers (if that’s what it takes to “save humanity” from extinction).

Both PauseAI and StopAI stated that they are non-violent movements that do not permit “even joking about violence.” That’s a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed individual to cross the line.

In early December 2024, I expressed my concern on Twitter: “Is the StopAI movement creating the next Unabomber?” The accompanying screenshot, “Getting arrested is nothing if we’re all gonna die,” quoted Sam Kirchner.

Targeting OpenAI

The main target of their civil-disobedience-style actions was OpenAI. The group explained that their “actions against OpenAI were an attempt to slow OpenAI down in their attempted murder of everyone and every living thing on earth.” In a tweet promoting the October blockade, Guido Reichstadter claimed about OpenAI: “These people want to see you dead.”

“My co-organizers Sam and Guido are willing to put their body on the line by getting arrested repeatedly,” said Remmelt Ellen. “We are that serious about stopping AI development.”

On January 6, 2025, Kirchner and Reichstadter went on trial for blocking the entrance to OpenAI on October 21, 2024, to “stop AI before AI stop us” and on September 24, 2024 (“criminal record doesn’t matter if we’re all dead”), as well as blocking the road in front of OpenAI on September 12, 2024.

The “Stop AI” event page on Luma lists further protests in front of OpenAI: on January 10, 2025; April 18, 2025; May 23, 2025 (coverage); July 25, 2025; and October 24, 2025. On March 2, 2025, they held a protest against Waymo.

On February 22, 2025, three “Stop AI” protesters were arrested for trespassing after barricading the doors to the OpenAI offices and allegedly refusing to leave the company’s property. It was covered by a local TV station. Golden Gate Xpress documented the activists detained in the police van: Jacob Freeman, Derek Allen, and Guido Reichstadter. Officers pulled out bolt cutters and cut the lock and chains on the front doors. In a Bay Area Current article, “Why Bay Area Group Stop AI Thinks Artificial Intelligence Will Kill Us All,” Kirchner is quoted as saying, “The work of the scientists present” is “putting my family at risk.”

October 20, 2025, was the first day of the jury trial of Sam Kirchner, Guido Reichstadter, Derek Allen, and Wynd Kaufmyn.

On November 3, 2025, “Stop AI”’s public defender served OpenAI CEO Sam Altman with a subpoena at a speaking event at the Sydney Goldstein Theater in San Francisco. The group claimed responsibility for the onstage interruption, saying the goal was to prompt the jury to ask Altman “about the extinction threat that AI poses to humanity.”

Public Messages to Sam Kirchner

“Stop AI” stated it is “deeply committed to nonviolence” and that “We wish no harm on anyone, including the people developing artificial superintelligence.” In a separate tweet, “Stop AI” wrote to Sam: “Please let us know you’re okay. As far as we know, you haven’t yet crossed a line you can’t come back from.”

John Sherman, the “AI Risk Network” CEO, pleaded, “Sam, do not do anything violent. Please. You know this is not the way […] Please do not, for any reason, try to use violence to try to make the world safer from AI risk. It would fail miserably, with terrible consequences for the movement.”

Rhetoric’s Ramifications

Taken together, these events show how “imminent doom” rhetoric fosters conditions in which vulnerable individuals can be dangerously radicalized, echoing the dynamics of past apocalyptic movements.

In “A Cofounder’s Disappearance—and the Warning Signs of Radicalization”, City Journal summarized: “We should stay alert to the warning signs of radicalization: a disaffected young person, consumed by abstract risks, convinced of his own righteousness, and embedded in a community that keeps ratcheting up the moral stakes.”

“The Rationality Trap – Why Are There So Many Rationalist Cults?” described this exact radicalization process, noting how the more extreme figures (e.g., Eliezer Yudkowsky)3 set the stakes and tone: “Apocalyptic consequentialism, pushing the community to adopt AI Doomerism as the baseline, and perceived urgency as the lever. The world-ending stakes accelerated the ‘ends-justify-the-means’ reasoning.”

We already have a Doomer “murder cult,” the Zizians, and their story is far more bizarre, and far more extreme, than anything you’ve read here. Hopefully, such cases will remain rare.

What we should discuss are the dangers of such extreme (and misleading) AI discourse. If human extinction from AI is just around the corner, then, by the Doomers’ logic, all of their proposals are “extremely small sacrifices to make.” Unfortunately, the situation we’re in is this: “Imagined dystopian fears have turned into real dystopian ‘solutions.’”

This is still an evolving situation. As of this writing, Kirchner’s whereabouts remain unknown.

—————————

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and the “AI Panic” newsletter.

—————————

Endnotes

  1. Don’t confuse StopAI with other activist groups, such as PauseAI or ControlAI. Please see this brief guide on the Transformer Substack. ↩︎
  2. This type of rhetoric wasn’t a one-off. Stop AI’s account also wrote, “Fuck CAIS and @DrTechlash” (CAIS is the Center for AI Safety, and @DrTechlash is, well, yours truly). Another target was Oliver Habryka, the CEO at Lightcone Infrastructure/LessWrong, whom they told, “Eat a pile of shit, you pro-extinction murderer.” ↩︎
  3. Eliezer Yudkowsky, cofounder of the Machine Intelligence Research Institute (MIRI), recently published a book titled “If Anyone Builds It, Everyone Dies. Why Superhuman AI Would Kill Us All.” It was heavily promoted, but you can read here “Why The ‘Doom Bible’ Left Many Reviewers Unconvinced.” ↩︎

Posted on Techdirt - 30 December 2024 @ 03:32pm

2024: AI Panic Flooded The Zone, Leading To A Backlash

Last December, we published a recap, “2023: The Year of AI Panic.”

Now, it’s time to ask: What happened to the AI panic in 2024?

TL;DR – It was a rollercoaster ride: AI panic reached a peak and then came crashing down.

Two cautionary tales: The EU AI Act and California’s SB-1047.

Please note:

  1. The focus here is on the AI panic angle of the news, not other events such as product launches. The aim is to shed light on the effects of this extreme AI discourse.
  2. The 2023 recap provides context for what happened a year later. Seeing how AI doomers took it too far in 2023 gives a better understanding of why it backfired in 2024.

2023’s AI panic

At the end of 2022, ChatGPT took the world by storm. It sparked the “Generative AI” arms race. Shortly thereafter, we were bombarded with doomsday scenarios of an AI takeover, an AI apocalypse, and “The END of Humanity.” The “AI Existential Risk” (x-risk) movement has gradually, then suddenly, moved from the fringe to the mainstream. Apart from becoming media stars, its members also influenced Congress and the EU. They didn’t shift the Overton window; they shattered it.

“2023: The Year of AI Panic” summarized the key moments: The two “Existential Risk” open letters (first by the Future of Life Institute and second by the Center for AI Safety), the AI Dilemma and Tristan Harris’ x-risk advocacy (now known to be funded, in part, by the Future of Life Institute), the flood of doomsaying in traditional media, followed by numerous AI policy proposals that focus on existential threats and seek to surveil and criminalize AI development. Oh, and TIME magazine had a full-blown love affair with AI doomers (it still does).

– AI Panic Agents

Throughout the years, Eliezer Yudkowsky from Berkeley’s MIRI (Machine Intelligence Research Institute) and his “End of the World” beliefs heavily influenced a sub-culture of “rationalists” and AI doomers. In 2023, they embarked on a policy and media tour.

In a TED talk, “Will Superintelligent AI End the World?” Eliezer Yudkowsky said, “I expect an actually smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably and then kill us […] It could kill us because it doesn’t want us making other superintelligences to compete with it. It could kill us because it’s using up all the chemical energy on earth, and we contain some chemical potential energy.” In TIME magazine, he advocated to “Shut it All Down“: “Shut down all the large GPU clusters. Shut down all the large training runs. Be willing to destroy a rogue datacenter by airstrike.”

Max Tegmark from the Future of Life Institute said: “There won’t be any humans on the planet in the not-too-distant future. This is the kind of cancer that kills all of humanity.”

Next thing you know, he was addressing the U.S. Congress at the “AI Insight Forum.”

And successfully pushing the EU to include “General-Purpose AI systems” in the “AI Act” (discussed further in the 2024 recap).

Connor Leahy from Conjecture said: “I do not expect us to make it out of this century alive. I’m not even sure we’ll get out of this decade!”

Next thing you know, he appeared on CNN and later tweeted: “I had a great time addressing the House of Lords about extinction risk from AGI.” He suggested “a cap on computing power” at 10^24 FLOPs (Floating Point Operations) and a global AI “kill switch.”

Dan Hendrycks from the Center for AI Safety expressed an 80% probability of doom and claimed, “Evolutionary pressure will likely ingrain AIs with behaviors that promote self-preservation.”[1] He warned that we are on “a pathway toward being supplanted as the Earth’s dominant species.” Hendrycks also suggested a “CERN for AI,” imagining “a big multinational lab that would soak up the bulk of the world’s graphics processing units [GPUs]. That would sideline the big for-profit labs by making it difficult for them to hoard computing resources.” He later speculated that AI regulation in the U.S. “might pave the way for some shared international standards that might make China willing to also abide by some of these standards” (because, of course, China will slow down as well… That’s how geopolitics work!).

Next thing you know, he collaborated with Senator Scott Wiener of California to pass an AI Safety bill, SB-1047 (more on this bill in the 2024 recap).

A “follow the money” investigation revealed it’s not a grassroots, bottom-up movement, but a top-down movement heavily funded by a few Effective Altruism (EA) billionaires, mainly Dustin Moskovitz, Jaan Tallinn, and Sam Bankman-Fried.

The 2023 recap ended with this paragraph: “In 2023, EA-backed ‘AI x-risk’ took over the AI industry, AI media coverage, and AI regulation. Nowadays, more and more information is coming out about the ‘influence operation’ and its impact on AI policy. See, for example, the reporting on Rishi Sunak’s AI agenda and Joe Biden’s AI order. In 2024, this tech billionaires-backed influence campaign may backfire. Hopefully, a more significant reckoning will follow.”

2024: Act 1. The AI panic further flooded the zone

With 1.6 billion dollars from the Effective Altruism movement,[2] the “AI Existential Risk” ecosystem has grown to hundreds of organizations.[3] In 2024, their policy advocacy became more authoritarian.

  • The Center for AI Policy (CAIP) outlined the goal: to “establish a strict licensing regime, clamp down on open-source models, and impose civil and criminal liability on developers.”
  • The “Narrow Path” proposal started with “AI poses extinction risks to human existence” (according to an accompanying report, The Compendium, “By default, God-like AI leads to extinction”). Instead of asking for a six-month AI pause, this proposal asked for a 20-year pause. Why? Because “two decades provide the minimum time frame to construct our defenses.”

Note that these “AI x-risk” groups sought to ban currently existing AI models.

  • The Future of Life Institute proposed stringent regulation on models with a compute threshold of 10^25 FLOPs, explaining it “would apply to fewer than 10 current systems.”
  • The International Center for Future Generations (ICFG) proposed that “open-sourcing of advanced AI models trained on 10^25 FLOP or more should be prohibited.”
  • Gladstone AI‘s “Action Plan”[4] claimed that these models “are considered dangerous until proven safe” and that releasing them “could be grounds for criminal sanctions including jail time for the individuals responsible.”
  • Beforehand, the Center for AI Safety (CAIS) proposed to ban open-source models trained beyond 10^23 FLOPs.

Llama 2 was trained with > 10^23 FLOPs and thus would have been banned.

All of those proposed prohibitions claimed that crossing these thresholds would bring DOOM.

It was ridiculous back then; it looks more ridiculous now.

“It’s always just a bit higher than where we are today,” venture capitalist Rohit Krishnan commented. “Imagine if we had done this!!”

A report entitled “What mistakes has the AI safety movement made?” argued that “AI safety is too structurally power-seeking: trying to raise lots of money, trying to gain influence in corporations and governments, trying to control the way AI values are shaped, favoring people who are concerned about AI risk for jobs and grants, maintaining the secrecy of information, and recruiting high school students to the cause.”

YouTube is flooded with prophecies of AI doom, some of which target children. Among the channels tailored for kids are Kurzgesagt and Rational Animations, both funded by Open Philanthropy.[5] These videos serve a specific purpose, Rational Animations admitted: “In my most recent communications with Open Phil, we discussed the fact that a YouTube video aimed at educating on a particular topic would be more effective if viewers had an easy way to fall into an ‘intellectual rabbit hole’ to learn more.”

“AI Doomerism is becoming a big problem, and it’s well funded,” observed Tobi Lutke, Shopify CEO. “Like all cults, it’s recruiting.”

Also, like in other doomsday cults, the stress of believing an apocalypse is imminent wears down the ability to cope with anything else. Some are getting radicalized to a dangerous level, playing with the idea of killing AI developers (if that’s what it takes to “save humanity” from extinction).

Both PauseAI and StopAI stated that they are non-violent movements that do not permit “even joking about violence.” That’s a necessary clarification for their various followers. There is, however, a need for stronger condemnation. The murder of the UHC CEO showed us that it only takes one brainwashed individual to cross the line.

2024: Act 2. The AI panic sparked a backlash

In 2024, AI panic collided with practical policymaking and began to backfire.

– The EU AI Act as a cautionary tale

In December 2023, European Union (EU) negotiators struck a deal on the most comprehensive AI rules, the “AI Act.” “Deal!” tweeted European Commissioner Thierry Breton, celebrating how “The EU becomes the very first continent to set clear rules for the use of AI.”

Eight months later, a Bloomberg article discussed how the new AI rules “risk entrenching the transatlantic tech divide rather than narrowing it.”

Gabriele Mazzini, the EU AI Act’s architect and lead author, expressed regret and admitted that its reach ended up being too broad: “The regulatory bar maybe has been set too high. There may be companies in Europe that could just say there isn’t enough legal certainty in the AI Act to proceed.”

How it started – How it’s going

In September, the EU released “The Future of European Competitiveness” report. In it, Mario Draghi, former President of the European Central Bank and former Prime Minister of Italy, expressed a similar observation: “Regulatory barriers to scaling up are particularly onerous in the tech sector, especially for young companies.”

In December, there were additional indications of a growing problem.

1. When OpenAI released Sora, its video generator, Sam Altman commented on being unable to offer it in Europe: “We want to offer our products in Europe … We also have to comply with regulation.”[6]

2. “A Visualization of Europe’s Non-Bubbly Economy” by Andrew McAfee from MIT Sloan School of Management exploded online as hammering the EU became a daily habit.

These examples are relevant to the U.S., as California introduced its own attempt to mimic the EU when Sacramento emerged as America’s Brussels.

– California’s bill SB-1047 as another cautionary tale

Senator Scott Wiener’s SB-1047 was supported by EA-backed AI safety groups. The bill included strict developer liability provisions, and AI experts from academia and entrepreneurs from startups (“little tech”), caught off guard, built a coalition against it. A wave of headlines criticized the bill as one that would strangle innovation, AI R&D (research and development), and the open-source community in California and around the world.

Governor Gavin Newsom eventually vetoed the bill, explaining the need for evidence-based, workable regulation.

You’ve probably spotted the pattern by now. 1. Doomers scare the hell out of people. 2. The fear bolsters their call for a strict regulatory regime. 3. Those who listen to their fearmongering regret it.

Why? Because 1. Doomsday ideology is extreme. 2. The bills are vaguely written. 3. They don’t consider tradeoffs.

2025

– The vibe shift in Washington

The new administration seems less inclined to listen to AI doomsaying.

Donald Trump’s top picks for relevant positions prioritize American dynamism.

The Bipartisan House Task Force on Artificial Intelligence has just released an AI policy report stating, “Small businesses face excessive challenges in meeting AI regulatory compliance,” “There is currently limited evidence that open models should be restricted,” and “Congress should not seek to impose undue burdens on developers in the absence of clear, demonstrable risk.”

There will probably be a fight at the state level, and if SB-1047 is any indication, it will be intense.

– Will the backlash against the AI panic grow?

This panic cycle is not yet at the point of reckoning. But eventually, society will need to confront how the extreme ideology of “AI will kill us all” became so influential in the first place.

——————————-

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and the “AI Panic” newsletter.

——————————-

Endnotes

  1. Dan Hendrycks’ tweet and Arvind Narayanan and Sayash Kapoor’s article in “AI Snake Oil”: “AI existential risk probabilities are too unreliable to inform policy.” The similarities = a coincidence 🙂
  2. This estimation includes the revelation that Tegmark’s Future of Life Institute was no longer a $2.4-million organization but a $674-million organization. It managed to convert a cryptocurrency donation (Shiba Inu tokens) to $665 million (using FTX/Alameda Research). Through its new initiative, the Future of Life Foundation (FLF), FLI aims “to help start 3 to 5 new organizations per year.” This new visualization of Open Philanthropy’s funding shows that the existential risk ecosystem (“Potential Risks from Advanced AI” + “Global Catastrophic Risks” + “Global Catastrophic Risks Capacity Building,” different names for funding Effective Altruism AI Safety organizations/groups) has received ~ $780 million (instead of $735 million in the previous calculation). 
  3. The recruitment in elite universities can be described as “bait-and-switch”: From Global Poverty to AI Doomerism. The “Funnel Mode” is basically, “Come to save the poor or animals; stay to prevent Skynet.” 
  4. The U.S. government had funded Gladstone AI’s report as part of a federal contract worth $250,000. 
  5. Kurzgesagt got $7,533,224 from Open Philanthropy and Rational Animations got $4,265,355. Sam Bankman-Fried planned to add $400,000 to Rational Animations but was convicted of seven fraud charges for stealing $10 billion from customers and investors in “one of the largest financial frauds of all time.” 
  6. Altman was probably referring to a mixed salad of the new AI Act with previous regulations like GDPR (General Data Protection Regulation) and DMA (Digital Markets Act).

Posted on Techdirt - 29 April 2024 @ 11:07am

Effective Altruism’s Bait-and-Switch: From Global Poverty To AI Doomerism

The Effective Altruism movement

Effective Altruism (EA) is typically explained as a philosophy that encourages individuals to do the “most good” with their resources (money, skills). Its “effective giving” aspect was marketed as directing money to evidence-based charities serving the global poor. The Effective Altruism philosophy was formally crystallized as a social movement with the launch of the Centre for Effective Altruism (CEA) in February 2012 by Toby Ord, Will MacAskill, Nick Beckstead, and Michelle Hutchinson. Two other organizations, “Giving What We Can” (GWWC) and “80,000 Hours,” were brought under CEA’s umbrella, and the movement became officially known as Effective Altruism.

Effective Altruists (EAs) were praised in the media as “charity nerds” looking to maximize the number of “lives saved” per dollar spent, with initiatives like providing anti-malarial bed nets in sub-Saharan Africa.

If this movement sounds familiar to you, it’s thanks to Sam Bankman-Fried (SBF). With FTX, Bankman-Fried was attempting to fulfill what William MacAskill taught him about Effective Altruism: “Earn to give.” In November 2023, SBF was convicted of seven fraud charges (stealing $10 billion from customers and investors). In March 2024, SBF was sentenced to 25 years in prison. Since SBF was one of the largest Effective Altruism donors, public perception of this movement has declined due to his fraudulent behavior. It turned out that the “Earn to give” concept was susceptible to the “Ends justify the means” mentality.

In 2016, the main funder of Effective Altruism, Open Philanthropy, designated “AI Safety” a priority area, and the leading EA organization 80,000 Hours declared artificial intelligence (AI) existential risk (x-risk) the world’s most pressing problem. It looked like a major shift in focus and was portrayed as a “mission drift.” It wasn’t.

What looked to outsiders – in the general public, academia, media, and politics – as a “sudden embrace of AI x-risk” was a misconception. The confusion existed because many people were unaware that Effective Altruism has always been devoted to this agenda.

Effective Altruism’s “brand management”

Since its inception, Effective Altruism has been obsessed with the existential risk (x-risk) posed by artificial intelligence. Because the movement’s leaders recognized that this could be perceived as “confusing for non-EAs,” they decided to seek donations and recruit new members through different causes, like poverty and “sending money to Africa.”

When the movement was still small, they planned the bait-and-switch tactics in plain sight (in old forum discussions).

A dissertation by Mollie Gleiberman methodically analyzes the distinction between the “public-facing EA” and the inward-facing “core EA.” Among the study findings: “From the beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk).”

“EA’s key intellectual architects were all directly or peripherally involved in transhumanism, and the global poverty angle was merely a stepping stone to rationalize the progression from a non-controversial goal (saving lives in poor countries) to transhumanism’s far more radical aim,” explains Gleiberman. It was part of their “brand management” strategy to conceal the latter.

The public-facing discourse of “giving to the poor” (in popular media and books) was a mirage designed to get people into the movement and then lead them to the “core EA,” x-risk, which is discussed in inward-facing spaces. The guidance was to promote the publicly-facing cause and keep quiet about the core cause. Influential Effective Altruists explicitly wrote that this was the best way to grow the movement.

In public-facing/grassroots EA, the target recipients of donations, typically understood to be GiveWell’s top recommendations, are causes like AMF – Against Malaria Foundation. “Here, the beneficiaries of EA donations are disadvantaged people in the poorest countries of the world,” says Gleiberman. “In stark contrast to this, the target recipients of donations in core EA are the EAs themselves. Philanthropic donations that support privileged students at elite universities in the US and UK are suddenly no longer one of the worst forms of charity but one of the best. Rather than living frugally (giving up a vacation/a restaurant) so as to have more money to donate to AMF, providing such perks is now understood as essential for the well-being and productivity of the EAs, since they are working to protect the entire future of humanity.”

“We should be kind of quiet about it in public-facing spaces”

Let the evidence speak for itself. The following quotes are from three community forums where Effective Altruists converse with each other: Felicifia (inactive since 2014), LessWrong, and EA Forum.

On June 4, 2012, Will Crouch (it was before he changed his last name to MacAskill) had already pointed out (on the Felicifia forum) that “new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation.”


On November 10, 2012, Will Crouch (MacAskill) wrote on the LessWrong forum that “it seems fairly common for people who start thinking about effective altruism to ultimately think that x-risk mitigation is one of or the most important cause area.” In the same message, he also argued that “it’s still a good thing to save someone’s life in the developing world,” however, “of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.”

In 2011, a leader of the EA movement, an influential GWWC leader/CEA affiliate, who used a “utilitymonster” username on the Felicifia forum, had a discussion with a high-school student about the “High Impact Career” (HIC, later rebranded to 80,000 Hours). The high schooler wrote: “But HIC always seems to talk about things in terms of ‘lives saved,’ I’ve never heard them mentioning other things to donate to.” Utilitymonster replied: “That’s exactly the right thing for HIC to do. Talk about ‘lives saved’ with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk.”

Another influential figure, Eliezer Yudkowsky, wrote on LessWrong in 2013: “I regard the non-x-risk parts of EA as being important only insofar as they raise visibility and eventually get more people involved in, as I would put it, the actual plot.”

In a comment on a Robert Wiblin post in 2015, Eliezer Yudkowsky clarified: “As I’ve said repeatedly, xrisk cannot be the public face of EA, OPP [OpenPhil] can’t be the public face of EA. Only ‘sending money to Africa’ is immediately comprehensible as Good and only an immediately comprehensible Good can make up for the terrible PR profile of maximization or cause neutrality. And putting AI in there is just shooting yourself in the foot.”


Rob Bensinger, the research communications manager at MIRI (and prominent EA movement member), argued in 2016 for a middle approach: “In fairness to the ‘MIRI is bad PR for EA’ perspective, I’ve seen MIRI’s cofounder (Eliezer Yudkowsky) make the argument himself that things like malaria nets should be the public face of EA, not AI risk. Though I’m not sure I agree […]. If we were optimizing for having the right ‘public face’ I think we’d be talking more about things that are in between malaria nets and AI […] like biosecurity and macroeconomic policy reform.”

Scott Alexander (Siskind) is the author of the influential rationalist blog “Slate Star Codex” and “Astral Codex Ten.” In 2015, he acknowledged that he supports the AI-safety/x-risk cause area, but believes Effective Altruists should not mention it in public-facing material: “Existential risk isn’t the most useful public face for effective altruism – everyone including Eliezer Yudkowsky agrees about that.” In the same year, 2015, he also wrote: “Several people have recently argued that the effective altruist movement should distance itself from AI risk and other far-future causes lest it make them seem weird and turn off potential recruits. Even proponents of AI risk charities like myself agree that we should be kind of quiet about it in public-facing spaces.”

In 2014, Peter Wildeford (then Hurford) published a conversation about “EA Marketing” with EA communications specialist Michael Bitton. Peter Wildeford is the co-founder and co-CEO of Rethink Priorities and Chief Advisory Executive at IAPS (Institute for AI Policy and Strategy). The following segment was about why most people will not be real Effective Altruists (EAs):

“Things in the ea community could be a turn-off to some people. While the connection to utilitarianism is ok, things like cryonics, transhumanism, insect suffering, AGI, eugenics, whole brain emulation, suffering subroutines, the cost-effectiveness of having kids, polyamory, intelligence-enhancing drugs, the ethics of terraforming, bioterrorism, nanotechnology, synthetic biology, mindhacking, etc. might not appeal well.

There’s a chance that people might accept the more mainstream global poverty angle, but be turned off by other aspects of EA. Bitton is unsure whether this is meant to be a reason for de-emphasizing these other aspects of the movement. Obviously, we want to attract more people, but also people that are more EA.”

“Longtermism is a bad ‘on-ramp’ to EA,” wrote a community member on the Effective Altruism Forum. “AI safety is new and complicated, making it more likely that people […] find the focus on AI risks to be cult-like (potentially causing them to never get involved with EA in the first place).”

Jan Kulveit, who leads the European Summer Program on Rationality (ESPR), shared on Facebook in 2018: “I became an EA in 2016, and [at] the time, while a lot of the ‘outward-facing’ materials were about global poverty etc., with notes about AI safety or far future at much less prominent places. I wanted to discover what is the actual cutting-edge thought, went to EAGx Oxford and my impression was the core people from the movement mostly thought far future is the most promising area, and xrisk/AI safety interventions are top priority. I was quite happy with that […] However, I was somewhat at unease that there was this discrepancy between a lot of outward-facing content and what the core actually thinks. With some exaggeration, it felt like the communication structure is somewhat resembling a conspiracy or a church, where the outward-facing ideas are easily digestible, like anti-malaria nets, but as you get deeper, you discover very different ideas.”

Prominent EA community member and blogger Ozy Brennan summarized this discrepancy in 2017: “A lot of introductory effective altruism material uses global poverty examples, even articles which were written by people I know perfectly fucking well only donate to MIRI.”

As Effective Altruists engaged more deeply with the movement, they were encouraged to shift to AI x-risk.

“My perception is that many x-risk people have been clear from the start that they view the rest of EA merely as a recruitment tool to get people interested in the concept and then convert them to Xrisk causes.” (Alasdair Pearce, 2015).

“I used to work for an organization in EA, and I am still quite active in the community. 1 – I’ve heard people say things like, ‘Sure, we say that effective altruism is about global poverty, but — wink, nod — that’s just what we do to get people in the door so that we can convert them to helping out with AI/animal suffering/(insert weird cause here).’ This disturbs me.” (Anonymous#23, 2017).

“In my time as a community builder […] I saw the downsides of this. […] Concerns that the EA community is doing a bait-and-switch tactic of ‘come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.’ […] Personally feeling uncomfortable because it seemed to me that my 80,000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else.” (weeatquince [Sam Hilton], 2020).

Austin Chen, the co-founder of Manifold Markets, wrote on the Effective Altruism Forum in 2020: “On one hand, basically all the smart EA people I trust seem to be into longtermism; it seems well-argued and I feel a vague obligation to join in too. On the other, the argument for near-term evidence-based interventions like AMF [Against Malaria Foundation] is what got me […] into EA in the first place.”

In 2019, EA Hub published a guide: “Tips to help your conversation go well.” Among the tips like “Highlight the process of EA” and “Use the person’s interest,” there was “Preventing ‘Bait and Switch.’” The post acknowledged that “many leaders of EA organizations are most focused on community building and the long-term future than animal advocacy and global poverty.” Therefore, to avoid the perception of a bait-and-switch, it is recommended to mention AI x-risk at some point:

“It is likely easier, and possibly more compelling, to talk about cause areas that are more widely understood and cared about, such as global poverty and animal welfare. However, mentioning only one or two less controversial causes might be misleading, e.g. a person could become interested through evidence-based effective global poverty interventions, and feel misled at an EA event mostly discussing highly speculative research into a cause area they don’t understand or care about. This can feel like a “bait and switch”—they are baited with something they care about and then the conversation is switched to another area. One way of reducing this tension is to ensure you mention a wide range of global issues that EAs are interested in, even if you spend more time on one issue.”

Oliver Habryka is influential in EA as a fund manager for the LTFF, a grantmaker for the Survival and Flourishing Fund, and the leader of the LessWrong/Lightcone Infrastructure team. He claimed that the only reason EA should continue supporting non-longtermist efforts is to preserve the public’s perception of the movement:

“To be clear, my primary reason for why EA shouldn’t entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn’t the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.”

The structure of Effective Altruism rhetoric

The researcher Mollie Gleiberman explains the EA’s “strategic ambiguity”: “EA has multiple discourses running simultaneously, using the same terminology to mean different things depending on the target audience. The most important aspect of this double rhetoric, however, is not that it maintains two distinct arenas of understanding, but that it also serves as a credibility bridge between them, across which movement recruits (and, increasingly, the general public) are led in incremental steps from the less controversial position to the far more radical position.”

When Effective Altruists talked in public about “doing good,” “helping others,” “caring about the world,” and pursuing “the most impact,” the public understanding was that it meant eliminating global poverty and helping the needy and vulnerable. Internally, “doing good” and the “most pressing problems” were understood as working to mainstream core EA ideas like extinction from unaligned AI.

In communications with “core EAs,” “the initial focus on global poverty is explained as merely an example used to illustrate the concept – not the actual cause endorsed by most EAs.”

Jonas Vollmer has been involved with EA since 2012 and held positions of considerable influence in terms of allocating funding (EA Foundation/CLR Fund, CEA EA Funds). In 2018, he candidly explained when asked about his EA organization “Raising for Effective Giving” (REG): “REG prioritizes long-term future causes, it’s just much easier to fundraise for poverty charities.”

The entire point was to identify whatever messaging works best to produce the outcomes that movement founders, thought leaders, and funders actually wished to see. It was all about marketing to outsiders.

The “Funnel Mode”

According to the Centre for Effective Altruism, “When describing the target audience of our projects, it is useful to have labels for different parts of the community.”

The levels are: Audience, followers, participants, contributors, core, and leadership.

In 2018, in a post entitled The Funnel Mode, CEA elaborated that “Different parts of CEA operate to bring people into different parts of the funnel.”


The Centre for Effective Altruism: The Funnel Mode.

At first, CEA concentrated outreach on the top of the funnel through extensive popular media coverage, including MacAskill’s Quartz column and book, ‘Doing Good Better,’ Singer’s TED talk, and Singer’s ‘The Most Good You Can Do.’ The idea was to create a broad base of poverty-focused, grassroots Effective Altruists to help maintain momentum and legitimacy, and to act as an initial entry point to the funnel, from which members sympathetic to core aims could be recruited.

The 2017 edition of the movement’s annual survey of participants (conducted by the EA organization Rethink Charity) noted that this is a common trajectory: “New EAs are typically attracted to poverty relief as a top cause initially, but subsequently branch out after exploring other EA cause areas. An extension of this line of thinking credits increased familiarity with EA for making AI more palatable as a cause area. In other words, the top of the EA outreach funnel is most relatable to newcomers (poverty), while cause areas toward the bottom of the funnel (AI) seem more appealing with time and further exposure.”

According to the Centre for Effective Altruism, that’s the ideal route. It wrote in 2018: “Trying to get a few people all the way through the funnel is more important than getting every person to the next stage.”

The magnitude and implications of Effective Altruism, says Gleiberman, “cannot be grasped until people are willing to look at the evidence beyond EA’s glossy front cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.”

Key takeaways

– Public-facing EA vs. core EA

Among the public-facing/grassroots EAs (audience, followers, participants):

  1. The main focus is effective giving à la Peter Singer.
  2. The main cause area is global health, targeting the ‘distant poor’ in developing countries.
  3. The donors support organizations doing direct anti-poverty work.

Among the core/highly engaged EAs (contributors, core, leadership):

  1. The main focus is x-risk/longtermism à la Nick Bostrom and Eliezer Yudkowsky.
  2. The main cause areas are x-risk, AI-safety, ‘global priorities research,’ and EA movement-building.
  3. The donors support highly-engaged EAs to build career capital, boost their productivity, and/or start new EA organizations; research; policy-making/agenda setting.

– Core EA’s policy-making

In “2023: The Year of AI Panic,” I discussed the Effective Altruism movement’s growing influence in the US (on Joe Biden’s AI order), the UK (influencing Rishi Sunak’s AI agenda), and the EU AI Act (x-risk lobbyists’ celebration).

More details can be found in this rundown of how “The AI Doomers have infiltrated Washington” and how “AI doomsayers funded by billionaires ramp up lobbying.” The broader landscape is detailed in “The Ultimate Guide to ‘AI Existential Risk’ Ecosystem.”

Two things you should know about EA’s influence campaign:

  1. AI Safety organizations constantly examine how to target “human extinction from AI” and “AI moratorium” messages based on political party affiliation, age group, gender, educational level, field of work, and residency. In “The AI Panic Campaign – part 2,” I explained that “framing AI in extreme terms is intended to motivate policymakers to adopt stringent rules.”
  2. The lobbying goal includes pervasive surveillance and criminalization of AI development. Effective Altruists lobby governments to “establish a strict licensing regime, clamp down on open-source models, and impose civil and criminal liability on developers.”

With AI doomers intensifying their attacks on the open-source community, it becomes clear that this group’s “doing good” is other groups’ nightmare.

– Effective Altruism was a Trojan horse

It’s now evident that “sending money to Africa,” as Eliezer Yudkowsky acknowledged, was never the “actual plot.” Or, as Will MacAskill wrote in 2012, “alleviating global poverty is dwarfed by existential risk mitigation.” The Effective Altruism founders planned – from day one – to mislead donors and new members in order to build the movement’s brand and community.

Its core leaders prioritized the x-risk agenda, and considered global poverty alleviation only as an initial step toward converting new recruits to longtermism/x-risk, which also happened to be how they, themselves, convinced more people to help them become rich.

This needs to be investigated further.

Gleiberman observes that “The movement clearly prioritizes ‘longtermism’/AI-safety/x-risk, but still wishes to benefit from the credibility that global poverty-focused EA brings.” We now know it was a PR strategy all along. So, no. They do not deserve this kind of credibility.

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and “AI Panic” newsletter.

Posted on Techdirt - 22 December 2023 @ 12:01pm

2023: The Year Of AI Panic

In 2023, the extreme ideology of “human extinction from AI” became one of the most prominent trends. It was followed by extreme regulation proposals.

As we enter 2024, let’s take a moment to reflect: How did we get here?

Alina Constantin / Better Images of AI / Handmade A.I / CC-BY 4.0

2022: Public release of LLMs

The first big news story on LLMs (Large Language Models) can be traced to a (now famous) Google engineer. In June 2022, Blake Lemoine went on a media tour to claim that Google’s LaMDA (Language Model for Dialogue Application) is “sentient.” Lemoine compared LaMDA to “an 8-year-old kid that happens to know physics.”

This news cycle was met with skepticism: “Robots can’t think or feel, despite what the researchers who build them want to believe. A.I. is not sentient. Why do people say it is?”

In August 2022, OpenAI made DALL-E 2 accessible to 1 million people.

In November 2022, the company launched a user-friendly chatbot named ChatGPT.

People started interacting with more advanced AI systems, and impressive Generative AI tools, with Blake Lemoine’s story in the background.

At first, news articles debated issues like copyright and consent regarding AI-generated images (e.g., “AI Creating ‘Art’ Is An Ethical And Copyright Nightmare”) or how students will use ChatGPT to cheat on their assignments (e.g., “New York City blocks use of the ChatGPT bot in its schools,” “The College Essay Is Dead”).

2023: The AI monster must be tamed, or we will all die!

The AI arms race escalated when Microsoft’s Bing and Google’s Bard were launched back-to-back in February 2023. The overhyped utopian dreams helped fuel the overhyped dystopian nightmares.

A turning point came after the release of New York Times columnist Kevin Roose’s story on his disturbing conversation with Microsoft’s new Bing chatbot. It has since become known as the “Sydney tried to break up my marriage” story. The printed version included parts of Roose’s correspondence with the chatbot, framed as “Bing’s Chatbot Drew Me In and Creeped Me Out.”

“The normal way that you deal with software that has a user interface bug is you just go fix the bug and apologize to the customer that triggered it,” responded Microsoft CTO Kevin Scott. “This one just happened to be one of the most-read stories in New York Times history.”

From there on, it snowballed into a headline competition, as noted by the Center for Data Innovation: “Once news media first get wind of a panic, it becomes a game of one-upmanship: the more outlandish the claims, the better.” It reached that point with TIME magazine’s June 12, 2023, cover story: THE END OF HUMANITY.

Two open letters on “existential risk” (AI “x-risk”) and numerous opinion pieces were published in 2023.

The first open letter was on March 22, 2023, calling for a 6-month pause. It was initiated by the Future of Life Institute, which was co-founded by Jaan Tallinn, Max Tegmark, Viktoriya Krakovna, Anthony Aguirre, and Meia Chita-Tegmark, and funded by Elon Musk (nearly 90% of FLI’s funds).

The letter called for AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4.” The open letter argued that “If such a pause cannot be enacted quickly, governments should institute a moratorium.” The reasoning was in the form of a rhetorical question: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?”

It’s worth mentioning that many who signed this letter did not actually believe AI poses an existential risk, but they wanted to draw attention to the various risks that worried them. The criticism was that “Many top AI researchers and computer scientists do not agree that this ‘doomer’ narrative deserves so much attention.”

The second open letter claimed AI is as risky as pandemics and nuclear war. It was initiated by the Center for AI Safety, which was founded by Dan Hendrycks and Oliver Zhang, and funded by Open Philanthropy, an Effective Altruism grant-making organization, run by Dustin Moskovitz and Cari Tuna (over 90% of CAIS’s funds). The letter was launched in the New York Times with the headline, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.”

Both letters have received extensive media coverage. The former executive director of the Centre for Effective Altruism and the current director of research at “80,000 Hours,” Robert Wiblin, declared that “AI extinction fears have largely won the public debate.” Max Tegmark celebrated that “AI extinction threat is going mainstream.”

These statements resulted in newspapers’ opinion sections being flooded with doomsday theories. In their extreme rhetoric, they warned against apocalyptic “end times” scenarios and called for sweeping regulatory interventions.

Dan Hendrycks, from the Center for AI Safety, warned we could be on “a pathway toward being supplanted as the earth’s dominant species.” (At the same time, he joined as an advisor to Elon Musk’s xAI startup).

Zvi Mowshowitz (of the Don’t Worry About the Vase Substack) claimed that “Competing AGIs might use Earth’s resources in ways incompatible with our survival. We could starve, boil or freeze.”

Michael Cuenco, associate editor of American Affairs, asked to put “the AI revolution in a deep freeze” and called for a literal “Butlerian Jihad.”

Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), asked to “Shut down all the large GPU clusters. Shut down all the large training runs. Track all GPUs sold. Be willing to destroy a rogue datacenter by airstrike.”

There has been growing pressure on policymakers to surveil and criminalize AI development.

Max Tegmark, who claimed “There won’t be any humans on the planet in the not-too-distant future,” was involved in the US Senate’s AI Insight Forum.

Conjecture’s Connor Leahy, who said, “I do not expect us to make it out of this century alive; I’m not even sure we’ll get out of this decade,” was invited to the House of Lords, where he proposed “a global AI ‘Kill Switch.’”

All the grandiose claims and calls for an AI moratorium spread from mass media, through lobbying efforts, to politicians’ talking points. When AI Doomers became media heroes and policy advocates, it revealed what is behind them: A well-oiled “x-risk” machine.

Since 2014: Effective Altruism has funded the “AI Existential Risk” ecosystem with half a billion dollars

AI Existential Safety’s increasing power can be better understood if you “follow the money.” Publicly available data from Effective Altruism organizations’ websites and from portals like OpenBook or Vipul Naik’s Donation List demonstrate how this ecosystem became such an influential subculture: It was funded with half a billion dollars by Effective Altruism organizations – mainly Open Philanthropy, but also SFF, FTX’s Future Fund, and LTFF.

This funding did NOT include investments in “near-term AI Safety concerns such as effects on labor market, fairness, privacy, ethics, disinformation, etc.” The focus was on “reducing risks from advanced AI such as existential risks.” Hence, the hypothetical AI Apocalypse.

2024: Backlash is coming

On November 24, 2023, Harvard’s Steven Pinker shared: “I was a fan of Effective Altruism. But it became cultish. Happy to donate to save the most lives in Africa, but not to pay techies to fret about AI turning us into paperclips. Hope they extricate themselves from this rut.” In light of the half-a-billion funding for “AI Existential Safety,” he added that this money could have saved 100,000 lives (Malaria calculation). Thus, “This is not Effective Altruism.”

In 2023, EA-backed “AI x-risk” took over the AI industry, AI media coverage, and AI regulation.

Nowadays, more and more information is coming out about the “influence operation” and its impact on AI policy. See, for example, the reporting on Rishi Sunak’s AI agenda and Joe Biden’s AI order.

In 2024, this tech billionaires-backed influence campaign may backfire. Hopefully, a more significant reckoning will follow.

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and “AI Panic” newsletter.

Posted on Techdirt - 26 April 2023 @ 09:35am

Like The Social Dilemma Did, The AI Dilemma Seeks To Mislead You With Misinformation

You may recall the Social Dilemma, which used incredible levels of misinformation and manipulation in an attempt to warn about others using misinformation to manipulate.

On April 13, a new YouTube video called the AI Dilemma was shared by the Social Dilemma’s leading character, Tristan Harris. He encouraged his followers to “share it widely” in order to understand the likelihood of catastrophe. Unfortunately, like the Social Dilemma, the AI Dilemma is big on hype and deception, and not so big on accuracy or facts. Although it deals with a different tech (not social media algorithms but generative AI), the creators still use the same manipulation and scare tactics. There is an obvious resemblance between the moral panic techlash around social media and the one that’s being generated around AI.

As the AI Dilemma’s shares and views are increasing, we need to address its deceptive content. First, it clearly pulls from the same moral panic hype playbook as the Social Dilemma did:

1. The Social Dilemma argued that social media have godlike power over people (controlling users like marionettes). The AI Dilemma argues that AI has godlike power over people.

2. The Social Dilemma anthropomorphized the evil algorithms. The AI Dilemma anthropomorphizes the evil AI. Both are monsters.

3. Causation is asserted as a fact: Those technological “monsters” CAUSE all the harm. Despite other factors – confounding variables, complicated society, messy humanity, inconclusive research into those phenomena – it’s all due to the evil algorithms/AI.

4. The monsters’ final goal may be… extinction. “Teach an AI to fish, and it’ll teach itself biology, chemistry, oceanography, evolutionary theory … and then fish all the fish to extinction.” (What?)

5. The Social Dilemma argued that algorithms hijack our brains, leaving us to do what they want without resistance. The algorithms were played by 3 dudes in a control room, and in some scenes, the “algorithms” were “mad.” In the AI Dilemma, this anthropomorphizing is taken to the next level:

Tristan Harris and Aza Raskin replaced the word AI with an entirely new term, “Gollem-class AIs.” They wrote “Generative Large Language Multi-Modal Model” in order to get to “GLLMM.” “Golem” in Jewish folklore is an anthropomorphic being created from inanimate matter. “Suddenly, this inanimate thing has certain emergent capabilities,” they explained. “So, we’re just calling them Gollem-class AIs.”

What are those Gollems doing? Apparently, “Armies of Gollem AIs pointed at our brains, strip-mining us of everything that isn’t protected by 19th-century law.” 

If you weren’t already scared, this should have kept you awake at night, right? 

We can summarize that the AI Dilemma is full of weird depictions of AI. According to experts, the risk of anthropomorphizing AI is that it inflates the machine’s capabilities and distorts the reality of what it can and can’t do — resulting in misguided fears. In the case of this lecture, that was the entire point.

6. The AI Dilemma creators thought they had “comic relief” at 36:45 when they showed a snippet from the “Little Shop of Horrors” (“Feed me!”). But it was actually at 51:45 when Tristan Harris stated, “I don’t want to be talking about the darkest horror shows of the world.” 

LOL. That’s his entire “Panic-as-a-Business.”

Freaking People Out with Dubious Survey Stats

A specific survey was mentioned 3 times throughout the AI Dilemma. It was about how “Half of” “over 700 top academics and researchers” “stated that there was a 10 percent or greater chance of human extinction from future AI systems” or “human inability to control future AI systems.” 

It is a FALSE claim. My analysis of this (frequently quoted) survey’s anonymized dataset (Google Doc spreadsheets) revealed many questionable things that should call into question not just the study, but those promoting it:

1. The “Extinction from AI” Questions

The “Extinction from AI” question was: “What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?”

The “Extinction from human failure to control AI” question was: “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?”

There are plenty of vague phrases here, from the “disempowerment of the human species” (?!) to the apparent absence of a timeframe for this unclear futuristic scenario. 

When the leading researcher of this survey, Katja Grace, was asked on a podcast: “So, given that there are these large framing differences and these large differences based on the continent of people’s undergraduate institutions, should we pay any attention to these results?” she said: “I guess things can be very noisy, and still some good evidence if you kind of average them all together or something.” Good evidence? Not really.

2. The Small Sample Size

AI Impacts contacted attendees of two ML conferences (NeurIPS & ICML), not a gathering of the broader AI community. Only 17% of those contacted responded to the survey at all, and a much smaller percentage were asked to respond to the specific “Extinction from AI” questions.

Only 149 answered the “Extinction from AI” question. 

That’s 20% of the 738 respondents. 

Only 162 answered the “Extinction from human failure to control AI” question.

That’s 22% of the 738 respondents.

As Melanie Mitchell pointed out, only “81 people estimated the probability as 10% or higher.” 

It’s quite a stretch to turn 81 people (some of whom are undergraduate and graduate students) into “half of all AI researchers” (a population of hundreds of thousands).
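For anyone who wants to sanity-check those proportions, here is a minimal Python sketch that simply recomputes the shares from the numbers cited in this post.

```python
# Recompute the survey shares cited above (all numbers are taken from this post).
total_respondents = 738    # total respondents to the AI Impacts survey
extinction_q = 149         # answered the "Extinction from AI" question
control_q = 162            # answered the "failure to control AI" question
ten_percent_or_more = 81   # put the probability at 10% or higher

print(f"{extinction_q / total_respondents:.0%} answered the extinction question")      # ~20%
print(f"{control_q / total_respondents:.0%} answered the control question")            # ~22%
print(f"{ten_percent_or_more / total_respondents:.0%} of respondents gave >=10%")      # ~11%
# A long way from "half of all AI researchers."
```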

This survey lacks any serious statistical analysis, and the fact that it hasn’t been published in any peer-reviewed journal is not a coincidence.

Who’s responsible for this survey (and its misrepresentation in the media)? Effective Altruism organizations that focus on “AI existential risk.” (Look surprised).

3. Funding and Researchers

AI Impacts is fiscally sponsored by Eliezer Yudkowsky’s MIRI – Machine Intelligence Research Institute at Berkeley (“these are funds specifically earmarked for AI Impacts, and not general MIRI funds”). The rest of its funding comes from other organizations that have shown an interest in far-off AI scenarios, like Survival and Flourishing Fund (which facilitates grants to “longtermism” projects with the help of Jaan Tallinn), EA-affiliated Open Philanthropy, The Centre for Effective Altruism (Oxford), Effective Altruism Funds (EA Funds), and Fathom Radiant (previously Fathom Computing, which is “building computer hardware to train neural networks at the human brain-scale and beyond”). AI Impacts previously received support from the Future of Life Institute (Biggest donor: Elon Musk) and the Future of Humanity Institute (led by Nick Bostrom, Oxford).

Who else? The notorious FTX Future Fund. In June 2022, it pledged “Up to $250k to support rerunning the highly-cited survey from 2016.” AI Impacts initially thanked FTX (“We thank FTX Future Fund for funding this project”). Then, their “Contributions” section became quite telling: “We thank FTX Future Fund for encouraging this project, though they did not ultimately fund it as anticipated due to the Bankruptcy of FTX.” So, the infamous crypto executive Sam Bankman-Fried wanted to support this as well, but, you know, fraud and stuff. 

What is the background of AI Impacts’ researchers? Katja Grace, who co-founded the AI Impacts project, is from MIRI and the Future of Humanity Institute and believes AI “seems decently likely to literally destroy humanity (!!).” The two others were Zach Stein-Perlman, who describes himself as an “Aspiring rationalist and effective altruist,” and Ben Weinstein-Raun, who also spent years at Yudkowsky’s MIRI. As a recap, the AI Impacts team conducting research on “AI Safety” is like anti-vax activist Robert F. Kennedy Jr. conducting research on “Vaccine Safety.” The same inherent bias. 

Conclusion 

Despite being an unreliable survey, Tristan Harris cited it prominently – in the AI Dilemma, his podcast, an interview on NBC, and his New York Times OpEd. In the Twitter thread promoting the AI Dilemma, he shared an image of a crashed airplane to prove his point that “50% thought there was a 10% chance EVERYONE DIES.” 

It practically proved that he’s using the same manipulative tactics he decries.

In 2022, Tristan Harris told “60 Minutes”: “The more moral outrageous language you use, the more inflammatory language, contemptuous language, the more indignation you use, the more it will get shared.” 

Finally, we can agree on something. Tristan Harris took aim at social media platforms for what he claimed was their outrageous behavior, but it is actually his own way of operating: load up on outrageous, inflammatory language. He uses it around the dangers of emerging technologies to create panic. He didn’t invent this trend, but he profits greatly from it.

Moving forward, neither AI Hype nor AI Criti-Hype should be amplified. 

There’s no need to repeat Google’s disinformation about its AI program learning Bengali without ever being trained on it – since it was proven that Bengali was one of the languages it was trained on. Similarly, there’s no need to repeat the disinformation that “half of all AI researchers” believe human extinction is coming. The New York Times should issue a correction to Yuval Harari, Tristan Harris, and Aza Raskin’s OpEd. TIME magazine should also issue a correction to Max Tegmark’s OpEd, which makes the same claim multiple times. That’s the ethical thing to do.

Distracting People from The Real Issues

Media portrayals of this technology tend to be extreme, causing confusion about its possibilities and impossibilities. Rather than emphasizing the extreme edges (e.g., AI Doomers), we need a more factual and less hyped discussion.

There are real issues we need to be worried about regarding the potential impact of generative AI. For example, my article on AI-generated art tools in November 2022 raised the alarm about deepfakes and how this technology can be easily weaponized (those paragraphs are even more relevant today). In addition to spreading falsehoods, there are issues with bias, cybersecurity risks, and a lack of transparency and accountability.

Those issues are unrelated to “human extinction” or “armies of Gollems” controlling our brains. The sensationalism of the AI Dilemma distracts us from the actual issues of today and tomorrow. We should stay away from imaginary threats and God-like/monstrous depictions. The solution to AI-lust (utopia) or AI-lash (Apocalypse) resides in… AI realism.

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 14 April 2023 @ 12:10pm

The AI Doomers’ Playbook

AI Doomerism is becoming mainstream thanks to mass media, which drives our discussion about Generative AI from bad to worse, or from slightly insane to batshit crazy. Instead of out-of-control AI, we have out-of-control panic.

When a British tabloid headline screams, “Attack of the psycho chatbot,” it’s funny. When it’s followed by another front-page headline, “Psycho killer chatbots are befuddled by Wordle,” it’s even funnier. If this type of coverage had stayed in the tabloids, which are known to be sensationalized, that would have been fine.

But recently, prestige news outlets have decided to promote the same level of populist scaremongering: The New York Times published “If we don’t master AI, it will master us” (by Harari, Harris & Raskin), and TIME magazine published “Be willing to destroy a rogue datacenter by airstrike” (by Yudkowsky).

In just a few days, we went from “governments should force a 6-month pause” (the petition from the Future of Life Institute) to “wait, it’s not enough, so data centers should be bombed.” Sadly, this is the narrative that gets media attention and shapes our already hyperbolic AI discourse.

In order to understand the rise of AI Doomerism, here are some influential figures responsible for mainstreaming doomsday scenarios. This is not the full list of AI doomers, just the ones who recently shaped the AI panic cycle (so I’m focusing on them).

AI Panic Marketing: Exhibit A: Sam Altman.

Sam Altman has a habit of urging us to be scared. “Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” he tweeted. “If you’re making AI, it is potentially very good, potentially very terrible,” he told the WSJ. When he shared AI’s bad-case scenario with Connie Loizos, it was “lights out for all of us.”

In an interview with Kara Swisher, Altman expressed how he is “super-nervous” about authoritarians using this technology. He elaborated in an ABC News interview: “A thing that I do worry about is … we’re not going to be the only creator of this technology. There will be other people who don’t put some of the safety limits that we put on it. I’m particularly worried that these models could be used for large-scale disinformation.” These models could also “be used for offensive cyberattacks.” So, “people should be happy that we are a little bit scared of this.” He repeated this message in his following interview with Lex Fridman: “I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”

Given that he shared this story in 2016, it shouldn’t come as a surprise: “My problem is that when my friends get drunk, they talk about the ways the world will END.” One of the “most popular scenarios would be A.I. that attacks us.” “I try not to think about it too much,” Altman continued. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

(Wouldn’t it be easier to just cut back on the drinking and substance abuse?).

Altman’s recent post “Planning for AGI and beyond” is as bombastic as it gets: “Successfully transitioning to a world with superintelligence is perhaps the most important – and hopeful, and scary – project in human history.”

It is at this point that you might ask yourself, “Why would someone frame his company like that?” Well, that’s a good question. The answer is that making OpenAI’s products “the most important and scary – in human history” is part of its marketing strategy. “The paranoia is the marketing.”

“AI doomsaying is absolutely everywhere right now,” wrote Brian Merchant in the LA Times. “Which is exactly the way that OpenAI, the company that stands to benefit the most from everyone believing its product has the power to remake – or unmake – the world, wants it.” Merchant explained Altman’s science fiction-infused marketing frenzy: “Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.”

During the Techlash days in 2019, which focused on social media, Joseph Bernstein explained how the alarm over disinformation (e.g., “Cambridge Analytica was responsible for Brexit and Trump’s 2016 election”) actually “supports Facebook’s sales pitch”: 

“What could be more appealing to an advertiser than a machine that can persuade anyone of anything?”

This can be applied here: The alarm over AI’s magic power (e.g., “replacing humans”) actually “supports OpenAI’s sales pitch”: 

“What could be more appealing to future AI employees and investors than a machine that can become superintelligence?”

AI Panic as a Business. Exhibit A & B: Tristan Harris & Eliezer Yudkowsky.

Altman is at least using apocalyptic AI marketing for actual OpenAI products. The worst kind of doomers are those whose AI panic is their product, their main career, and their source of income. A prime example is the Effective Altruism institutes that claim to be the superior few who can save us from a hypothetical AGI apocalypse.

In March, Tristan Harris, Co-Founder of the Center for Humane Technology, invited leaders to a lecture on how AI could wipe out humanity. To begin his doomsday presentation, he stated: “What nukes are to the physical world … AI is to everything else.”

Steven Levy summarized that lecture at WIRED, saying, “We need to be thoughtful as we roll out AI. But hard to think clearly if it’s presented as the apocalypse.” Apparently, having completed the “Social Dilemma,” Tristan Harris is now working on the AI Dilemma. Oh boy. We can guess how it’s going to look (the “nobody criticized bicycles” guy will make a Frankenstein’s monster/Pandora’s box “documentary”).

In the “Social Dilemma,” he promoted the idea that “Two billion people will have thoughts that they didn’t intend to have” because of the designers’ decisions. But, as Lee Visel pointed out, Harris didn’t provide any evidence that social media designers actually CAN purposely force us to have unwanted thoughts.  

Similarly, there’s no need for evidence now that AI is worse than nuclear power; simply thinking about this analogy makes it true (in Harris’ mind, at least). Did a social media designer force him to have this unwanted thought? (Just wondering). 

To further escalate the AI panic, Tristan Harris published an OpEd in The New York Times with Yuval Noah Harari and Aza Raskin. Among their overdramatic claims: “We have summoned an alien intelligence,” “A.I. could rapidly eat the whole human culture,” and AI’s “godlike powers” will “master us.”

Another statement in this piece was, “Social media was the first contact between A.I. and humanity, and humanity lost.” I found it funny as it came from two men with hundreds of thousands of followers (@harari_yuval 540.4k, @tristanharris 192.6k), who use their social media megaphone … for fear-mongering. The irony is lost on them. 

“This is what happens when you bring together two of the worst thinkers on new technologies,” added Lee Vinsel. “Among other shared tendencies, both bloviate free of empirical inquiry.” 

This is where we should be jealous of AI doomers. Having no evidence and no nuance is extremely convenient (when your only goal is to attack an emerging technology). 

Then came the famous “Open Letter.” This petition from the Future of Life Institute lacked a clear argument or a trade-off analysis. There were only rhetorical questions, like, should we develop imaginary “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” They provided no evidence to support the claim that advanced LLMs pose an unprecedented existential risk. There were a lot of highly speculative assumptions. Yet, they demanded an immediate 6-month pause on training AI systems and argued that “If such a pause cannot be enacted quickly, governments should institute a moratorium.”

Please keep in mind that (1). A $10 million donation from Elon Musk launched the Future of Life Institute in 2015. Out of its total budget of 4 million euros for 2021, Musk Foundation contributed 3.5 million euros (the biggest donor by far). (2). Musk once said that “With artificial intelligence, we are summoning the demon.” (3). Due to this, the institute’s mission is to lobby against extinction, misaligned AI, and killer robots. 

“The authors of the letter believe they are superior. Therefore, they have the right to call a stop, due to the fear that less intelligent humans will be badly influenced by AI,” responded Keith Teare (CEO SignalRank Corporation). “They are taking a paternalistic view of the entire human race, saying, ‘You can’t trust these people with this AI.’ It’s an elitist point of view.”

“It’s worth noting the letter overlooked that much of this work is already happening,” added Spencer Ante (Meta Foresight). “Leading providers of AI are taking AI safety and responsibility very seriously, developing risk-mitigation tools, best practices for responsible use, monitoring platforms for misuse, and learning from human feedback.”

Next, because he thought the open letter didn’t go far enough, Eliezer Yudkowsky took “PhobAI” too far. First, Yudkowsky asked us all to be afraid of made-up risks and an apocalyptic fantasy he has about “superhuman intelligence” “killing literally everyone” (or “kill everyone in the U.S. and in China and on Earth”). Then, he suggested that “preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.” With the explicit advocacy of violent solutions to AI, we have officially reached the height of hysteria.

“Rhetoric from AI doomers is not just ridiculous. It’s dangerous and unethical,” responded Yann LeCun (Chief AI Scientist, Meta). “AI doomism is quickly becoming indistinguishable from an apocalyptic religion. Complete with prophecies of imminent fire and brimstone caused by an omnipotent entity that doesn’t actually exist.”

“You stand a far greater chance of dying from lightning strikes, collisions with deer, peanut allergies, bee stings & ignition or melting of nightwear – than you do from AI,” Michael Shermer wrote to Yudkowsky. “Quit stoking irrational fears.” 

The problem is that “irrational fears” sell. They are beneficial to the ones who spread them. 

How to Spot an AI Doomer?

On April 2nd, Gary Marcus asked: “Confused about the terminology. If I doubt that robots will take over the world, but I am very concerned that a massive glut of authoritative-seeming misinformation will undermine democracy, do I count as a ‘doomer’?”

One of the answers was: “You’re a doomer as long as you bypass participating in the conversation and instead appeal to populist fearmongering and lobbying reactionary, fearful politicians with clickbait.”

Considering all of the above, I decided to define “AI doomer” and provide some criteria:

How to spot an AI Doomer?

  • Making up fake scenarios in which AI will wipe out humanity
  • Doesn’t even bother to have any evidence to back up those scenarios
  • Watched/read too much sci-fi
  • Says that due to AI’s God-like power, it should be stopped
  • Only he (& a few “chosen ones”) can stop it
  • So, scared/hopeless people should support his endeavor ($)

Then, Adam Thierer added another characteristic:

  • Doomers tend to live in a tradeoff-free fantasy land. 

Doomers have a general preference for very amorphous, top-down Precautionary Principle-based solutions, but they (1) rarely discuss how (or if) those schemes would actually work in practice, and (2) almost never discuss the trade-offs/costs their extreme approaches would impose on society/innovation.

Answering Gary Marcus’ question, I do not think he qualifies as a doomer. You need to meet all criteria (he does not). Meanwhile, Tristan Harris and Eliezer Yudkowsky meet all seven. 

Are they ever going to stop this “Panic-as-a-Business”? If the apocalyptic catastrophe doesn’t occur, will the AI doomers ever admit they were wrong? I believe the answer is “No.” 

Doomsday cultists don’t question their own predictions. But you should. 

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 1 March 2023 @ 03:34pm

Overwhelmed By All The Generative AI Headlines? This Guide Is For You

Between Sydney “tried to break up my marriage” and “blew my mind because of her personality,” we have had a lot of journalists anthropomorphizing AI chatbots lately. 

TIME’s cover story decided to go even further and argued: “If future AIs gain the ability to rapidly improve themselves without human guidance or intervention, they could potentially wipe out humanity.” In this scenario, the computer scientists’ job is “making sure the AIs don’t wipe us out!” 

Hmmm. Okay.

There’s a strange synergy now between people who hype AI’s capabilities and those who thereby create false fears (about those so-called capabilities). 

The false fears part of this equation usually escalates to absurdity. Like headlines that begin with a “war” (a new culture clash and a total war between artists and machines), progress to a “deadly war” (“Will AI generators kill the artist?”), and end up in a total Doomsday scenario (“AI could kill Everyone”!). 

I previously called this phenomenon – “Techlash Filter.” In a nutshell, while Instagram filters make us look younger and Lensa makes us hotter, Techlash filters make technology scarier. 

And, oh boy, how AI is scary right now… just see this front page: “Attack of the psycho chatbot.”

Tweet from the author showing the Daily Star’s “ATTACK OF THE PSYCHO CHATBOT” front page, with the author’s caption “ATTACK OF THE STUPID TABLOID”

It’s all overwhelming. But I’m here to tell you that none of this is new. By studying the media’s coverage of AI, we can see how it follows old patterns.

Since we are flooded with news about generative AI and its “magic powers,” I want to help you navigate the terrain. Looking at past media studies, I gathered the “Top 10 AI frames” (By Hannes Cools, Baldwin Van Gorp, and Michaël Opgenhaffen, 2022). They are organized from the most positive (pro-AI) to the most negative (anti-AI). Together, they encapsulate the media’s “know-how” for describing AI. 

Following each title and short description, you’ll see how it is manifested in current media coverage of generative AI. My hope is that after reading this, you’ll be able to cut through the AI hype. 

1. Gate to Heaven.

A win-win situation for humans, where machines do things without human interference. AI brings a futuristic utopian ideal. The sensationalism here exaggerates the potential benefits and positive consequences of AI. 

– Examples: Technology makes us more human | 5 Unexpected ways AI can save the world

2. Helping Hand.

The co-pilot theme. It focuses on AI assisting humans in performing tasks. It includes examples of tasks humans will not need to do in the future because AI will do the job for them. This will free humans up to do other, better, more interesting tasks.

– Examples: 7 ways to use ChatGPT at work to boost your productivity, make your job easier, and save a ton of time | ChatGPT and AI tools help a dyslexic worker send near-perfect emails | How generative AI will help power your presentation in 2023

3. Social Progress and Economic Development.

Improvement process: how AI will herald new social developments. AI as a means of improving the quality of life or solving problems. Economic development includes investments, market benefits, and competitiveness at the local, national, or global level.

– Examples: How generative AI will supercharge productivity | How artificial intelligence can (eventually) benefit poorer countries | Growing VC interest in generative AI

4. Public Accountability and Governance.

The capabilities of AI are dependent on human knowledge. It’s often linked to the responsibility of humans for how AI is shaped and developed. It focuses on policymaking, regulation, and issues like control, ownership, participation, responsiveness, and transparency.

– Examples: The EU wants to regulate your favorite AI tools | How do you regulate advanced AI chatbots like ChatGPT and Bard?

5. Scientific Uncertainty.

A debate over what is known versus unknown, with an emphasis on the unknown. AI is ever-evolving but remains a black box.

– Examples: ChatGPT can be broken by entering these strange words, and nobody is sure why | Asking Bing’s AI whether it’s sentient apparently causes it to totally freak out 

6. Ethics.

AI quests are depicted as right or wrong—a moral judgment: a matter of respect or disrespect for limits, thresholds, and boundaries.

– Examples: Chatbots got big – and their ethical red flags got bigger | How companies can practice ethical AI

Some articles can have two or three themes combined. For example, “The scary truth about AI copyright is nobody knows what will happen next” can be coded as Public Accountability and Governance, Scientific Uncertainty, and Ethics.

7. Conflict

A game among elites, a battle of personalities and groups, who’s ahead or behind / who’s winning or losing in the race to develop the latest AI technology.

– Examples: How ChatGPT kicked off an AI arms race | Search wars reignited by artificial intelligence breakthroughs

8. Shortcoming.

AI lacks specific features that need the proper assistance of humans. Due to its flaws, humans must oversee the technology.

– Examples: Nonsense on Stilts | The hilarious & horrifying hallucinations of AI

9. Kasparov Syndrome.

We will be overruled by AI. It will overthrow us, and humans will lose part of their autonomy, which will result in job losses.

– Examples: ChatGPT may be coming for our jobs. Here are the 10 roles that AI is most likely to replace. | ChatGPT could make these jobs obsolete: ‘The wolf is at the door’

10. Frankenstein’s Monster/Pandora’s Box

AI poses an existential threat to humanity or what it means to be human. It includes the loss of human control (entire autonomy). It calls for action in the face of out-of-control consequences and possible catastrophes. The sensationalism here exaggerates the potential dangers and negative impacts of AI.

– Examples: Is this the start of an AI Takeover? | Advanced AI ‘Could kill everyone’, warn Oxford researcher | The AI arms race is changing everything

Interestingly, studies found that the frames most commonly used by the media when discussing AI are “a helping hand” and “social progress” or the alarming “Frankenstein’s monster/Pandora’s Box.” It’s unsurprising, as the media is drawn to extreme depictions.  

If you think that the above examples represent the peak of the current panic, I’m sorry to say that we haven’t reached it yet. Along with the enthusiastic utopian promises, expect more dystopian descriptions of Skynet (Terminator), HAL 9000 (2001: A Space Odyssey), and Frankenstein’s monster.

The extreme edges provide media outlets with interesting material, for sure. However, “there’s a large greyscale between utopian dreams and dystopian nightmares.” It is the responsibility of tech journalists to minimize both negative and positive hype.

Today, it is more crucial than ever to portray AI – realistically.

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 22 November 2022 @ 03:38pm

AI Art Is Eating The World, And We Need To Discuss Its Wonders And Dangers

After posting the following AI-generated images, I got private replies asking the same question: “Can you tell me how you made these?” So, here I will provide the background and “how to” of creating such AI portraits, but also describe the ethical considerations and the dangers we should address right now.

Astria AI images of Nirit Weiss-Blatt

Background

Generative AI – as opposed to analytical artificial intelligence – can create novel content. It not only analyzes existing datasets but also generates whole new images, text, audio, videos, and code.

Sequoia’s Generative-AI Market Map/Application Landscape, from Sonya Huang’s tweet

As the ability to generate original images based on written text emerged, it became the hottest hype in tech. It all began with the release of DALL-E 2, an improved AI art program from OpenAI. It allowed users to input text descriptions and get images that looked amazing, adorable, or weird as hell.

DALL-E 2 image results

Then, people started hearing about Midjourney (and its vibrant Discord) and Stable Diffusion, an open-source project. (Google’s Imagen and Meta’s image generator were not released to the public.) Stable Diffusion allowed engineers to train the model on any image dataset to churn out any style of art.

Due to the rapid development of the coding community, more specialized generators were introduced, including new killer apps to create AI-generated art from YOUR pictures: Avatar AI, ProfilePicture.AI, and Astria AI. With them, you can create your own AI-generated avatars or profile pictures. You can change some of your features, as demonstrated by Andrew “Boz” Bosworth, Meta CTO, who used AvatarAI to see himself with hair:

Screenshot from Andrew “Boz” Bosworth’s Twitter account

Startups like the ones listed above are booming:

The founders of AvatarAI and ProfilePicture.AI tweet about their sales and growth

In order to use their tools, you need to follow these steps:

1. How to prepare your photos for the AI training

As of now, training Astria AI with your photos costs $10. Every app charges differently for fine-tuning credits (e.g., ProfilePicture AI costs $24, and Avatar AI costs $40). Please note that those charges change quickly as they experiment with their business model.

Here are a few ways to improve the training process (a small cropping sketch follows the examples below):

  • At least 20 pictures, preferably shot or cropped to a 1:1 (square) aspect ratio.
  • At least 10 face close-ups, 5 medium from the chest up, 3 full body.
  • Variation in background, lighting, expressions, and eyes looking in different directions.
  • No glasses/sunglasses. No other people in the pictures.

Examples from my set of pictures
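If you would rather not crop every photo by hand to hit that 1:1 ratio, a short script can do it in bulk. Below is a minimal sketch using Pillow; the folder names and the 512-pixel target are my own illustrative choices, not a requirement of any particular app.

```python
# Minimal sketch: center-crop photos to a 1:1 aspect ratio before uploading them
# for fine-tuning. Folder names and target size are illustrative assumptions.
from pathlib import Path

from PIL import Image

INPUT_DIR = Path("raw_photos")      # hypothetical folder with your originals
OUTPUT_DIR = Path("square_photos")  # cropped copies are written here
TARGET_SIZE = 512                   # a common square size; adjust as needed

OUTPUT_DIR.mkdir(exist_ok=True)

for path in INPUT_DIR.glob("*.jpg"):
    with Image.open(path) as img:
        side = min(img.width, img.height)
        left = (img.width - side) // 2
        top = (img.height - side) // 2
        square = img.crop((left, top, left + side, top + side))
        square = square.resize((TARGET_SIZE, TARGET_SIZE))
        square.save(OUTPUT_DIR / path.name, quality=95)
```

Center-cropping works well for face close-ups; for medium and full-body shots, you may still want to crop manually so nothing important gets cut off.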

Approximately 60 minutes after uploading your pictures, a trained AI model will be ready. Where will you probably need the most guidance? Prompting.

2. How to survive the prompting mess

After the training is complete, a few images will be waiting for you on your page. Those are “default prompts” as examples of the app’s capabilities. To create your own prompts, set the className as “person” (this was recommended by Astria AI).

Formulating the right prompts for your purpose can take a lot of time. You’ll need patience (and motivation) to keep refining the prompts. But when a text prompt comes to life as you envisioned (or better than you envisioned), it feels a bit like magic. To get creative inspiration, I used two search engines, Lexica and Krea. You can search for keywords, scroll until you find an image style you like, and copy the prompt (then change the text to “sks person” to make it your self-portrait).

Screenshot from Lexica

Some prompts are so long that reading them is painful. They usually include the image’s setting (e.g., “highly detailed realistic portrait”) and style (“art by” one of the popular artists). As regular people need help crafting those words, an entirely new role has already emerged: prompt engineering. It’s going to be a desirable skill. Just bear in mind that no matter how professional your prompts are, some results will look WILD. In one image, I had 3 arms (don’t ask me why).
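To make that “copy a prompt, swap in your subject” step concrete, here is a tiny illustrative sketch. The example prompt and the helper function are made up; only the “sks person” token convention comes from the apps discussed above.

```python
# Illustrative sketch: adapt a prompt found on Lexica/Krea to your fine-tuned
# model by swapping the original subject for the trained token ("sks person").
base_prompt = (
    "highly detailed realistic portrait of a young woman, "
    "soft studio lighting, 85mm lens, art by a popular digital artist"
)

def personalize(prompt: str, original_subject: str, token: str = "sks person") -> str:
    """Replace the prompt's subject with the token the model was trained on."""
    return prompt.replace(original_subject, token)

print(personalize(base_prompt, "a young woman"))
# -> "highly detailed realistic portrait of sks person, soft studio lighting, ..."
```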

If you wish to avoid the whole prompt chaos, I have a friend who just used the default ones, was delighted with the results, and shared them everywhere. For these apps to become more popular, I recommend including more “default prompts.”

Potentials and Advantages

1. It’s NOT the END of human creativity

The electronic synthesizer did not kill music, and photography did not kill painting. Instead, they catalyzed new forms of art. AI art is here to stay and can make creators more productive. Creators are going to include such models as part of their creative process. It’s a partnership: AI can serve as a starting point, a sketch tool that provides suggestions, and the creator will improve it further.

2. The path to the masses

Thus far, crypto boosters haven’t answered the simple question of “what is it good for?” and have failed to articulate concrete, compelling use cases for Web3. All we got was needless complexity, vague future-casting, and “cryptocountries.” By contrast, AI-generated art has clear utility for creative industries. It’s already used in advertising, marketing, gaming, architecture, fashion, graphic design, and product design. This Twitter thread provides a variety of use cases, from commerce to the medical imaging domain.

When it comes to AI portraits, I’m thinking of another target audience: teenagers. Why? Because they already spend hours perfecting their pictures with various filters. Make image-generating tools inexpensive and easy to use, and they’ll be your heaviest users. Hopefully, they won’t use it in their dating profiles.

Downsides and Disadvantages

1. Copying by AI was not consented to by the artists

Despite the booming industry, there’s a lack of compensation for artists. Read about their frustration, for example, in how one unwilling illustrator found herself turned into an AI model. Spoiler alert: She didn’t like being turned into a popular prompt for people to mimic, and now thousands of people (soon to be millions) can copy her style of work almost exactly.

Copying artists is a copyright nightmare. The input question is: can you use copyright-protected data to train AI models? The output question is: can you copyright what an AI model creates? Nobody knows the answers, and it’s only the beginning of this debate.

2. This technology can be easily weaponized

A year ago on Techdirt, I summed up the narratives around Facebook: (1) Amplifying the good/bad or a mirror for the ugly, (2) The algorithms’ fault vs. the people who build them or use them, (3) Fixing the machine vs. the underlying societal problems. I believe this discussion also applies to AI-generated art. It should be viewed through the same lens: good, bad, and ugly. Though this technology is delightful and beneficial, there are also negative ramifications of releasing image-manipulation tools and letting humanity play with them.

While DALL-E had a few restrictions, the new competitors had a “hands-off” approach and no safeguards to prevent people from creating sexual or potentially violent and abusive content. Soon after, a subset of users generated deepfake-style images of nude celebrities. (Look surprised). Google’s Dreambooth (which AI-generated avatar tools use) made making deepfakes even easier.

As part of my exploration of the new tools, I also tried Deviant Art’s DreamUp. Its “most recent creations” page displayed various images depicting naked teenage girls. It was disturbing and sickening. In one digital artwork of a teen girl in the snow, the artist commented: “This one is closer to what I was envisioning, apart from being naked. Why DreamUp? Clearly, I need to state ‘clothes’ in my prompt.” That says it all.

According to the new book Data Science in Context: Foundations, Challenges, Opportunities, machine learning advances have made deepfakes more realistic but also enhanced our ability to detect deepfakes, leading to a “cat-and-mouse game.”

In almost every form of technology, there are bad actors playing this cat-and-mouse game. Managing user-generated content online is a headache that social media companies know all too well. Elon Musk’s first two weeks at Twitter magnified that experience — “he courted chaos and found it.” Stability AI released an open-source tool with a belief in radical freedom, courted chaos, and found it in AI-generated porn and CSAM.

Text-to-video isn’t very realistic now, but with the pace at which AI models are developing, it will be in a few months. In a world of synthetic media, seeing will no longer be believing, and the basic unit of visual truth will no longer be credible. The authenticity of every video will be in question. Overall, it will become increasingly difficult to determine whether a piece of text, audio, or video is human-generated or not. It could have a profound impact on trust in online media. The danger is that with the new persuasive visuals, propaganda could be taken to a whole new level. Meanwhile, deepfake detectors are making progress. The arms race is on.

AI-generated art inspires creativity and, as a result, enthusiasm. But as it approaches mass consumption, we can also see the dark side. A revolution of this magnitude can have many consequences, some of which can be downright terrifying. Guardrails are needed now.

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 17 May 2022 @ 12:15pm

A Guide For Tech Journalists: How To Be Bullshit Detectors And Hype Slayers (And Not The Opposite)

Tech journalism is evolving, including how it reports on and critiques tech companies. At the same time, tech journalists should still serve as bullshit detectors and hype slayers. The following tips are intended to help navigate the terrain.

As a general rule, beware of overconfident techies bragging about their innovation capabilities AND overconfident critics accusing that innovation of atrocities. If featured in your article, provide evidence and diverse perspectives to balance their quotes.     

Minimize The Overly Positive Hype

“Silicon Valley entrepreneurs completely believe their own hype all the time,” said Kara Swisher in 2016. “Just because they say something’s going to grow #ToTheMoon, it’s not the case.” It’s the journalists’ job to say, “Well, that’s great, but here are some of the problems we need to look at.” When marketing buzz arises, contextualize the innovation and “explore why the claims might not be true or why the innovation might not live up to the claims.”

Despite years of Techlash, tech companies still release products/services without considering the unintended consequences. A “Poparazzi” app that only lets you take pictures of your friends? Great. It’s a “brilliant new social app” because it lets you “hype up your squad” instead of self-glorification. It’s also not so great, and you should ask: “Be your friends’ poparazzi” – what could possibly go wrong?

The same applies to regulators who release bills without considering the unintended consequences — in a quest to rein in Big Tech. To paraphrase Kara Swisher, “Just because they say something’s going to solve all of our problems, it’s not the case” (thus, bullshit). It’s the journalists’ job to avoid declaring the regulatory reckoning will End Big Tech Dominance when it most likely will not, and to examine new proposals based on past legislation’s ramifications. See, for example, Mike Masnick’s “what could possibly go wrong” piece on the EARN IT Act.

Minimize The Overly Negative Hype

When critics relentlessly focus on the tech industry’s faults, you should place their claims in a broader context (and shouldn’t wait until paragraph 55 to do so). Take, for example, this article about the future of Twitter under Elon Musk, which claimed: “Zuckerberg sits at his celestial keyboard, and he can decide day by day, hour by hour, whether people are going to be more angry or less angry, whether publications are going to live or die. With anti-vax, we saw the same power of Mr. Zuckerberg can be applied to life and death.”

No factual explanation was provided for this premium bullshit, and this is not how any of this works. In a similar vein, we can ask Prof. Shoshana Zuboff if she “sits at her celestial keyboard, and decides day by day, hour by hour, whether people are going to be more angry at Zuckerberg or the new villain Musk.” I mean, she used her keyboard to write that it’s in their power to trade “in human futures.”

If the loudest shouters are given the stage, you end up with tech companies that simply ignore all public criticism as uninformed cynicism. So, challenge conventional narratives: Are they oversimplified or overstated? Be deliberate about which issues need attention and highlight the experts who can offer compelling arguments for specific changes (Bridging-based ranking, for example).

Look For The Underlying Forces 

Reject binary thinking. “Both the optimist and pessimist views of tech miss the point,” suggested WIRED’s Gideon Lichfield. This “0-or-1” logic turns every issue into divisive and tribal: “It’s generally framed as a judgment on the tech itself – ‘this tech is bad’ vs. ‘this tech is good.’” Explore the spaces in between, and the “underlying economic, social, and personal forces that actually determine what that tech will do.” 

First, there are the fundamental structures underneath the surface. Discuss “The Machine” more than its output. Second, many “tech problems” are often “people problems,” rooted in social, political, economic, and cultural factors. 

The pressure to produce fast “hot takes” prioritizes what’s new. Take some time to prioritize what’s important. 

Stop With “The END of __ /__ Is Dead”; It’s Probably Not The Case

The media and social media encourage despairing voices. However, blanket statements obscure nuances and don’t allow for productive inquiry. Yes, tech stocks are plummeting, and a down-cycle is here. That doesn’t mean the economy is collapsing and we’re all doomed. It’s not the dot-com crash, and we can still see amazing results (e.g., revenue surges over 20% Y/Y in 1Q’22) despite supply chain shortages. There are a lot more valuable graphs in “No, America is not collapsing.” 

Also, Silicon Valley is not dead. The Bay and other tech hubs expanded their share of tech jobs during the pandemic. Even Clubhouse is not dead (at least, not yet). Say “farewell” only after it’s official (RIP, iPod). 

Also, Elon Musk buying Twitter is neither “the end of Twitter” nor “the end of democracy as we know it.” It’s another example of pure BS. The Musk-Twitter deal can fix current problems and create a slew of new ones. It’s too soon to know. Sometimes, when you don’t see how things will end up, you can write, next to the speculation, that you just don’t know. Because no one does. Your readers would appreciate your honesty over a eulogy of Twitter and all democracy. Or maybe they won’t. IDK (and that’s okay).

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication

Posted on Techdirt - 1 March 2022 @ 12:01pm

What Happens When A Russian Invasion Takes Place In The Social Smartphone Era

Several days into Russia’s attack on Ukraine, we are already witnessing astonishing stories play out online. Social media platforms, after years of Techlash, are once again in the center of a historic event, as it unfolds.

Different tech issues are still evolving, but for now, here are the key themes.

Information overload

The combination of smartphones, social media, and high-speed data links provides images that are almost certainly faster, more visual, and more voluminous than in any previous major military conflict. What is coming out of Ukraine is simply impossible to produce on such a scale without citizens and soldiers throughout the country having easy access to cellphones, the internet, and, by extension, social media apps.

Social media is fueling a new type of ‘fog of war’

The ability to follow an escalating war is faster and easier than ever. But social media are also vulnerable to rapid-fire disinformation. So, social media are being blamed for fueling a new type of ‘fog of war’, in which information and disinformation are continuously entangled with each other — clarifying and confusing in almost equal measure.

Once again, the Internet is being used as a weapon

Past conflicts in places like Myanmar, India, and the Philippines show that tech giants are often caught off-guard by state-sponsored disinformation crises due to language barriers and a lack of cultural expertise. Now, Kremlin-backed falsehoods are putting the companies’ content policies to the test. It puts social media platforms in a precarious position, focusing global attention on their ability to moderate content ranging from graphic on-the-ground reports about the conflict to misinformation and propaganda.

How can they moderate disinformation without distorting the historical record?

Tech platforms face a difficult question: “How do you mitigate online harms that make war worse for civilians while preserving evidence of human rights abuses and war crimes potentially?”

What about the end-to-end encrypted messaging apps?

Social media platforms have been on high alert for Russian disinformation that would violate their policies. But they have less control over private messaging, where some propaganda efforts have moved to avoid detection.

According to the “Russia’s Propaganda & Disinformation Ecosystem — 2022 Update & New Disclosures” post and image, the Russian media environment, from overt state-run media to covert intelligence-backed outlets, is built on an infrastructure of influencers, anonymous Telegram channels (which have become a very serious and very effective tool of the disinformation machine), and content creators with nebulous ties to the wider ecosystem.

The Russian government restricts access to online services

On Friday, Meta’s president of global affairs, Nick Clegg, announced that the company declined to comply with the Russian government’s requests to “stop fact-checking and labeling of content posted on Facebook by four Russian state-owned media organizations.” “As a result, they have announced they will be restricting the use of our services,” tweeted Clegg. At the heart of this issue are ordinary Russians “using Meta’s apps to express themselves and organize for action.” As Eva Galperin (EFF) noted: “Facebook is where what remains of Russian civil society does its organizing. Cut off access to Facebook and you are cutting off independent journalism and anti-war protests.”

Then, on Saturday, Twitter, which had said it was pausing ads in Ukraine and Russia, said that its service was also being restricted for some people in Russia. We can only assume that it wouldn’t be the last restriction we’ll see as Russia continues to splinter the open internet.

Collective action & debunking falsehood in real-time

It’s become increasingly difficult for Russia to publish believable propaganda. People on the internet are using open-source intelligence tools that have proliferated in recent years to debunk Russia’s claims in real-time. Satellites and cameras gather information every moment of the day, much of it available to the public. And eyewitnesses can speak directly to the public via social media. So, now you have communities of people on the internet geolocating videos and verifying videos coming out of conflict zones.

The ubiquity of high-quality maps in people’s pockets, coupled with social media where anyone can stream videos or photos of what’s happening around them, has given civilians insight into what is happening on the ground in a way that only governments had before. See, for example, two interactive maps, which track the Russian military movements: The Russian Military Forces and the Russia-Ukraine Monitor Map (screenshot from February 27).

But big tech has a lot of complicated choices to make. Google Maps, for example, was applauded as a tool for visualizing the military action, helping researchers track troops and civilians seeking shelter. On Sunday, though, Google blocked two features (live traffic overlay & live busyness) in an effort to help keep Ukrainians safe and after consultations with local officials. It’s a constant balancing act and there’s no easy solution.

Global protests, donations, and empathy

Social media platforms are giving Russians who disagree with the Kremlin a way to make their voice heard. Videos from Russian protests are going viral on Facebook, Twitter, Telegram and other platforms, generating tens of millions of views. Global protests are also being viewed and shared extensively online, like this protest in Rome, shared by an Italian Facebook group. Many organizations post their volunteers’ actions to support Ukrainians, like this Israeli humanitarian mission, rescuing Jewish refugees. Donations are being collected all over the web, and on Saturday, Ukraine’s official Twitter account posted requests for cryptocurrency donations (in bitcoin, ether and USDT). On Sunday, crypto donations to Ukraine reached $20 million.

According to Jon Steinberg, all of these actions “are reminders of why we turn to social media at times like this.” For all their countless faults — including their vulnerabilities to government propaganda and misinformation — tech’s largest platforms can amplify powerful acts of resistance. They can promote truth-tellers over lies. And “they can reinforce our common humanity at even the bleakest of times.” 

“The role of misinformation/disinformation feels minor compared to what we might have expected,” Casey Newton noted. While tech companies need to “stay on alert for viral garbage,” social media is currently seen “as a force multiplier for Ukraine and pro-democracy efforts.”

Déjà vu to the onset of the pandemic

It reminds me a lot of March 2020, when Ben Smith praised that “Facebook, YouTube, and others can actually deliver on their old promise to democratize information and organize communities, and on their newer promise to drain the toxic information swamp.” Ina Fried added that if companies like Facebook and Google “are able to demonstrate they can be a force for good in a trying time, many inside the companies feel they could undo some of the Techlash’s ill will.” The article headline was: Tech’s moment to shine (or not).

On Feb 25, 2022, discussing the Russia-Ukraine conflict, Jon Stewart said social media “got to provide some measure of redemption for itself”: “There’s a part of me that truly hopes that this is where the social media algorithm will shine.”

All of the current online activities — taking advantage of the Social Smartphone Era — leave us with the hope the good can prevail over the bad and the ugly, but also with the fear it would not.

Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication
