Behind effective accelerationism’s techno-optimist smile lies a familiar and dangerous impulse: subordinating human dignity to a technological imperative framed as inevitable.
The effective accelerationism movement (e/acc) presents itself as an enlightened embrace of technological progress, especially artificial general intelligence. Led by figures like Guillaume Verdon and embraced by venture capitalists like Marc Andreessen, the movement claims humanity faces a binary choice: “accelerate or die.” Those who question this narrative are dismissed as “decels” or “doomers” standing in the way of humanity’s cosmic destiny.
Nowhere is this authoritarian impulse more clearly articulated than in Andreessen’s “Techno-Optimist Manifesto”—a document that warrants direct examination. Strip away its futuristic veneer, and what remains is essentially 21st century fascism in digital clothing.
Consider the manifesto’s central claims. It flatly rejects the legitimacy of democratic regulation over technology: “We believe markets—free people making free choices—are the proper determinant of which technologies are created and deployed.” It declares technology the solution to all problems while dismissing concerns about inequality, sustainability, or governance as wrongheaded: “We oppose the philosophy of the unproductive ‘steady state.’” Most tellingly, it explicitly rejects democratic oversight: “We are pro-civilization and thus we are focused on the private sector,” as if civilization itself is incompatible with public governance.
This isn’t mere enthusiasm for innovation; it’s a comprehensive political ideology that seeks to replace democratic deliberation with technological determinism and market fundamentalism. The manifesto’s vision is fundamentally feudal: a world where tech oligarchs determine humanity’s course, unencumbered by democratic institutions or public accountability. This isn’t optimism—it’s authoritarianism with a Silicon Valley gloss.
Andreessen positions himself as a philosopher-king of technological progress while demonstrating remarkable blindness to his own limitations. His breathless championing of Web3 and crypto as civilization’s inevitable future now looks more like hubris than vision as those markets have cratered. His venture firm, a16z, nonetheless managed to unload much of its token holdings onto retail investors before the crash, a practice any reasonable person would find ethically troubling. This pattern of privatizing gains while socializing losses perfectly illustrates the movement’s underlying philosophy: technological “inevitability” for the masses, insider protection for the elite.
What makes e/acc dangerous isn’t enthusiasm for technology but its underlying technological determinism—the belief that innovation follows a predetermined path humans must accept rather than direct. This deterministic view treats human agency as largely irrelevant, serious debate as futile, and skepticism as dangerous heresy. We’ve seen this pattern before in other deterministic ideologies, from Marxist historical inevitability to market fundamentalism’s “invisible hand.” Marxism once declared proletarian revolution inevitable, sidelining debate about the means. Free-market fundamentalism claimed deregulation was destiny, ignoring warnings of catastrophic risk. Both left profound damage in their wake.
Technological determinism doesn’t just silence debate—it quietly erases the belief that humans have meaningful agency in shaping their future.
The movement’s practice of labeling critics as “decels” reveals its epistemic authoritarianism—a system in which questioning the accelerationist narrative becomes not just incorrect but morally suspect. This approach limits pluralistic debate, silences valid ethical concerns, and frames caution as weakness rather than wisdom. When questioning technological development is treated as opposition to progress itself—as an obstacle rather than necessary caution—the movement has crossed from debate into dogma.
This authoritarian impulse isn’t accidental but essential to the movement’s character. Its leading voices consistently present themselves not as participants in democratic deliberation but as visionaries whose insight transcends normal political constraints. There’s something fundamentally fascistic in this self-conception—the belief that technological “greatness” requires bypassing democratic processes and dismissing public concerns as ignorance.
Let’s be very clear about what this is: a fascist disposition wrapped in techno-futurism. The historical parallels are too striking to ignore. Like 20th century fascism, it glorifies speed and power over deliberation and equity. It frames democratic oversight as weakness and celebrates the will of technological “pioneers” over collective wisdom. It positions a self-selected elite as the arbiters of humanity’s future while dismissing those who disagree as obstacles to progress. If this isn’t fascism in contemporary form, what would be?
Perhaps most troubling is e/acc’s cynicism about human dignity. By explicitly subordinating traditional ethical values to technological imperatives and cosmic entropy maximization, the movement creates a moral calculus indifferent or even hostile to individual and collective human flourishing. When technology becomes an end in itself rather than a means to human ends, we risk a profound moral impoverishment—technological nihilism wearing the mask of cosmic purpose.
If we reject technological authoritarianism, the alternative isn’t Luddism—it’s philosophical liberalism, with its firm commitment to pluralism, human dignity, and epistemic humility. Liberal democracy isn’t anti-technology—it insists only that technological development must remain subject to democratic accountability, ethical oversight, and meaningful consent. Liberalism sees technological progress not as inevitable, but as an ongoing human choice. Liberal democracy exists not to maximize entropy or technological development for its own sake, but to safeguard conditions for diverse human flourishing.
What’s actually at stake in this debate isn’t just the pace of innovation but whether humans meaningfully shape their own future. E/acc’s seductive simplicity—its promise that surrendering to technological inevitability will solve humanity’s problems—can slide quickly into authoritarian governance justified by “inevitable” technological imperatives. We’re already seeing these dynamics at work in real-world contexts, as when the Trump administration uses tariffs as leverage to force countries to accept Elon Musk’s Starlink—a fusion of technological and political power that bypasses democratic accountability.
The center must be held against this technological determinism. Insisting that two plus two equals four means insisting on seeing reality clearly, not through the distorting lens of inevitability narratives that conveniently serve those already in power. Human dignity and democratic legitimacy aren’t obstacles to technological advancement—they’re its moral foundation. Without them, technology inevitably becomes not a force for liberation, but merely another form of authoritarian control—no matter how brightly it smiles.
Mike Brock is a former tech exec who was on the leadership team at Block. Originally published at his Notes From the Circus.
Effective Altruism (EA) is typically explained as a philosophy that encourages individuals to do the “most good” with their resources (money, skills). Its “effective giving” arm was marketed as funding evidence-based charities serving the global poor. The Effective Altruism philosophy was formally crystallized as a social movement with the launch of the Centre for Effective Altruism (CEA) in February 2012 by Toby Ord, Will MacAskill, Nick Beckstead, and Michelle Hutchinson. Two other organizations, “Giving What We Can” (GWWC) and “80,000 Hours,” were brought under CEA’s umbrella, and the movement became officially known as Effective Altruism.
Effective Altruists (EAs) were praised in the media as “charity nerds” looking to maximize the number of “lives saved” per dollar spent, with initiatives like providing anti-malarial bed nets in sub-Saharan Africa.
If this movement sounds familiar to you, it’s thanks to Sam Bankman-Fried (SBF). With FTX, Bankman-Fried was attempting to fulfill what William MacAskill taught him about Effective Altruism: “Earn to give.” In November 2023, SBF was convicted of seven fraud charges (stealing $10 billion from customers and investors). In March 2024, SBF was sentenced to 25 years in prison. Since SBF was one of the largest Effective Altruism donors, public perception of this movement has declined due to his fraudulent behavior. It turned out that the “Earn to give” concept was susceptible to the “Ends justify the means” mentality.
In 2016, the main funder of Effective Altruism, Open Philanthropy, designated “AI Safety” a priority area, and the leading EA organization 80,000 Hours declared artificial intelligence (AI) existential risk (x-risk) is the world’s most pressing problem. It looked like a major shift in focus and was portrayed as a “mission drift.” It wasn’t.
What looked to outsiders – in the general public, academia, media, and politics – as a “sudden embrace of AI x-risk” was a misconception. The confusion existed because many people were unaware that Effective Altruism has always been devoted to this agenda.
Effective Altruism’s “brand management”
Since its inception, Effective Altruism has been obsessed with the existential risk (x-risk) posed by artificial intelligence. Because the movement’s leaders recognized this could be perceived as “confusing for non-EAs,” they decided to attract donations and recruit new members through less controversial causes like poverty relief and “sending money to Africa.”
When the movement was still small, its members planned these bait-and-switch tactics in plain sight (in old forum discussions).
A dissertation by Mollie Gleiberman methodically analyzes the distinction between the “public-facing EA” and the inward-facing “core EA.” Among the study findings: “From the beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk).”
“EA’s key intellectual architects were all directly or peripherally involved in transhumanism, and the global poverty angle was merely a stepping stone to rationalize the progression from a non-controversial goal (saving lives in poor countries) to transhumanism’s far more radical aim,” explains Gleiberman. It was part of their “brand management” strategy to conceal the latter.
The public-facing discourse of “giving to the poor” (in popular media and books) was a mirage designed to draw people into the movement and then lead them to the “core EA” cause, x-risk, which was discussed in inward-facing spaces. The guidance was to promote the public-facing cause and keep quiet about the core cause. Influential Effective Altruists explicitly wrote that this was the best way to grow the movement.
In public-facing/grassroots EA, the target recipients of donations, typically understood to be GiveWell’s top recommendations, are causes like AMF – Against Malaria Foundation. “Here, the beneficiaries of EA donations are disadvantaged people in the poorest countries of the world,” says Gleiberman. “In stark contrast to this, the target recipients of donations in core EA are the EAs themselves. Philanthropic donations that support privileged students at elite universities in the US and UK are suddenly no longer one of the worst forms of charity but one of the best. Rather than living frugally (giving up a vacation/a restaurant) so as to have more money to donate to AMF, providing such perks is now understood as essential for the well-being and productivity of the EAs, since they are working to protect the entire future of humanity.”
“We should be kind of quiet about it in public-facing spaces”
Let the evidence speak for itself. The following quotes are from three community forums where Effective Altruists converse with each other: Felicifia (inactive since 2014), LessWrong, and EA Forum.
On June 4, 2012, Will Crouch (it was before he changed his last name to MacAskill) had already pointed out (on the Felicifia forum) that “new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation.”
On November 10, 2012, Will Crouch (MacAskill) wrote on the LessWrong forum that “it seems fairly common for people who start thinking about effective altruism to ultimately think that x-risk mitigation is one of or the most important cause area.” In the same message, he also argued that “it’s still a good thing to save someone’s life in the developing world,” however, “of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.”
In 2011, a leader of the EA movement, an influential GWWC leader/CEA affiliate, who used a “utilitymonster” username on the Felicifia forum, had a discussion with a high-school student about the “High Impact Career” (HIC, later rebranded to 80,000 Hours). The high schooler wrote: “But HIC always seems to talk about things in terms of ‘lives saved,’ I’ve never heard them mentioning other things to donate to.” Utilitymonster replied: “That’s exactly the right thing for HIC to do. Talk about ‘lives saved’ with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk.”
Another influential figure, Eliezer Yudkowsky, wrote on LessWrong in 2013: “I regard the non-x-risk parts of EA as being important only insofar as they raise visibility and eventually get more people involved in, as I would put it, the actual plot.”
In a comment on a Robert Wiblin post in 2015, Eliezer Yudkowsky clarified: “As I’ve said repeatedly, xrisk cannot be the public face of EA, OPP [OpenPhil] can’t be the public face of EA. Only ‘sending money to Africa’ is immediately comprehensible as Good and only an immediately comprehensible Good can make up for the terrible PR profile of maximization or cause neutrality. And putting AI in there is just shooting yourself in the foot.”
Rob Bensinger, the research communications manager at MIRI (and prominent EA movement member), argued in 2016 for a middle approach: “In fairness to the ‘MIRI is bad PR for EA’ perspective, I’ve seen MIRI’s cofounder (Eliezer Yudkowsky) make the argument himself that things like malaria nets should be the public face of EA, not AI risk. Though I’m not sure I agree […]. If we were optimizing for having the right ‘public face’ I think we’d be talking more about things that are in between malaria nets and AI […] like biosecurity and macroeconomic policy reform.”
Scott Alexander (Siskind) is the author of the influential rationalist blog “Slate Star Codex” and “Astral Codex Ten.” In 2015, he acknowledged that he supports the AI-safety/x-risk cause area, but believes Effective Altruists should not mention it in public-facing material: “Existential risk isn’t the most useful public face for effective altruism – everyone including Eliezer Yudkowsky agrees about that.” In the same year, 2015, he also wrote: “Several people have recently argued that the effective altruist movement should distance itself from AI risk and other far-future causes lest it make them seem weird and turn off potential recruits. Even proponents of AI risk charities like myself agree that we should be kind of quiet about it in public-facing spaces.”
In 2014, Peter Wildeford (then Hurford) published a conversation about “EA Marketing” with EA communications specialist Michael Bitton. Peter Wildeford is the co-founder and co-CEO of Rethink Priorities and Chief Advisory Executive at IAPS (Institute for AI Policy and Strategy). The following segment was about why most people will not be real Effective Altruists (EAs):
“Things in the ea community could be a turn-off to some people. While the connection to utilitarianism is ok, things like cryonics, transhumanism, insect suffering, AGI, eugenics, whole brain emulation, suffering subroutines, the cost-effectiveness of having kids, polyamory, intelligence-enhancing drugs, the ethics of terraforming, bioterrorism, nanotechnology, synthetic biology, mindhacking, etc. might not appeal well.
There’s a chance that people might accept the more mainstream global poverty angle, but be turned off by other aspects of EA. Bitton is unsure whether this is meant to be a reason for de-emphasizing these other aspects of the movement. Obviously, we want to attract more people, but also people that are more EA.”
“Longtermism is a bad ‘on-ramp’ to EA,” wrote a community member on the Effective Altruism Forum. “AI safety is new and complicated, making it more likely that people […] find the focus on AI risks to be cult-like (potentially causing them to never get involved with EA in the first place).”
Jan Kulveit, who leads the European Summer Program on Rationality (ESPR), shared on Facebook in 2018: “I became an EA in 2016, and [at] the time, while a lot of the ‘outward-facing’ materials were about global poverty etc., with notes about AI safety or far future at much less prominent places. I wanted to discover what is the actual cutting-edge thought, went to EAGx Oxford and my impression was the core people from the movement mostly thought far future is the most promising area, and xrisk/AI safety interventions are top priority. I was quite happy with that […] However, I was somewhat at unease that there was this discrepancy between a lot of outward-facing content and what the core actually thinks. With some exaggeration, it felt like the communication structure is somewhat resembling a conspiracy or a church, where the outward-facing ideas are easily digestible, like anti-malaria nets, but as you get deeper, you discover very different ideas.”
Prominent EA community member and blogger Ozy Brennan summarized this discrepancy in 2017: “A lot of introductory effective altruism material uses global poverty examples, even articles which were written by people I know perfectly fucking well only donate to MIRI.”
As Effective Altruists engaged more deeply with the movement, they were encouraged to shift to AI x-risk.
“My perception is that many x-risk people have been clear from the start that they view the rest of EA merely as a recruitment tool to get people interested in the concept and then convert them to Xrisk causes.” (Alasdair Pearce, 2015).
“I used to work for an organization in EA, and I am still quite active in the community. 1 – I’ve heard people say things like, ‘Sure, we say that effective altruism is about global poverty, but — wink, nod — that’s just what we do to get people in the door so that we can convert them to helping out with AI/animal suffering/(insert weird cause here).’ This disturbs me.” (Anonymous#23, 2017).
“In my time as a community builder […] I saw the downsides of this. […] Concerns that the EA community is doing a bait-and-switch tactic of ‘come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.’ […] Personally feeling uncomfortable because it seemed to me that my 80,000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else.” (weeatquince [Sam Hilton], 2020).
Austin Chen, the co-founder of Manifold Markets, wrote on the Effective Altruism Forum in 2020: “On one hand, basically all the smart EA people I trust seem to be into longtermism; it seems well-argued and I feel a vague obligation to join in too. On the other, the argument for near-term evidence-based interventions like AMF [Against Malaria Foundation] is what got me […] into EA in the first place.”
In 2019, EA Hub published a guide: “Tips to help your conversation go well.” Among tips like “Highlight the process of EA” and “Use the person’s interest,” there was “Preventing ‘Bait and Switch.’” The post acknowledged that “many leaders of EA organizations are most focused on community building and the long-term future than animal advocacy and global poverty.” Therefore, to avoid the perception of a bait-and-switch, it recommended mentioning AI x-risk at some point:
“It is likely easier, and possibly more compelling, to talk about cause areas that are more widely understood and cared about, such as global poverty and animal welfare. However, mentioning only one or two less controversial causes might be misleading, e.g. a person could become interested through evidence-based effective global poverty interventions, and feel misled at an EA event mostly discussing highly speculative research into a cause area they don’t understand or care about. This can feel like a “bait and switch”—they are baited with something they care about and then the conversation is switched to another area. One way of reducing this tension is to ensure you mention a wide range of global issues that EAs are interested in, even if you spend more time on one issue.”
Oliver Habryka is influential in EA as a fund manager for the LTFF, a grantmaker for the Survival and Flourishing Fund, and the leader of the LessWrong/Lightcone Infrastructure team. He claimed that the only reason EA should continue supporting non-longtermist efforts is to preserve the public’s perception of the movement:
“To be clear, my primary reason for why EA shouldn’t entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn’t the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.”
The structure of Effective Altruism rhetoric
The researcher Mollie Gleiberman explains the EA’s “strategic ambiguity”: “EA has multiple discourses running simultaneously, using the same terminology to mean different things depending on the target audience. The most important aspect of this double rhetoric, however, is not that it maintains two distinct arenas of understanding, but that it also serves as a credibility bridge between them, across which movement recruits (and, increasingly, the general public) are led in incremental steps from the less controversial position to the far more radical position.”
When Effective Altruists talked in public about “doing good,” “helping others,” “caring about the world,” and pursuing “the most impact,” the public understanding was that it meant eliminating global poverty and helping the needy and vulnerable. Inward, “doing good” and the “most pressing problems” were understood as working to mainstream core EA ideas like extinction from unaligned AI.
In communication with “core EAs,” “the initial focus on global poverty is explained as merely an example used to illustrate the concept – not the actual cause endorsed by most EAs.”
Jonas Vollmer has been involved with EA since 2012 and held positions of considerable influence in terms of allocating funding (EA Foundation/CLR Fund, CEA EA Funds). In 2018, he candidly explained when asked about his EA organization “Raising for Effective Giving” (REG): “REG prioritizes long-term future causes, it’s just much easier to fundraise for poverty charities.”
The entire point was to identify whatever messaging works best to produce the outcomes that movement founders, thought leaders, and funders actually wished to see. It was all about marketing to outsiders.
The “Funnel Model”
According to the Centre for Effective Altruism, “When describing the target audience of our projects, it is useful to have labels for different parts of the community.”
The levels are: Audience, followers, participants, contributors, core, and leadership.
In 2018, in a post entitled “The Funnel Model,” CEA elaborated that “Different parts of CEA operate to bring people into different parts of the funnel.”
At first, CEA concentrated outreach on the top of the funnel through extensive popular media coverage, including MacAskill’s Quartz column and his book ‘Doing Good Better,’ Singer’s TED talk, and Singer’s ‘The Most Good You Can Do.’ The idea was to create a broad base of poverty-focused, grassroots Effective Altruists to help maintain momentum and legitimacy, and to act as an initial entry point to the funnel, from which members sympathetic to core aims could be recruited.
The 2017 edition of the movement’s annual survey of participants (conducted by the EA organization Rethink Charity) noted that this is a common trajectory: “New EAs are typically attracted to poverty relief as a top cause initially, but subsequently branch out after exploring other EA cause areas. An extension of this line of thinking credits increased familiarity with EA for making AI more palatable as a cause area. In other words, the top of the EA outreach funnel is most relatable to newcomers (poverty), while cause areas toward the bottom of the funnel (AI) seem more appealing with time and further exposure.”
According to the Centre for Effective Altruism, that’s the ideal route. It wrote in 2018: “Trying to get a few people all the way through the funnel is more important than getting every person to the next stage.”
The magnitude and implications of Effective Altruism, says Gleiberman, “cannot be grasped until people are willing to look at the evidence beyond EA’s glossy front cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.”
Key takeaways
– Public-facing EA vs. core EA

Public-facing/grassroots EA (audience, followers, participants):
The main focus is effective giving à la Peter Singer.
The main cause area is global health, targeting the ‘distant poor’ in developing countries.
The donors support organizations doing direct anti-poverty work.
Core/highly engaged EA (contributors, core, leadership):
The main focus is x-risk/longtermism à la Nick Bostrom and Eliezer Yudkowsky.
The main cause areas are x-risk, AI-safety, ‘global priorities research,’ and EA movement-building.
The donors support highly-engaged EAs to build career capital, boost their productivity, and/or start new EA organizations; research; policy-making/agenda setting.
With AI doomers intensifying their attacks on the open-source community, it becomes clear that this group’s “doing good” is other groups’ nightmare.
– Effective Altruism was a Trojan horse
It’s now evident that “sending money to Africa,” as Eliezer Yudkowsky acknowledged, was never the “actual plot.” Or, as Will MacAskill wrote in 2012, “alleviating global poverty is dwarfed by existential risk mitigation.” The Effective Altruism founders planned – from day one – to mislead donors and new members in order to build the movement’s brand and community.
Its core leaders prioritized the x-risk agenda and treated global poverty alleviation merely as an initial step toward converting new recruits to longtermism/x-risk, a pipeline that also happened to enlist more people in making those same leaders rich.
This needs to be investigated further.
Gleiberman observes that “The movement clearly prioritizes ‘longtermism’/AI-safety/x-risk, but still wishes to benefit from the credibility that global poverty-focused EA brings.” We now know it was a PR strategy all along. So, no. They do not deserve this kind of credibility.
Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and “AI Panic” newsletter.
In 2023, the extreme ideology of “human extinction from AI” became one of the most prominent narratives in tech discourse. It was followed by extreme regulation proposals.
As we enter 2024, let’s take a moment to reflect: How did we get here?
2022: Public release of LLMs
The first big news story on LLMs (Large Language Models) can be traced to a (now famous) Google engineer. In June 2022, Blake Lemoine went on a media tour to claim that Google’s LaMDA (Language Model for Dialogue Application) is “sentient.” Lemoine compared LaMDA to “an 8-year-old kid that happens to know physics.”
This news cycle was met with skepticism: “Robots can’t think or feel, despite what the researchers who build them want to believe. A.I. is not sentient. Why do people say it is?”
In August 2022, OpenAI made DALL-E 2 accessible to 1 million people.
In November 2022, the company launched a user-friendly chatbot named ChatGPT.
People started interacting with more advanced AI systems and impressive generative AI tools, with Blake Lemoine’s story in the background.
At first, news articles debated issues like copyright and consent regarding AI-generated images (e.g., “AI Creating ‘Art’ Is An Ethical And Copyright Nightmare”) or how students will use ChatGPT to cheat on their assignments (e.g., “New York City blocks use of the ChatGPT bot in its schools,” “The College Essay Is Dead”).
2023: The AI monster must be tamed, or we will all die!
The AI arms race escalated when Microsoft’s Bing and Google’s Bard were launched back-to-back in February 2023. The overhyped utopian dreams helped fuel equally overhyped dystopian nightmares.
A turning point came after the release of New York Times columnist Kevin Roose’s story on his disturbing conversation with Microsoft’s new Bing chatbot. It has since become known as the “Sydney tried to break up my marriage” story. The printed version included parts of Roose’s correspondence with the chatbot, framed as “Bing’s Chatbot Drew Me In and Creeped Me Out.”
“The normal way that you deal with software that has a user interface bug is you just go fix the bug and apologize to the customer that triggered it,” responded Microsoft CTO Kevin Scott. “This one just happened to be one of the most-read stories in New York Times history.”
From there on, it snowballed into a headline competition, as noted by the Center for Data Innovation: “Once news media first get wind of a panic, it becomes a game of one-upmanship: the more outlandish the claims, the better.” It reached that point with TIME magazine’s June 12, 2023, cover story: THE END OF HUMANITY.
Two open letters on “existential risk” (AI “x-risk”) and numerous opinion pieces were published in 2023.
The first open letter was on March 22, 2023, calling for a 6-month pause. It was initiated by the Future of Life Institute, which was co-founded by Jaan Tallinn, Max Tegmark, Viktoriya Krakovna, Anthony Aguirre, and Meia Chita-Tegmark, and funded by Elon Musk (nearly 90% of FLI’s funds).
The letter called for AI labs “to immediately pause for at least six months the training of AI systems more powerful than GPT-4.” The open letter argued that “If such a pause cannot be enacted quickly, governments should institute a moratorium.” The reasoning was in the form of a rhetorical question: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?”
It’s worth mentioning that many who signed this letter did not actually believe AI poses an existential risk, but they wanted to draw attention to the various risks that worried them. The criticism was that “Many top AI researchers and computer scientists do not agree that this ‘doomer’ narrative deserves so much attention.”
The second open letter claimed AI is as risky as pandemics and nuclear war. It was initiated by the Center for AI Safety, which was founded by Dan Hendrycks and Oliver Zhang, and funded by Open Philanthropy, an Effective Altruism grant-making organization, run by Dustin Moskovitz and Cari Tuna (over 90% of CAIS’s funds). The letter was launched in the New York Times with the headline, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.”
These statements resulted in newspapers’ opinion sections being flooded with doomsday theories. In their extreme rhetoric, they warned against apocalyptic “end times” scenarios and called for sweeping regulatory interventions.
Dan Hendrycks, from the Center for AI Safety, warned we could be on “a pathway toward being supplanted as the earth’s dominant species.” (At the same time, he joined as an advisor to Elon Musk’s xAI startup).
Zvi Mowshowitz (of the “Don’t Worry About the Vase” Substack) claimed that “Competing AGIs might use Earth’s resources in ways incompatible with our survival. We could starve, boil or freeze.”
Michael Cuenco, associate editor of American Affairs, asked to put “the AI revolution in a deep freeze” and called for a literal “Butlerian Jihad.”
Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), asked to “Shut down all the large GPU clusters. Shut down all the large training runs. Track all GPUs sold. Be willing to destroy a rogue datacenter by airstrike.”
Conjecture’s Connor Leahy, who said, “I do not expect us to make it out of this century alive; I’m not even sure we’ll get out of this decade,” was invited to the House of Lords, where he proposed “a global AI ‘Kill Switch.’”
All the grandiose claims and calls for an AI moratorium spread from mass media, through lobbying efforts, to politicians’ talking points. When AI Doomers became media heroes and policy advocates, it revealed what is behind them: A well-oiled “x-risk” machine.
Since 2014: Effective Altruism has funded the “AI Existential Risk” ecosystem with half a billion dollars
This funding did NOT include investments in “near-term AI Safety concerns such as effects on labor market, fairness, privacy, ethics, disinformation, etc.” The focus was on “reducing risks from advanced AI such as existential risks.” Hence, the hypothetical AI Apocalypse.
2024: Backlash is coming
On November 24, 2023, Harvard’s Steven Pinker shared: “I was a fan of Effective Altruism. But it became cultish. Happy to donate to save the most lives in Africa, but not to pay techies to fret about AI turning us into paperclips. Hope they extricate themselves from this rut.” In light of the half-a-billion funding for “AI Existential Safety,” he added that this money could have saved 100,000 lives (Malaria calculation). Thus, “This is not Effective Altruism.”
In 2023, EA-backed “AI x-risk” took over the AI industry, AI media coverage, and AI regulation.
Nowadays, more and more information is coming out about the “influence operation” and its impact on AI policy. See, for example, the reporting on Rishi Sunak’s AI agenda and Joe Biden’s AI order.
In 2024, this billionaire-backed influence campaign may backfire. Hopefully, a more significant reckoning will follow.