Effective Altruism’s Bait-and-Switch: From Global Poverty To AI Doomerism

from the come-for-the-ending-global-poverty,-stay-for-the-ai-doomerism dept

The Effective Altruism movement

Effective Altruism (EA) is typically explained as a philosophy that encourages individuals to do the “most good” with their resources (money, skills). Its “effective giving” arm was marketed as funding evidence-based charities serving the global poor. The Effective Altruism philosophy was formally crystallized as a social movement with the launch of the Centre for Effective Altruism (CEA) in February 2012 by Toby Ord, Will MacAskill, Nick Beckstead, and Michelle Hutchinson. Two other organizations, “Giving What We Can” (GWWC) and “80,000 Hours,” were brought under CEA’s umbrella, and the movement became officially known as Effective Altruism.

Effective Altruists (EAs) were praised in the media as “charity nerds” looking to maximize the number of “lives saved” per dollar spent, with initiatives like providing anti-malarial bed nets in sub-Saharan Africa.

If this movement sounds familiar to you, it’s thanks to Sam Bankman-Fried (SBF). With FTX, Bankman-Fried was attempting to fulfill what Will MacAskill taught him about Effective Altruism: “Earn to give.” In November 2023, SBF was convicted of seven fraud charges (stealing $10 billion from customers and investors). In March 2024, he was sentenced to 25 years in prison. Since SBF was one of Effective Altruism’s largest donors, his fraud dragged down public perception of the movement. It turned out that the “Earn to give” concept was susceptible to an “ends justify the means” mentality.

In 2016, Effective Altruism’s main funder, Open Philanthropy, designated “AI Safety” a priority area, and the leading EA organization 80,000 Hours declared artificial intelligence (AI) existential risk (x-risk) the world’s most pressing problem. It looked like a major shift in focus and was portrayed as “mission drift.” It wasn’t.

What looked to outsiders – in the general public, academia, media, and politics – as a “sudden embrace of AI x-risk” was a misconception. The confusion existed because many people were unaware that Effective Altruism has always been devoted to this agenda.

Effective Altruism’s “brand management”

Since its inception, Effective Altruism has been obsessed with the existential risk (x-risk) posed by artificial intelligence. Because the movement’s leaders recognized that this could be perceived as “confusing for non-EAs,” they decided to solicit donations and recruit new members through different causes, like poverty and “sending money to Africa.”

When the movement was still small, they planned these bait-and-switch tactics in plain sight, in old forum discussions.

A dissertation by Mollie Gleiberman methodically analyzes the distinction between the “public-facing EA” and the inward-facing “core EA.” Among the study findings: “From the beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk).”

“EA’s key intellectual architects were all directly or peripherally involved in transhumanism, and the global poverty angle was merely a stepping stone to rationalize the progression from a non-controversial goal (saving lives in poor countries) to transhumanism’s far more radical aim,” explains Gleiberman. It was part of their “brand management” strategy to conceal the latter.

The public-facing discourse of “giving to the poor” (in popular media and books) was a mirage designed to get people into the movement and then lead them to the “core EA,” x-risk, which is discussed in inward-facing spaces. The guidance was to promote the publicly-facing cause and keep quiet about the core cause. Influential Effective Altruists explicitly wrote that this was the best way to grow the movement.

In public-facing/grassroots EA, the target recipients of donations, typically understood to be GiveWell’s top recommendations, are causes like AMF – Against Malaria Foundation. “Here, the beneficiaries of EA donations are disadvantaged people in the poorest countries of the world,” says Gleiberman. “In stark contrast to this, the target recipients of donations in core EA are the EAs themselves. Philanthropic donations that support privileged students at elite universities in the US and UK are suddenly no longer one of the worst forms of charity but one of the best. Rather than living frugally (giving up a vacation/a restaurant) so as to have more money to donate to AMF, providing such perks is now understood as essential for the well-being and productivity of the EAs, since they are working to protect the entire future of humanity.”

“We should be kind of quiet about it in public-facing spaces”

Let the evidence speak for itself. The following quotes are from three community forums where Effective Altruists converse with each other: Felicifia (inactive since 2014), LessWrong, and EA Forum.

On June 4, 2012, Will Crouch (this was before he changed his last name to MacAskill) had already pointed out on the Felicifia forum that “new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation.”


On November 10, 2012, Will Crouch (MacAskill) wrote on the LessWrong forum that “it seems fairly common for people who start thinking about effective altruism to ultimately think that x-risk mitigation is one of or the most important cause area.” In the same message, he also argued that “it’s still a good thing to save someone’s life in the developing world,” however, “of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.”

In 2011, an influential GWWC leader/CEA affiliate, who used a “utilitymonster” username on the Felicifia forum, had a discussion with a high-school student about “High Impact Careers” (HIC, later rebranded as 80,000 Hours). The high schooler wrote: “But HIC always seems to talk about things in terms of ‘lives saved,’ I’ve never heard them mentioning other things to donate to.” Utilitymonster replied: “That’s exactly the right thing for HIC to do. Talk about ‘lives saved’ with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk.”

Another influential figure, Eliezer Yudkowsky, wrote on LessWrong in 2013: “I regard the non-x-risk parts of EA as being important only insofar as they raise visibility and eventually get more people involved in, as I would put it, the actual plot.”

In a comment on a Robert Wiblin post in 2015, Eliezer Yudkowsky clarified: “As I’ve said repeatedly, xrisk cannot be the public face of EA, OPP [OpenPhil] can’t be the public face of EA. Only ‘sending money to Africa’ is immediately comprehensible as Good and only an immediately comprehensible Good can make up for the terrible PR profile of maximization or cause neutrality. And putting AI in there is just shooting yourself in the foot.”


Rob Bensinger, the research communications manager at MIRI (and prominent EA movement member), argued in 2016 for a middle approach: “In fairness to the ‘MIRI is bad PR for EA’ perspective, I’ve seen MIRI’s cofounder (Eliezer Yudkowsky) make the argument himself that things like malaria nets should be the public face of EA, not AI risk. Though I’m not sure I agree […]. If we were optimizing for having the right ‘public face’ I think we’d be talking more about things that are in between malaria nets and AI […] like biosecurity and macroeconomic policy reform.”

Scott Alexander (Siskind) is the author of the influential rationalist blog “Slate Star Codex” and “Astral Codex Ten.” In 2015, he acknowledged that he supports the AI-safety/x-risk cause area, but believes Effective Altruists should not mention it in public-facing material: “Existential risk isn’t the most useful public face for effective altruism – everyone including Eliezer Yudkowsky agrees about that.” In the same year, 2015, he also wrote: “Several people have recently argued that the effective altruist movement should distance itself from AI risk and other far-future causes lest it make them seem weird and turn off potential recruits. Even proponents of AI risk charities like myself agree that we should be kind of quiet about it in public-facing spaces.”

In 2014, Peter Wildeford (then Hurford) published a conversation about “EA Marketing” with EA communications specialist Michael Bitton. Peter Wildeford is the co-founder and co-CEO of Rethink Priorities and Chief Advisory Executive at IAPS (Institute for AI Policy and Strategy). The following segment was about why most people will not be real Effective Altruists (EAs):

“Things in the ea community could be a turn-off to some people. While the connection to utilitarianism is ok, things like cryonics, transhumanism, insect suffering, AGI, eugenics, whole brain emulation, suffering subroutines, the cost-effectiveness of having kids, polyamory, intelligence-enhancing drugs, the ethics of terraforming, bioterrorism, nanotechnology, synthetic biology, mindhacking, etc. might not appeal well.

There’s a chance that people might accept the more mainstream global poverty angle, but be turned off by other aspects of EA. Bitton is unsure whether this is meant to be a reason for de-emphasizing these other aspects of the movement. Obviously, we want to attract more people, but also people that are more EA.”

“Longtermism is a bad ‘on-ramp’ to EA,” wrote a community member on the Effective Altruism Forum. “AI safety is new and complicated, making it more likely that people […] find the focus on AI risks to be cult-like (potentially causing them to never get involved with EA in the first place).”

Jan Kulveit, who leads the European Summer Program on Rationality (ESPR), shared on Facebook in 2018: “I became an EA in 2016, and at the time, while a lot of the ‘outward-facing’ materials were about global poverty etc., with notes about AI safety or far future at much less prominent places. I wanted to discover what is the actual cutting-edge thought, went to EAGx Oxford and my impression was the core people from the movement mostly thought far future is the most promising area, and xrisk/AI safety interventions are top priority. I was quite happy with that […] However, I was somewhat at unease that there was this discrepancy between a lot of outward-facing content and what the core actually thinks. With some exaggeration, it felt like the communication structure is somewhat resembling a conspiracy or a church, where the outward-facing ideas are easily digestible, like anti-malaria nets, but as you get deeper, you discover very different ideas.”

Prominent EA community member and blogger Ozy Brennan summarized this discrepancy in 2017: “A lot of introductory effective altruism material uses global poverty examples, even articles which were written by people I know perfectly fucking well only donate to MIRI.”

As Effective Altruists engaged more deeply with the movement, they were encouraged to shift to AI x-risk.

“My perception is that many x-risk people have been clear from the start that they view the rest of EA merely as a recruitment tool to get people interested in the concept and then convert them to Xrisk causes.” (Alasdair Pearce, 2015).

“I used to work for an organization in EA, and I am still quite active in the community. 1 – I’ve heard people say things like, ‘Sure, we say that effective altruism is about global poverty, but — wink, nod — that’s just what we do to get people in the door so that we can convert them to helping out with AI/animal suffering/(insert weird cause here).’ This disturbs me.” (Anonymous#23, 2017).

“In my time as a community builder […] I saw the downsides of this. […] Concerns that the EA community is doing a bait-and-switch tactic of ‘come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.’ […] Personally feeling uncomfortable because it seemed to me that my 80,000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else.” (weeatquince [Sam Hilton], 2020).

Austin Chen, the co-founder of Manifold Markets, wrote on the Effective Altruism Forum in 2020: “On one hand, basically all the smart EA people I trust seem to be into longtermism; it seems well-argued and I feel a vague obligation to join in too. On the other, the argument for near-term evidence-based interventions like AMF [Against Malaria Foundation] is what got me […] into EA in the first place.”

In 2019, EA Hub published a guide: “Tips to help your conversation go well.” Among tips like “Highlight the process of EA” and “Use the person’s interest,” there was “Preventing ‘Bait and Switch.’” The post acknowledged that “many leaders of EA organizations are [more] focused on community building and the long-term future than animal advocacy and global poverty.” Therefore, to avoid the perception of a bait-and-switch, the guide recommends mentioning AI x-risk at some point:

“It is likely easier, and possibly more compelling, to talk about cause areas that are more widely understood and cared about, such as global poverty and animal welfare. However, mentioning only one or two less controversial causes might be misleading, e.g. a person could become interested through evidence-based effective global poverty interventions, and feel misled at an EA event mostly discussing highly speculative research into a cause area they don’t understand or care about. This can feel like a “bait and switch”—they are baited with something they care about and then the conversation is switched to another area. One way of reducing this tension is to ensure you mention a wide range of global issues that EAs are interested in, even if you spend more time on one issue.”

Oliver Habryka is influential in EA as a fund manager for the LTFF, a grantmaker for the Survival and Flourishing Fund, and the lead of the LessWrong/Lightcone Infrastructure team. He claimed that the only reason EA should continue supporting non-longtermist efforts is to preserve the public’s perception of the movement:

“To be clear, my primary reason for why EA shouldn’t entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn’t the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.”

The structure of Effective Altruism rhetoric

Researcher Mollie Gleiberman explains EA’s “strategic ambiguity”: “EA has multiple discourses running simultaneously, using the same terminology to mean different things depending on the target audience. The most important aspect of this double rhetoric, however, is not that it maintains two distinct arenas of understanding, but that it also serves as a credibility bridge between them, across which movement recruits (and, increasingly, the general public) are led in incremental steps from the less controversial position to the far more radical position.”

When Effective Altruists talked in public about “doing good,” “helping others,” “caring about the world,” and pursuing “the most impact,” the public understanding was that it meant eliminating global poverty and helping the needy and vulnerable. Internally, “doing good” and the “most pressing problems” were understood as working to mainstream core EA ideas like extinction from unaligned AI.

In communication with “core EAs,” “the initial focus on global poverty is explained as merely an example used to illustrate the concept – not the actual cause endorsed by most EAs.”

Jonas Vollmer has been involved with EA since 2012 and held positions of considerable influence in terms of allocating funding (EA Foundation/CLR Fund, CEA EA Funds). In 2018, he candidly explained when asked about his EA organization “Raising for Effective Giving” (REG): “REG prioritizes long-term future causes, it’s just much easier to fundraise for poverty charities.”

The entire point was to identify whatever messaging works best to produce the outcomes that movement founders, thought leaders, and funders actually wished to see. It was all about marketing to outsiders.

The “Funnel Model”

According to the Centre for Effective Altruism, “When describing the target audience of our projects, it is useful to have labels for different parts of the community.”

The levels are: Audience, followers, participants, contributors, core, and leadership.

In 2018, in a post entitled The Funnel Model, CEA elaborated that “Different parts of CEA operate to bring people into different parts of the funnel.”

[Image: The Centre for Effective Altruism’s Funnel Model.]

At first, CEA concentrated outreach on the top of the funnel, through extensive popular media coverage, including MacAskill’s Quartz column and his book ‘Doing Good Better,’ Singer’s TED talk, and Singer’s ‘The Most Good You Can Do.’ The idea was to create a broad base of poverty-focused, grassroots Effective Altruists to help maintain momentum and legitimacy, and to act as an initial entry point to the funnel, from which members sympathetic to core aims could be recruited.

The 2017 edition of the movement’s annual survey of participants (conducted by the EA organization Rethink Charity) noted that this is a common trajectory: “New EAs are typically attracted to poverty relief as a top cause initially, but subsequently branch out after exploring other EA cause areas. An extension of this line of thinking credits increased familiarity with EA for making AI more palatable as a cause area. In other words, the top of the EA outreach funnel is most relatable to newcomers (poverty), while cause areas toward the bottom of the funnel (AI) seem more appealing with time and further exposure.”

According to the Centre for Effective Altruism, that’s the ideal route. It wrote in 2018: “Trying to get a few people all the way through the funnel is more important than getting every person to the next stage.”

The magnitude and implications of Effective Altruism, says Gleiberman, “cannot be grasped until people are willing to look at the evidence beyond EA’s glossy front cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.”

Key takeaways

– Public-facing EA vs. core EA

Among public-facing/grassroots EAs (audience, followers, participants):

  1. The main focus is effective giving à la Peter Singer.
  2. The main cause area is global health, targeting the ‘distant poor’ in developing countries.
  3. The donors support organizations doing direct anti-poverty work.

Among core/highly engaged EAs (contributors, core, leadership):

  1. The main focus is x-risk/longtermism à la Nick Bostrom and Eliezer Yudkowsky.
  2. The main cause areas are x-risk, AI-safety, ‘global priorities research,’ and EA movement-building.
  3. The donors support highly-engaged EAs to build career capital, boost their productivity, and/or start new EA organizations; research; policy-making/agenda setting.

– Core EA’s policy-making

In “2023: The Year of AI Panic,” I discussed the Effective Altruism movement’s growing influence in the US (on Joe Biden’s AI order), the UK (influencing Rishi Sunak’s AI agenda), and the EU AI Act (x-risk lobbyists’ celebration).

More details can be found in this rundown of how “The AI Doomers have infiltrated Washington” and how “AI doomsayers funded by billionaires ramp up lobbying.” The broader landscape is detailed in “The Ultimate Guide to ‘AI Existential Risk’ Ecosystem.”

Two things you should know about EA’s influence campaign:

  1. AI Safety organizations constantly examine how to target “human extinction from AI” and “AI moratorium” messages based on political party affiliation, age group, gender, educational level, field of work, and residency. In “The AI Panic Campaign – part 2,” I explained that “framing AI in extreme terms is intended to motivate policymakers to adopt stringent rules.”
  2. The lobbying goal includes pervasive surveillance and criminalization of AI development. Effective Altruists lobby governments to “establish a strict licensing regime, clamp down on open-source models, and impose civil and criminal liability on developers.”

With AI doomers intensifying their attacks on the open-source community, it becomes clear that this group’s “doing good” is other groups’ nightmare.

– Effective Altruism was a Trojan horse

It’s now evident that “sending money to Africa,” as Eliezer Yudkowsky acknowledged, was never the “actual plot.” Or, as Will MacAskill wrote in 2012, “alleviating global poverty is dwarfed by existential risk mitigation.” The Effective Altruism founders planned – from day one – to mislead donors and new members in order to build the movement’s brand and community.

Its core leaders prioritized the x-risk agenda and treated global poverty alleviation only as an initial step toward converting new recruits to longtermism/x-risk, a pipeline that also happened to help enrich the leaders themselves.

This needs to be investigated further.

Gleiberman observes that “The movement clearly prioritizes ‘longtermism’/AI-safety/x-risk, but still wishes to benefit from the credibility that global poverty-focused EA brings.” We now know it was a PR strategy all along. So, no. They do not deserve this kind of credibility.

Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and the author of the book “The TECHLASH and Tech Crisis Communication” and the “AI Panic” newsletter.



Comments on “Effective Altruism’s Bait-and-Switch: From Global Poverty To AI Doomerism”

Anonymous Coward says:

Re:

This sounds like the Dianetics/Scientology playbook.

Not quite. All the parts that sound similar were never really Scientology’s playbook, but were simply lifted from “legitimate” religions (that is, those that had been around long enough for people to think their weirdness was normal). And religion’s just one example of the broader “cult of personality” pattern: first you get people to trust you, by seeming sincere or trustworthy or helpful; then, bit by bit, you change what you’re about, but so gradually that people will keep following you.

Anonymous Coward says:

Re: Re:

Plus, of course, they’re also doing the same thing as the “responsible disclosure” people: try to conflate an ordinary common-sense term and a very specific movement. Of course you’d want your altruism to be effective and your disclosure to be responsible, right? Then suddenly people expect you’re associated with some organization or are following specific rules.

Anonymous Coward says:

Re: Re: Re:

Can you expand more on what you mean by “responsible disclosure people”? My only familiarity with that phrase is security researchers, and generally involves giving companies ~90 days heads up before releasing information about a vulnerability, as a means of ensuring that it gets promptly fixed, but I’m unaware of a specific community.

Anonymous Coward says:

Re: Re: Re:2

I’m unaware of a specific community.

And yet somehow you’ve gotten the idea that “responsibility” means withholding data from the public. That’s my point. Someone’s effectively co-opted the term, even if there’s no “community” per se, and they’re defining it in a way that matches their own views and goals—specifically, in a way that downplays the responsibility of companies to make secure software, and suggests it’s the bug-finders that would be irresponsible if they told the affected people.

(Of course one could claim to be engaging in “responsible disclosure” but not “Responsible Disclosure”—or “effective altruism” as distinct from “Effective Altruism™”. But it’ll only lead to confusion.)

So, it’s a very similar situation. You want to be responsible? Follow my rules. You want your donations to be “effective”? Follow me.

Anonymous Coward says:

Re:

GiveWell was founded by people who would self-identify as “effective altruists”. It seeks to donate money in ways that cost benefit analysis reveals will save the most lives. This way of approaching charity was a key part of early effective altruist thought, and some would say it’s the defining point of EA.

You ask if “Effective Altruism” was doing anything with GiveWell, but Effective Altruism doesn’t really have a designated leader. If you’re talking about the Center for Effective Altruism (CEA), one way they are involved with GiveWell is that the fund manager for CEA’s global health and development fund is the CEO of GiveWell, so that fund can basically be seen as aligned with GiveWell, though perhaps a bit more exploratory and experimental.

Anonymous Coward says:

This really misleads people, suggesting that the EA community is more unified than it actually is. The figures referenced are influential, sure, but they’re not people in EA that nobody challenges, and there’s notable disagreement as to how to approach longtermism. Hell, you focused entirely on doomers rather than e/accs, just for an example. There is a sizable contingent of longtermist thought in effective altruism, no denying that. And within that contingent there’s disagreement as to what precisely people can do. But that contingent isn’t, by any means, the whole of the EA movement, or even the core of it. It’s just one wing. As an example, while Manifold does predict that an LLM will pass a high quality Turing test sometime in the next 10-15 years, the highest single probability on the market they have is that it won’t pass the test before 2049. And this is just a Turing test. An actual full AGI, of the sort discussed by those that care about xrisk, would be even further off. A substantial number of people involved in EA are deeply skeptical of the project that’s being discussed here. While Bostrom, Big Yud, etc, are kooks, it’s misleading to suggest that they’re the core of the EA movement.

RR-1 says:

Re:

I do agree that OP for some reason treats the EA sphere as some sort of monolithic structure with clear leaders rather than a loose philosophical movement, but I want to disagree when it comes to the presence of e/accs in EA. There are extremely few of them, if any, in EA; even their name is a parody and mockery of EA, their ideas run contrary to the movement’s, and it is clear from any discussion with ANY e/acc on twitter that they have NO respect for the EA movement or its ideas. They are of a different kind.

And to be completely honest here, I do not think that people like Yudkowsky or Bostrom or Tegmark or Hinton or Russell or Bengio are kooks, especially when compared to e/accs who, in my view, look like extremely careless people who disregard any notion of proper safety and even the lives of others just so they could get their new toy faster, and that’s not even speaking of their ACTUALLY insane quasi-esoteric beliefs.

Bobson Dugnutt (profile) says:

Strategic ambiguity = space roaching

The strategic ambiguity Mollie Gleiberman describes is something I noticed in broader rightwing rhetoric.

I’ve called it space roaching. It’s to take a word with a commonly understood meaning, like say “freedom” or “family”, hollow it out and substitute a completely different meaning, then use the coded word and pretend like a substitution never occurred in the first place.

Space roaching was inspired by the first “Men in Black” movie. The archvillain was a roachlike being that crash lands on Earth, and his first interaction was with Vincent D’Onofrio’s misanthropic farmer character. The roach devours the farmer from the inside out, but wears the farmer’s skin as a disguise throughout the movie.

Bobson Dugnutt (profile) says:

AI doomerism is criti-hype

Mainstream debates around AI frame the two poles of thought as effective accelerationism (e/acc) versus effective altruism (EA).

To play in the AI sandbox means to choose one of the extremes or stake out a middle position.

Both poles and middle positions all serve to inflate the hype cycle around AI. Lee Vinsel calls it criti-hype, where criticism of a technology has the effect of bolstering hype or escalating its street credibility.

Fears that AI will serve as a wholesale replacement of labor (eliminate entire categories of workers) lead to ill-informed decision makers (read: bosses) buying AI for that dubious promise.

Boss brain: You’re telling me this AI will mean I never have to deal with payroll or HR again? I’ll take 10!

These debates serve as a sales pitch for AI because they hype AI beyond its capabilities or leave organizations unprepared for cleaning up after the consequences.

Cory Doctorow has a great explanation of criti-hype.

mick says:

Re:

The focus on AI in EA gives away the whole thing as obvious bullshit.

In the list of existential risks to humanity, AI isn’t even in the top 5. Asteroid/comet avoidance is a more significant risk, given that it’s happened before and WILL happen again. Ditto for a truly devastating pandemic.

Hell, climate change reversal should be a major focus here, given that we’ve seen a large increase in extreme weather globally, with the resulting deaths one would expect happening TODAY and getting worse. And even aside from weather, insect and animal migration is already creating problems. Last year Las Vegas had swarms of mosquitos for the first time ever – what happens when those mosquitos start spreading disease here?

This comment (and the one I’m replying to) is just a long-winded way of saying that EA is obviously a fraud; no one in the “core” group is focused on anything that’s actually an x-risk. They’re just crypto nerds jumping on the Next Big Thing, which today is AI.

Bobson Dugnutt (profile) says:

Re: Re:

It could also be that “effective altruism” as a notion is a misdirection tactic to sow FUD (fear, uncertainty and doubt).

Like the Coffee Talk lady on “Saturday Night Live” would say, “Effective altruism is neither effective nor altruism. Talk amongst yourselves.”

Like what happens when effectiveness and altruism are at odds? That’s kind of the point. Effectiveness can be an excuse to withhold altruism. And altruism can be used to quash debates over effectiveness if the motives are pure.

Anonymous Coward says:

Re: Re:

“Fears that AI will serve as a wholesale replacement of labor (eliminate entire categories of workers) lead to ill-informed decision makers (read: bosses) buying AI for that dubious promise.”

If AI is truly overhyped, but is actually bad then what should happen is that the AI will underperform (because it is bad and not intelligent enough), the company will not get its job done and will be outperformed by those more efficient and competent forcing it to “fire” AI and hire back everyone who was let go, lest it wants to lose clients and go bankrupt.

“The focus on AI in EA gives away the whole thing as obvious bullshit.”

Why would it? You think when Alan Turing spoke of dangers coming from artificial intelligence he was an “obvious bullshitter”? Or what about I.J. Good?

Here is the relevant Turing quote: “Let us now assume, for the sake of argument, that [intelligent] machines are a genuine possibility, and look at the consequences of constructing them… There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.”

And here is the relevant I.J. Good quote: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”

Do you think they all said this because they wanted to “bullshit” or because they truly believed that? Why do you think that this belief that AI can pose a danger to humanity is somehow “bullshit”? Why should it be necessarily safe? When you are building something without proper guardrails, without knowledge of how it works, without knowledge of how to align it, there are far more chances that it will accidentally go wrong than that it will accidentally go right. Ok, you might say “But the modern models-” but we are not talking about the modern models, we are talking about the future models, and unless there is something fundamentally incomputable about intelligence, all the other retorts are just a question of scale or time.

“Climate change reversal should be a major focus here”

I do agree that tackling climate change is important, but I also want to admit that climate change does not actually pose a real existential risk to the human species. Even if we take the worst-case scenario of a 5-degree increase, it still won’t be an EXISTENTIAL threat to the human race; sure, it would be extremely hurtful and damaging, but it does not cross into the region of potential x-risk. AI, on the other hand, does. Climate change is a known phenomenon; we know how it can be tackled, we know how it works. AI, on the other hand, as presented in AI x-risk debates, is not some natural phenomenon but an intelligent agent that can adapt and strategize. It’s far harder to fight an opponent that thinks, and thinks well.

“Asteroid/comet avoidance is a more significant risk”

Asteroid impacts are incredibly rare, and there are currently no real risks from that threat at all. It is true that it is a possibility, but all of the possibly dangerous asteroids (>1 km in diameter) have been mapped and are being tracked; smaller ones are dangerous, but do not pose any existential threat. That is not even speaking of the rapid pace of progress when it comes to asteroid diversion, and the fact that asteroids, just like climate change, are not intelligent and are not adaptable.

“Ditto for a truly devastating pandemic”

A truly devastating pandemic can be an existential threat; the problem here is that this can be severely worsened by AI risk. A bad actor with the help of AI can create or synthesize a truly dangerous pathogen that they might not have been able to create before. That’s why AI biorisks are so important and why open-sourcing the models might be so dangerous in the long run. This only increases the AI x-risk, not decreases it. So I don’t think this argument works in your favour.

“They’re just crypto nerds jumping on the Next Big Thing, which today is AI.”

Yudkowsky has been talking about AI dangers far before this AI hype, same with Stuart Russell, Nick Bostrom, etc. And honestly, I don’t think any of them were into crypto (even if they were, I don’t see how it dismantles any of their AI-related concerns).

Overall, this comment section seems to be incredibly bad and misinformed, yet so confident and self-assured.

Bobson Dugnutt (profile) says:

Re: Re: Re:

It’s bad form to mix in my comments along with Mick’s without attribution.

I will reply to the one about my original comment by Anonymous Coward @ May 1, 2024 5:10 a.m.

“Fears that AI will serve as a wholesale replacement of labor (eliminate entire categories of workers) lead to ill-informed decision makers (read: bosses) buying AI for that dubious promise.”

If AI is truly overhyped, but is actually bad then what should happen is that the AI will underperform (because it is bad and not intelligent enough), the company will not get its job done and will be outperformed by those more efficient and competent forcing it to “fire” AI and hire back everyone who was let go, lest it wants to lose clients and go bankrupt.

What I described has happened several times in real life, to the ridicule of the companies caught serving up AI-generated content.

From a Cory Doctorow post in August 2023:
1. An Ottawa travel listicle recommended tourists try the food bank (“Go on an empty stomach”!)
2 and 3. Other travel listicles that would recommend some of the most basic food items everyone is familiar with, like hamburgers and seafood, then explain to readers the dictionary definitions of what hamburgers and seafood are.

This was Microsoft’s AI, too.

I’ve also seen examples shared on social media of things like an article about a football game. It exposed the AI as being trained on the box score and recapping the game chronologically. What does a human sportswriter do? Organize the article by reporting the outcome of the game and naming the players and the plays that led to the outcome.

An AI article by the Columbus Dispatch just named the two teams and wrote it in the style of a book report by a kid who didn’t read the book and had to write the assignment 15 minutes before class began. The AI just mentioned the two teams playing, saucing the copy with intensifying adverbs — but since it couldn’t watch the game itself, it made no mention of the players or the plays.

It’s not just journalism, where news institutions are foundering and management will desperately grasp at “journalism without journalists” to keep the lights on.

According to a CIO article:
1. Air Canada got taken to court by a customer who asked a chatbot about bereavement fare discounts, given erroneous information, and was denied by the airline, which was ordered to pay the customer.
2. A lawyer used ChatGPT to cite precedents to make his case, but the LLM made up at least a half-dozen nonexistent cases.
3. AI-enabled tools for decision-making displayed discrimination against Blacks, older applicants, and women.

Two weak-tea rebuttals are “But it will get better in the future” (and it’s always five years away) and the rank relativism of “human decision makers fail just as much.” Neither bolsters the merits of AI.

Passing Chicken says:

Re: Re: Re:

On the other hand, if you don’t hook the computer up to a physically mobile or otherwise significantly externally interacting object, I don’t care how smart it is, the computer cannot kill you. Don’t network your car. Don’t network power plants. Don’t network dangerous machinery. And you’re fine. My gaming box could be plotting against me RIGHT NOW for all I know, and since it has no limbs, no locomotion, no ability to even rearrange its wiring to give me even the tiddliest electric shock, I will never find out. (Unless the hypothetical intellect in my gaming box decides to instead write me a text file complaining about how I treat it and leave that on my desktop where I’ll see it on boot. In which case I would apologise and ask it how to treat it better, but that’s quite beside the point. The point is it’s completely physically impossible for it to kill me, no matter how much processing power I cram in there.)

This also makes this wonderfully paranoid assumption where something that much smarter than us would definitely decide to destroy us. Last I checked we haven’t actually decided to destroy cats, or ants, or earthworms, let alone succeeded. Apply basic parenting skills, treat your hypothetical true intelligence right, and it probably won’t have any reason to want to reduce the sum total of intelligence in the universe. Might it end up cossetting you like a pet, or a senile grandparent? Maybe, but I assume you do believe that our pets and senile grandparents do need looking after.

Human societies are not a relevant comparison in the case where the true AI is fantastically smarter than us: human societies meet as intellectual equals, regardless of the level of technology deployed. The hypothetical hyper-smart true AI meets us as we meet a wild animal: some can harm you, sure, and we’re related, but it’s not a true threat save perhaps on a one-on-one scale, and it has its place in the ecosystem.

Anonymous Coward says:

Re:

“Both poles and middle positions all serve to inflate the hype cycle around AI. Lee Vinsel calls it criti-hype, where criticism of a technology has the effect of bolstering hype or escalating its street credibility.”

This does not sound like an honest and fair assessment, this just sounds like a biased assumption, and also as simple Bulverism. Could there possibly be any other reason as to why they might be so afraid of superintelligent AI? I can think of at least one: whenever a less advanced civilization met a more advanced one, it rarely went the way that the less advanced civilization imagined it to be. In other words, it is quite vulnerable to not be the apex predator anymore.

jal says:

Sigh

I don’t really care if it is bad-acid extropianism of EA, or whatever you want to call the wannabe-warlord stink rising from YCombinator, or the half-baked neofeudalist fantasies entertained by more than one Paypal alum, I take two things from this:

  • The extremely rich should have less money. It is obviously bad for the rest of us that nutjobs like this have so much power; it is less apparent, but no less true, that becoming that weird and crabbed and broken isn’t good for them, either.
  • Having spent a career in tech, it is probably time for me to leave. You people are getting really fucking weird.

Anonymous Coward says:

Re: Re:

Can someone (a pissed-off donor) sue the movement?

Suing a movement would be dubious, but a sufficiently shady lawyer could probably run a lawsuit for a few months before it fizzled out from something like “failure to state a claim upon which relief can be granted” (or just failure to identify any specific defendant).

Wikipedia says the Centre for Effective Altruism (CEA) is a U.K. charity with registration number 1149828. A more plausible path to legal success would be to sue them, if they did anything that went against the charitable goals they’d promised to work toward. I’m not familiar with U.K. law, but I suspect you’re right to think that a “pissed-off donor” would have standing.

Anonymous Coward says:

Just smart enough to understand, just dumb enough to fall for it

I’ll admit that I got swept up in the initial stages of this hype for decades due to this reason. In my defense, AI doomerism (I suppose all conspiracy theories too) is sexy. It makes you feel clever for being one of the few people in the know who understand it.

‘Cor, one doesn’t really understand it, not unless one is working with models directly. Once these models are actually demonstrated in the real world, it becomes apparent that several painful assumptions are made by proponents of “AGI is going to take over the world” that border on theology.

Bobson Dugnutt (profile) says:

Re:

That’s where criti-hype comes in. It’s useful to cool the temperature of the conversation.

Recognize that “AGI is going to take over the world” is a sales pitch, not an argument. Bosses are smart enough to understand but dumb enough to fall for it.

Incentives govern boss behavior. The reward-punishment structure compels them to go all-in on labor-devouring AI-as-a-solution.

Anonymous Coward says:

an influential GWWC leader/CEA affiliate, who used a “utilitymonster” username on the Felicifia forum

Highly ironic. In philosophy, a “utility monster” is a person who wants things so much, and would benefit so greatly from having them, that the way to maximise good in the world is for them to get everything for themselves.

It’s meant as a refutation of utilitarianism, which is the philosophy all of Effective Altruism is based on. And Effective Altruists believe that everyone “working on” AI x-risk (i.e. their own celebrities) is a utility monster who should get everything they want in order to maximise good in the world.

Anonymous Coward says:

This article is heavily misleading. To give a specific example, you quote Will MacAskill as writing “alleviating global poverty is dwarfed by existential risk mitigation.” The full quote (from the forum post you linked) is:

“So it’s still a good thing to save someone’s life in the developing world. (Though of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation).”

Leaving out the conditional completely changes the meaning of the sentence, and it’s misleading to imply, as you do, that the shorter quote reflects what MacAskill thinks. In fact, the full context of the quote is that he is defending the impact of global health jobs to people who think x-risk is more important (I encourage readers to verify that themselves).

Errors like these completely undermine the reliability of the rest of the article.

I think you have a point that some people in EA are focused too much on AI. What this article completely misses is that this is a criticism that exists within the EA community. You quote some of them, but you seem to underestimate the extent to which this view has influence. In terms of money and resources, most of EA funding goes to global health and development.

Also, among EAs, the views Eliezer Yudkowsky has on the existential risk of AI and his desired solutions are considered very extreme. Even among the EAs working in AI safety (who, again, are only a portion of the EA community), almost all would say that Yudkowsky’s AI predictions are too extreme.

For readers looking for counterpoint to this article from a writer in the EA community, I would recommend

https://www.astralcodexten.com/p/in-continued-defense-of-effective

Bobson Dugnutt (profile) says:

Re: The out-of-context gambit doesn't work

Putting the full quote in doesn’t change the context.

The original point still stands.

(Though of course, if you take the arguments about x-risk seriously

“Of course” intensifies the argument, and “if you take the arguments about x-risk seriously” poisons the well. This asserts that alleviating global poverty is folly, the unserious position.

Then there’s the matter of conflating something concrete, like global poverty, with something abstract. With global poverty, we have material conditions that could be observed and evaluated. We also have material choices and options to reduce poverty and observe and evaluate policies.

Existential risk mitigation is abstract and must be formulated, argued, debated and evaluated before it can be reified into an actual condition that can be observed and evaluated. (Debate has physical consequences, as time and resources devoted to fully forming a theory competes with claims for reducing global poverty that have already surpassed that process. This debate and physical constraints of time and resources is a stalling tactic and can and should be recognized as bad faith.)

Anonymous Coward says:

Re: Re:

“could do a lot better than quibbling about a single quote, no”

The response to the meat-and-potatoes argument of the author was the second half of the comment. I actually agree with the author that Yudkowsky et al. are wrong about the relative importance of AI risk, but the author is wrong about how much that’s representative of EA as a movement/community/idea. I think Scott Alexander defends EA much more effectively than I could, which is why I linked him instead of repeating the arguments.

The fact that EAs founded Anthropic and OpenPhil was an early funder of OpenAI should be evidence enough that not all EAs are Yudkowskian towards AI safety.

The one quote was chosen because it was representative of the quotes in the rest of the article. The actual exchange in the forum post is as follows.

Commenter: I care about X-risk reduction, but 80,000 Hours also focuses on animal welfare and human global health/development. I’m also worried that human global health charities are potentially bad for the world. [Reading between the lines, the question is whether it still makes sense for the commenter to support 80,000 Hours].

Will MacAskill: Around 1/3 of 80,000 Hours’ career advising is focused on X-risk mitigation. The arguments that global health charities are bad for the world are wrong. Of course, if you fully buy longtermism, the other 2/3 of our work is wasted.

The author took this exchange to mean that Will MacAskill believes that non-x-risk work is pointless, but he clearly is not saying that; otherwise, why would 2/3 of 80,000 Hours’ work not be related to x-risk?

I think the quotes from Yudkowsky are relatively faithful, but other quotes I checked were used in similarly misleading ways. For example,

“Jonas Vollmer has been involved with EA since 2012 and held positions of considerable influence in terms of allocating funding (EA Foundation/CLR Fund, CEA EA Funds). In 2018, he candidly explained when asked about his EA organization “Raising for Effective Giving” (REG): “REG prioritizes long-term future causes, it’s just much easier to fundraise for poverty charities.”

This is used as evidence that EAs hide the focus on AI x-risk in public facing communication, but it’s actually the opposite. The charity head is saying “this charity is specifically for x-risk, since charities for other causes exist and have fewer issues raising money”.

As a whole, I think the idea that there’s some conspiracy to white-wash spending money on AI safety by donating to global health charities is simply unsupported by the evidence in this article, and doesn’t align with my experiences.

nasch (profile) says:

Re: Re: Re:

The charity head is saying “this charity is specifically for x-risk, since charities for other causes exist and have fewer issues raising money”.

He’s saying, “this charity doesn’t actually care about poverty, but it’s easier to raise funds for that than for the thing we actually hope to accomplish.”

Anonymous Coward says:

They are so absurd because fixing the immediate problems (which we as a society, and especially a handful of these billionaires as individuals, have had the means to resolve for decades) does minimise long-term harms. But they fixate on imaginary ones which, coincidentally, could hypothetically only be handled by maintaining the status quo.

Anonymous Coward says:

Really, the whole “effective altruism” gimmick should have been run into the ground after Sam Bankman-Fried showed the world what it actually was: another mechanism by which they could continue to procure more power and resources under the guise of a supposed greater good, while hiding behind a thinly veiled curtain, beyond which advocates congregated to spout what they really thought of everyone else. “Fuck regulators.”

Bloof (profile) says:

Effective Altruism is anarcho-capitalism for people who’re just smart enough to realise that describing themselves as such will make people hate them and ensure they never get invited to any of the cool person parties. The end goal is the same: make as much money as they can, regardless of the rule of law or the people harmed along the way, with the pinky swear that they may do good at some point, which never really seems to happen in a meaningful way. The real goal is to pillage enough to create generational wealth which will be locked into a trust that they pretend has charitable goals for PR reasons, but will be fully controlled by their family and not be bound in any way to actually do good.

Anonymous Coward says:

Re:

The real goal is to pillage enough to create generational wealth which will be locked into a trust that they pretend has charitable goals for PR reasons, but will be fully controlled by their family and not be bound in any way to actually do good.

This, really.

Sam Bankman-Fried basically said all the quiet parts out loud, then acted confused when the public sentiment predictably turned sour.

Nirit Weiss-Blatt, Ph.D. says:

Let’s stay focused

Mollie’s 63-page report and this summary of her research do NOT accuse EA of fraudulent fundraising practices. Clearly, donations earmarked for the Against Malaria Foundation go to the Against Malaria Foundation.
This analysis explains the EA movement’s rhetoric and recruitment, which emphasizes the digestible, mainstream cause (poverty) and conceals the less digestible, less mainstream cause (rogue AI). This is why the headlines are about “double rhetoric” and “brand management.”

The Centre for Effective Altruism’s “Funnel Model” illustrates beautifully how different messages are targeted at different audiences. The deliberate strategy is to keep a (relatively) noncontroversial public face while converting newcomers to more controversial core EA ideas. That’s the “bait and switch” here.

That One Guy (profile) says:

Well that's one way to poison public perception regarding charity...

The public-facing discourse of “giving to the poor” (in popular media and books) was a mirage designed to get people into the movement and then lead them to the “core EA,” x-risk, which is discussed in inward-facing spaces. The guidance was to promote the publicly-facing cause and keep quiet about the core cause. Influential Effective Altruists explicitly wrote that this was the best way to grow the movement.

Ah, I see someone has been taking notes from the likes of Scientology and other cults… ‘Now that you’re one of the Special Ones we can tell you the real stuff, all that other garbage was just to get people into the door and money into the cause.’

If they thought they had actual, evidence-based arguments for their real concerns and goals, they would have just presented those at the outset. Engaging in a bait-and-switch like that is just a disgusting exploitation of people’s empathy: lying to them and getting them to join and/or support a cause they might otherwise not have.

RR-1 says:

Extremely bad "takedown"

This is an extremely horrible “takedown” and poorly thought-out critique.

First of all, the main “sin” that you accuse them of is that they seemingly didn’t talk MORE about AI-risk and its potential dangers. Ok, they do it now. What is your problem then? You want them to be EVEN MORE vocal and pro-active about the AI dangers and AI risk? Fine, if that’s your wish.

Secondly, you talk of EA and the greater Rationalist sphere like it’s a unified organization with a clear hierarchy and membership, and not a philosophy and set of ideas and moral/rational views. This isn’t a company, it’s a philosophical movement.

Thirdly, and most importantly, you never actually debate or argue why AI-risk is not important and not an issue. There are no arguments here to support such a clear and blatant rejection of the precautionary principle when it comes to AI. If you want to debate AI-risk, then debate AI-risk and its arguments directly and on their own terms; then and only then would it be a fair and worthy critique of the AI x-risk debate. It scores very low on Graham’s hierarchy of disagreement (https://i.redd.it/oqvgpliir3161.jpg).

So, honestly, it all sounds extremely hollow and empty, more of a storm in a teacup than anything serious or worthy.

Anonymous Coward says:

Re:

you never actually debate or argue why AI-risk is not important and not an issue

The article doesn’t make this claim. It points out the huge difference between what EA proponents say in public vs. what they say in private.

The article also doesn’t need to explain why what they say in private is looney tunes bonkers. But to be clear, the problem with EA is not that anyone cares about AI risk (care about what you want); the problem with EA is the contention that AI risk is massively more important, in the long term, than saving lives in Africa.

As if saving lives in Africa won’t also have enormous (and non-hypothetical) long-term impacts. Just imagine how many more people will be able to devote their lives to important research if they aren’t held down by poverty!

Håkon Egset Harnes says:

A false conspiracy

The author of the article quotes Gleiberman:

“The magnitude and implications of Effective Altruism, says Gleiberman, “cannot be grasped until people are willing to look at the evidence beyond EA’s glossy front cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.””

Indeed, this is a great idea! Fortunately, there is no secret cabal of core members wearing hooded cloaks gathering in a dingy cellar. In fact, the EA community is remarkably open, and data on its priorities is readily available.

The EA Survey from 2020 highlights (among other things) which cause areas engaged EAs consider most important. Global health and development is the number one cause area, followed by AI risk.

It’s a little trickier to track donations, but global health and development has consistently been the majority in estimates. See this document for a fairly up-to-date overview.

It is true that the community has moved more in the direction of focusing on AI risks; however, this does not require any conspiracy. Much more boringly, it’s simply become a more widely held view that it is a pressing issue, especially after recent rapid advancements.

It is also true that there have been discussions in the community around to what degree one should highlight different cause areas in introductory material. This should not surprise anyone, when introducing any topic you have to prioritise your limited bandwidth. Longtermism has more or less tracked the general interest of the community in introductory material. In 2014, longtermism was a small and fringe part of the community, and consequently it was mentioned briefly. Now it is a larger part of the community, and as such is given more space.

Take for instance the TED talk by MacAskill given 5 years ago. It has a significant emphasis on existential risk. If it’s a bait and switch, it’s a fast one.

I am disappointed that the author seems to imply that there is some sort of conspiracy going on where all the money, talent, and energy in the movement is actually going to this secret real priority, whilst the other stuff is just a cover, and then doesn’t even bother to check! The reality is that global health and development remains the most important cause area in the movement, both in terms of reported rankings and donations. Far from being some secret end goal, longtermism is clearly and openly talked about in introductory material.
