Last week I noted how Elon Musk saw fit to inject himself into the middle of the Helene hurricane disaster by falsely claiming that hurricane victims died because the FCC refused to give Starlink a billion dollars in subsidies. I explained at length why that claim was grotesque and incorrect, in part because the subsidies in question weren’t even slated to arrive until next year.
Most of the press didn’t bother to dissect Musk’s gross and false claim about subsidies, instead portraying him in coverage as single-handedly saving Helene victims. First, by shipping some satellite dishes to the region; and second, by offering free Starlink service:
Another cornerstone of Starlink’s efforts to help locals involved promising “free” Starlink service:
But when locals and some news outlets ran down the claims, they discovered that the free service wasn’t free at all. In reality, users who signed up for service were still required to pay for hardware, resulting in a $400 charge:
“Try to sign up for the ostensibly ‘free’ service in an area Starlink has designated as a Helene disaster zone, and surprise: You still have to pay for the terminal (normally $350, but reportedly discounted to $299 for disaster relief, though that’s not reflected in Starlink’s signup page), plus shipping and tax, bringing the grand total to just shy of $400.”
After 30 days, users are automatically shifted over to the $120-a-month option, a steep price tag for folks who may have just lost everything they own. The kicker is that this 30-day (not really free) trial was something Starlink already offered, just dressed up as unique disaster altruism.
“This smells like a crafty, bait and switch, wolf in sheep’s clothing scam meant to take advantage of people instead of helping them.”
That’s not to say that Starlink can’t be of service to area residents (assuming they have power and can afford it), just not in the scale and scope presented to locals by Starlink, Musk, and adoring press coverage:
“There may be isolated scenarios when what [Musk] is offering will be a service,” [local Kinney] Baughman said. “But we’re talking about cases where someone’s way up a holler, doesn’t have access to cell service, and where the flooding has broken their fiber. You’re looking at months before you get service. In that case you might want to think about [Starlink].”
But that’s an isolated case, Baughman noted. By the time Starlink arrives for others, general internet service may already be working, and thus someone is roped into paying for a satellite service they don’t actually need.
It’s like the YouTubers who film themselves nobly giving homeless people free tacos, but worse, not free, and at scale… during a major crisis. It’s all once again very demonstrative of who Musk truly is. Or, as the case may be, very clearly isn’t.
Effective Altruism (EA) is typically explained as a philosophy that encourages individuals to do the “most good” with their resources (money, skills). Its “effective giving” aspect was marketed as evidence-based charities serving the global poor. The Effective Altruism philosophy was formally crystallized as a social movement with the launch of the Centre for Effective Altruism (CEA) in February 2012 by Toby Ord, Will MacAskill, Nick Beckstead, and Michelle Hutchinson. Two other organizations, “Giving What We Can” (GWWC) and “80,000 Hours,” were brought under CEA’s umbrella, and the movement became officially known as Effective Altruism.
Effective Altruists (EAs) were praised in the media as “charity nerds” looking to maximize the number of “lives saved” per dollar spent, with initiatives like providing anti-malarial bed nets in sub-Saharan Africa.
If this movement sounds familiar to you, it’s thanks to Sam Bankman-Fried (SBF). With FTX, Bankman-Fried was attempting to fulfill what William MacAskill taught him about Effective Altruism: “Earn to give.” In November 2023, SBF was convicted of seven fraud charges (stealing $10 billion from customers and investors). In March 2024, SBF was sentenced to 25 years in prison. Since SBF was one of the largest Effective Altruism donors, public perception of this movement has declined due to his fraudulent behavior. It turned out that the “Earn to give” concept was susceptible to the “Ends justify the means” mentality.
In 2016, the main funder of Effective Altruism, Open Philanthropy, designated “AI Safety” a priority area, and the leading EA organization 80,000 Hours declared that artificial intelligence (AI) existential risk (x-risk) is the world’s most pressing problem. It looked like a major shift in focus and was portrayed as “mission drift.” It wasn’t.
What looked to outsiders – in the general public, academia, media, and politics – as a “sudden embrace of AI x-risk” was a misconception. The confusion existed because many people were unaware that Effective Altruism has always been devoted to this agenda.
Effective Altruism’s “brand management”
Since its inception, Effective Altruism has been obsessed with the existential risk (x-risk) posed by artificial intelligence. Because the movement’s leaders recognized that this focus could be perceived as “confusing for non-EAs,” they decided to solicit donations and recruit new members through different causes, like poverty and “sending money to Africa.”
When the movement was still small, its leaders planned this bait-and-switch tactic in plain sight (in old forum discussions).
A dissertation by Mollie Gleiberman methodically analyzes the distinction between the “public-facing EA” and the inward-facing “core EA.” Among the study findings: “From the beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk).”
“EA’s key intellectual architects were all directly or peripherally involved in transhumanism, and the global poverty angle was merely a stepping stone to rationalize the progression from a non-controversial goal (saving lives in poor countries) to transhumanism’s far more radical aim,” explains Gleiberman. It was part of their “brand management” strategy to conceal the latter.
The public-facing discourse of “giving to the poor” (in popular media and books) was a mirage designed to get people into the movement and then lead them to the “core EA” cause, x-risk, which is discussed in inward-facing spaces. The guidance was to promote the public-facing cause and keep quiet about the core cause. Influential Effective Altruists explicitly wrote that this was the best way to grow the movement.
In public-facing/grassroots EA, the target recipients of donations, typically understood to be GiveWell’s top recommendations, are causes like AMF – Against Malaria Foundation. “Here, the beneficiaries of EA donations are disadvantaged people in the poorest countries of the world,” says Gleiberman. “In stark contrast to this, the target recipients of donations in core EA are the EAs themselves. Philanthropic donations that support privileged students at elite universities in the US and UK are suddenly no longer one of the worst forms of charity but one of the best. Rather than living frugally (giving up a vacation/a restaurant) so as to have more money to donate to AMF, providing such perks is now understood as essential for the well-being and productivity of the EAs, since they are working to protect the entire future of humanity.”
“We should be kind of quiet about it in public-facing spaces”
Let the evidence speak for itself. The following quotes are from three community forums where Effective Altruists converse with each other: Felicifia (inactive since 2014), LessWrong, and EA Forum.
On June 4, 2012, Will Crouch (before he changed his last name to MacAskill) had already pointed out on the Felicifia forum that “new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation.”
On November 10, 2012, Will Crouch (MacAskill) wrote on the LessWrong forum that “it seems fairly common for people who start thinking about effective altruism to ultimately think that x-risk mitigation is one of or the most important cause area.” In the same message, he also argued that “it’s still a good thing to save someone’s life in the developing world,” however, “of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.”
In 2011, an influential GWWC leader and CEA affiliate, who used the username “utilitymonster” on the Felicifia forum, had a discussion with a high-school student about the “High Impact Career” (HIC, later rebranded as 80,000 Hours). The high schooler wrote: “But HIC always seems to talk about things in terms of ‘lives saved,’ I’ve never heard them mentioning other things to donate to.” Utilitymonster replied: “That’s exactly the right thing for HIC to do. Talk about ‘lives saved’ with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk.”
Another influential figure, Eliezer Yudkowsky, wrote on LessWrong in 2013: “I regard the non-x-risk parts of EA as being important only insofar as they raise visibility and eventually get more people involved in, as I would put it, the actual plot.”
In a 2015 comment on a Robert Wiblin post, Eliezer Yudkowsky clarified: “As I’ve said repeatedly, xrisk cannot be the public face of EA, OPP [OpenPhil] can’t be the public face of EA. Only ‘sending money to Africa’ is immediately comprehensible as Good and only an immediately comprehensible Good can make up for the terrible PR profile of maximization or cause neutrality. And putting AI in there is just shooting yourself in the foot.”
Rob Bensinger, the research communications manager at MIRI (and prominent EA movement member), argued in 2016 for a middle approach: “In fairness to the ‘MIRI is bad PR for EA’ perspective, I’ve seen MIRI’s cofounder (Eliezer Yudkowsky) make the argument himself that things like malaria nets should be the public face of EA, not AI risk. Though I’m not sure I agree […]. If we were optimizing for having the right ‘public face’ I think we’d be talking more about things that are in between malaria nets and AI […] like biosecurity and macroeconomic policy reform.”
Scott Alexander (Siskind) is the author of the influential rationalist blog “Slate Star Codex” and “Astral Codex Ten.” In 2015, he acknowledged that he supports the AI-safety/x-risk cause area, but believes Effective Altruists should not mention it in public-facing material: “Existential risk isn’t the most useful public face for effective altruism – everyone including Eliezer Yudkowsky agrees about that.” In the same year, 2015, he also wrote: “Several people have recently argued that the effective altruist movement should distance itself from AI risk and other far-future causes lest it make them seem weird and turn off potential recruits. Even proponents of AI risk charities like myself agree that we should be kind of quiet about it in public-facing spaces.”
In 2014, Peter Wildeford (then Hurford) published a conversation about “EA Marketing” with EA communications specialist Michael Bitton. Peter Wildeford is the co-founder and co-CEO of Rethink Priorities and Chief Advisory Executive at IAPS (Institute for AI Policy and Strategy). The following segment was about why most people will not be real Effective Altruists (EAs):
“Things in the ea community could be a turn-off to some people. While the connection to utilitarianism is ok, things like cryonics, transhumanism, insect suffering, AGI, eugenics, whole brain emulation, suffering subroutines, the cost-effectiveness of having kids, polyamory, intelligence-enhancing drugs, the ethics of terraforming, bioterrorism, nanotechnology, synthetic biology, mindhacking, etc. might not appeal well.
There’s a chance that people might accept the more mainstream global poverty angle, but be turned off by other aspects of EA. Bitton is unsure whether this is meant to be a reason for de-emphasizing these other aspects of the movement. Obviously, we want to attract more people, but also people that are more EA.”
“Longtermism is a bad ‘on-ramp’ to EA,” wrote a community member on the Effective Altruism Forum. “AI safety is new and complicated, making it more likely that people […] find the focus on AI risks to be cult-like (potentially causing them to never get involved with EA in the first place).”
Jan Kulveit, who leads the European Summer Program on Rationality (ESPR), shared on Facebook in 2018: “I became an EA in 2016, and at the time, a lot of the ‘outward-facing’ materials were about global poverty etc., with notes about AI safety or far future at much less prominent places. I wanted to discover what is the actual cutting-edge thought, went to EAGx Oxford and my impression was the core people from the movement mostly thought far future is the most promising area, and xrisk/AI safety interventions are top priority. I was quite happy with that […] However, I was somewhat at unease that there was this discrepancy between a lot of outward-facing content and what the core actually thinks. With some exaggeration, it felt like the communication structure is somewhat resembling a conspiracy or a church, where the outward-facing ideas are easily digestible, like anti-malaria nets, but as you get deeper, you discover very different ideas.”
Prominent EA community member and blogger Ozy Brennan summarized this discrepancy in 2017: “A lot of introductory effective altruism material uses global poverty examples, even articles which were written by people I know perfectly fucking well only donate to MIRI.”
As Effective Altruists engaged more deeply with the movement, they were encouraged to shift to AI x-risk.
“My perception is that many x-risk people have been clear from the start that they view the rest of EA merely as a recruitment tool to get people interested in the concept and then convert them to Xrisk causes.” (Alasdair Pearce, 2015).
“I used to work for an organization in EA, and I am still quite active in the community. 1 – I’ve heard people say things like, ‘Sure, we say that effective altruism is about global poverty, but — wink, nod — that’s just what we do to get people in the door so that we can convert them to helping out with AI/animal suffering/(insert weird cause here).’ This disturbs me.” (Anonymous#23, 2017).
“In my time as a community builder […] I saw the downsides of this. […] Concerns that the EA community is doing a bait-and-switch tactic of ‘come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.’ […] Personally feeling uncomfortable because it seemed to me that my 80,000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else.” (weeatquince [Sam Hilton], 2020).
Austin Chen, the co-founder of Manifold Markets, wrote on the Effective Altruism Forum in 2020: “On one hand, basically all the smart EA people I trust seem to be into longtermism; it seems well-argued and I feel a vague obligation to join in too. On the other, the argument for near-term evidence-based interventions like AMF [Against Malaria Foundation] is what got me […] into EA in the first place.”
In 2019, EA Hub published a guide: “Tips to help your conversation go well.” Among tips like “Highlight the process of EA” and “Use the person’s interest,” there was “Preventing ‘Bait and Switch.’” The post acknowledged that “many leaders of EA organizations are more focused on community building and the long-term future than animal advocacy and global poverty.” Therefore, to avoid the perception of a bait-and-switch, it recommended mentioning AI x-risk at some point:
“It is likely easier, and possibly more compelling, to talk about cause areas that are more widely understood and cared about, such as global poverty and animal welfare. However, mentioning only one or two less controversial causes might be misleading, e.g. a person could become interested through evidence-based effective global poverty interventions, and feel misled at an EA event mostly discussing highly speculative research into a cause area they don’t understand or care about. This can feel like a “bait and switch”—they are baited with something they care about and then the conversation is switched to another area. One way of reducing this tension is to ensure you mention a wide range of global issues that EAs are interested in, even if you spend more time on one issue.”
Oliver Habryka is influential in EA as a fund manager for the LTFF, a grantmaker for the Survival and Flourishing Fund, and the leader of the LessWrong/Lightcone Infrastructure team. He claimed that the only reason EA should continue supporting non-longtermist efforts is to preserve the public’s perception of the movement:
“To be clear, my primary reason for why EA shouldn’t entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn’t the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.”
The structure of Effective Altruism rhetoric
The researcher Mollie Gleiberman explains EA’s “strategic ambiguity”: “EA has multiple discourses running simultaneously, using the same terminology to mean different things depending on the target audience. The most important aspect of this double rhetoric, however, is not that it maintains two distinct arenas of understanding, but that it also serves as a credibility bridge between them, across which movement recruits (and, increasingly, the general public) are led in incremental steps from the less controversial position to the far more radical position.”
When Effective Altruists talked in public about “doing good,” “helping others,” “caring about the world,” and pursuing “the most impact,” the public understanding was that it meant eliminating global poverty and helping the needy and vulnerable. Inward, “doing good” and the “most pressing problems” were understood as working to mainstream core EA ideas like extinction from unaligned AI.
In communication with “core EAs,” “the initial focus on global poverty is explained as merely an example used to illustrate the concept – not the actual cause endorsed by most EAs.”
Jonas Vollmer has been involved with EA since 2012 and held positions of considerable influence in terms of allocating funding (EA Foundation/CLR Fund, CEA EA Funds). In 2018, he candidly explained when asked about his EA organization “Raising for Effective Giving” (REG): “REG prioritizes long-term future causes, it’s just much easier to fundraise for poverty charities.”
The entire point was to identify whatever messaging works best to produce the outcomes that movement founders, thought leaders, and funders actually wished to see. It was all about marketing to outsiders.
The “Funnel Mode”
According to the Centre for Effective Altruism, “When describing the target audience of our projects, it is useful to have labels for different parts of the community.”
The levels are: Audience, followers, participants, contributors, core, and leadership.
In 2018, in a post entitled The Funnel Mode, CEA elaborated that “Different parts of CEA operate to bring people into different parts of the funnel.”
At first, CEA concentrated outreach on the top of the funnel, through extensive popular media coverage, including MacAskill’s Quartz column and book, ‘Doing Good Better,’ Singer’s TED talk, and Singer’s ‘The Most Good You Can Do.’ The idea was to create a broad base of poverty-focused, grassroots Effective Altruists to help maintain momentum and legitimacy, and act as an initial entry point to the funnel, from which members sympathetic to core aims could be recruited.
The 2017 edition of the movement’s annual survey of participants (conducted by the EA organization Rethink Charity) noted that this is a common trajectory: “New EAs are typically attracted to poverty relief as a top cause initially, but subsequently branch out after exploring other EA cause areas. An extension of this line of thinking credits increased familiarity with EA for making AI more palatable as a cause area. In other words, the top of the EA outreach funnel is most relatable to newcomers (poverty), while cause areas toward the bottom of the funnel (AI) seem more appealing with time and further exposure.”
According to the Centre for Effective Altruism, that’s the ideal route. It wrote in 2018: “Trying to get a few people all the way through the funnel is more important than getting every person to the next stage.”
The magnitude and implications of Effective Altruism, says Gleiberman, “cannot be grasped until people are willing to look at the evidence beyond EA’s glossy front cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.”
Key takeaways
– Public-facing EA vs. core EA
Public-facing/grassroots EA (audience, followers, participants):
The main focus is effective giving à la Peter Singer.
The main cause area is global health, targeting the ‘distant poor’ in developing countries.
The donors support organizations doing direct anti-poverty work.
Core/highly engaged EA (contributors, core, leadership):
The main focus is x-risk/longtermism à la Nick Bostrom and Eliezer Yudkowsky.
The main cause areas are x-risk, AI-safety, ‘global priorities research,’ and EA movement-building.
The donors support highly-engaged EAs to build career capital, boost their productivity, and/or start new EA organizations; research; policy-making/agenda setting.
With AI doomers intensifying their attacks on the open-source community, it becomes clear that this group’s “doing good” is other groups’ nightmare.
– Effective Altruism was a Trojan horse
It’s now evident that “sending money to Africa,” as Eliezer Yudkowsky acknowledged, was never the “actual plot.” Or, as Will MacAskill wrote in 2012, “alleviating global poverty is dwarfed by existential risk mitigation.” The Effective Altruism founders planned – from day one – to mislead donors and new members in order to build the movement’s brand and community.
Its core leaders prioritized the x-risk agenda and treated global poverty alleviation merely as an initial step toward converting new recruits to longtermism/x-risk, a funnel that also happened to enrich the leaders themselves.
This needs to be investigated further.
Gleiberman observes that “The movement clearly prioritizes ‘longtermism’/AI-safety/x-risk, but still wishes to benefit from the credibility that global poverty-focused EA brings.” We now know it was a PR strategy all along. So, no. They do not deserve this kind of credibility.
Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and the author of the book “The TECHLASH and Tech Crisis Communication” and the newsletter “AI Panic.”
We appreciate and value you as one of Joyent’s lifetime Shared Hosting customers. As this service is one of our earliest offerings, and has now run its course, your lifetime service will end on October 31, 2012.
One would imagine that the FTC might have some questions for Joyent’s management about living up to the promises of what was offered. Jason Hoffman, who apparently co-founded both TextDrive and Joyent, seemed to make things worse with his defense of the decision, basically admitting that this is screwing over their earliest supporters and biggest advocates:
Having co-founded two companies that ultimately became Joyent, growing from a tiny startup to where we are today has had its ups and downs, and this is one of the toughest decisions I’ve made. In particular because I’ve always been the biggest advocate for pushing a shared hosting product forward, and then here I am, the only remaining “founder” that is active.
It’s ironic that our biggest advocates are the ones most affected by this and I know many of you are disappointed in me. I’ve received many questions and comments about why the service is being discontinued and I’m listening and will continue to listen. And like the past, this response won’t be my last.
Making the decision to discontinue the service was extremely difficult. It was driven by some simple things: the hardware is simply old (6-8 years old), it’s failing, there isn’t an upgrade path from it, there’s more than many of you likely realize and oddly enough it’s more expensive with time (while not being used much). The rest of the Joyent’s business has been paying for that, and I can’t make the argument as to why it can continue.
Yet, we’re only here because of the initial community that trusted us, and I’m genuinely grateful for the support. I’m sorry that I’ve lost that trust and I’ve upset you. You have a right to be upset. This was a tough decision with some nuance to it and none of this is lost on me.
This seems to go back and forth. First off, it’s not “ironic” that you’re screwing over your early supporters and not giving them what was promised. It’s a highly questionable business practice. As for not being able to make an argument for why the service can continue, one would think keeping Joyent’s name and reputation from being dragged through the mud would be a potential argument. Also, avoiding possible smackdowns from government officials for selling one thing and delivering another.
I recognize that things change and businesses change over time. But the company did make the promise that these were lifetime accounts and that they’d stay up for as long as the company was around. It seems only reasonable that it should not just cut off those accounts without any sort of recompense.
I’ve actually been one of the few satisfied Sprint customers for many years. Over the past few years, they were the only mobile broadband provider who didn’t limit mobile broadband to ridiculously low caps, like the 5 gigs per month other carriers imposed. In fact, this was a key selling point, and one of the reasons why I happily stuck it out with Sprint. I know Wall St. analysts have been insisting that Sprint would need to cap such broadband usage at some point, but it seemed like a really short-sighted idea, since the unlimited broadband is really about the only facet of a Sprint account that makes it more appealing than its competitors. And so… of course… it appears to be going away. Here’s the email I recently received concerning my “phone as modem” option, which I use often enough:
Basically, with no warning, effective immediately, Sprint has unilaterally changed our deal from one where I was paying for unlimited data via the phone as a modem — to one where it’s capped at a stupidly low 5GB. And, the company even has the gall to then happily tell me (below the screenshot cut off) that this change won’t impact how much I pay — as if I should have expected them to increase the fees while taking away a feature I like.
Considering that unlimited mobile broadband was not only part of the marketing pitch, but also a big part of the reason for why I signed up for the plan I did, this certainly seems like a bait-and-switch deal… and I’d thought that bait-and-switch deals like this were violations of FTC rules, but what do I know?
Of course, on a whim, I wondered if Sprint’s marketing had changed… and I did a quick search on “Sprint unlimited broadband” and turned up the following advertisement:
If you can’t see it clearly — it appears Sprint is still advertising unlimited mobile broadband — highlighting that you can “avoid the data dilemma” and “get truly Unlimited data.” Except, um, that’s clearly not the case. Changing your plans unilaterally for those who specifically signed up for unlimited broadband is one thing. But continuing to advertise such plans while limiting them and — even worse, effectively mocking such limited plans — is simply adding rather obnoxious insult to injury. Sorry Sprint, but you may have finally convinced me it’s time to explore other options.
There’s lots of buzz going around concerning the news that an AT&T exec has admitted that, to deal with the company’s own inability to build out a strong cellular network (angering tons of iPhone users), it’s planning to put in place caps and charge more to high-end users. Of course, this is pure bait and switch. The company sold people on an unlimited data plan, failed to invest in its network, and pushed high bandwidth apps on people. And, of course, it’s worth noting that while they now want to charge high bandwidth users more, they don’t say anything about the low bandwidth users. No one gets a discount. AT&T is making a ton of money off of the iPhone. It could have — and should have — invested more of that into network upgrades. Now it’s blaming its most loyal users — the same ones who it recommended high bandwidth apps to — and expecting that everyone will be happy with that? AT&T may discover that people start looking for other alternatives if they dump the unlimited data offering that they sold people.
You may recall last month that Ben Stein was fired from the NY Times after it was revealed that he was pitching “free credit reports” under the brand FreeScore.com, from a company, Adaptive Marketing, whose parent company, Vertrue, has a reputation for figuring out ways to make those credit reports not so free. Reuters’ Felix Salmon helped expose this in a blog post entitled Ben Stein, predatory bait-and-switch merchant. A pseudonymous blogger under the name flaneur de fraude linked to Salmon’s post, and quoted the “predatory bait-and-switch” part.
Adaptive Marketing didn’t go after Felix Salmon for that phrase… but it did go after this anonymous blogger, starting pre-litigation discovery to try to unmask who it is. Perhaps that’s because in the blog post agreeing with Salmon, the blogger detailed a rather long and detailed list of instances where Adaptive Marketing’s parent company, Vertrue, has gotten in trouble for shady practices involving getting recurring charges onto users’ credit cards. Among the links on the blog? One to Vertrue’s Better Business Bureau rating, where it has a solid “F.” Paul Alan Levy, who alerted us to this story and is representing the blogger, notes, “When even the Better Business Bureau disses a company, you know there must be a big problem.” Levy continues:
Although the burden on a defamation plaintiff would be to prove falsity, in this case, of course, it is hard to believe that what the blogger said isn’t true. Instead of just getting a credit score, consumers are entitled to obtain their entire credit report free of charge at the government-mandated web site annualcreditreport.com. And the ads in question solicit telephone calls in which the service of credit monitoring is at best hawked, and at worst, as many consumers have complained, slipped in — it remains to be seen which is true. Such services “are often overrated, oversold, and overpriced.” But regardless of whether the services are worthwhile, and whether they are charged to consumers’ credit cards after a genuine consent, “bait and switch” seems to be a fair characterization of what Adaptive is doing.
Adaptive and Vertrue have been similarly criticized in the Wall Street Journal, Washington Post, and New York Times, but Adaptive doesn’t claim defamation against companies that can afford to defend themselves. So Adaptive’s suit seems to be just the latest in a long line of cases in which companies that don’t want to be criticized seek to cleanse their reputations through subpoenas sent as a means of intimidation to those who may not be able to defend themselves. It remains to be seen whether the Streisand effect gives them second thoughts.
In the meantime, the blogger in question is pointing out both that Vertrue is also going after Wikipedia (good luck with that) and is now dealing with a Senate subpoena. Perhaps threatening an anonymous blogger for pointing out some questions about the company’s past isn’t such a wise move. It only seems likely to draw more attention to these questions than if the company had just left things alone. Or… even better… cleaned up its act.
Yesterday, we wrote a highly critical post concerning the details around Choruss, the recording industry’s latest plan to get universities or ISPs to hand over a chunk of money in exchange for “covenants not to sue.” On a private email list (which has been forwarded to me by a few members of that list), Mr. Griffin responded by claiming that my “report is factually incorrect in every respect.”
I certainly hope that’s true!
The points I’ve raised are that the industry will continue suing file sharing networks, that it will still pursue three-strikes policies, and that Choruss will be expensive, adding yet another middleman and diverting a chunk of money away from the legitimate business models that many musicians have been establishing successfully. Is he saying all of these assertions are false?
Actually, Griffin doesn’t address or refute any of these points at all. With respect to the last one, he actually confirms it, by claiming that Choruss will be costly to run.
The only “factual” point he disputes is a rather minor one, concerning whether the program would also cover publishers and songwriters rather than just the labels. He insists that it will, noting that Warner Music owns one of the largest publishers. That’s true, but hardly eases the worries. It just suggests, again, that this is a plan for Warner and its subsidiaries, rather than for building a better system for all stakeholders. And he doesn’t explain how the system can cover the necessary rights at the price points being discussed. In fact, by noting how costly the program is to run, and how it will lose money at first, it certainly sounds like he’s saying “this program will start out cheap, but then we’ll jack up the fees.”
He claims that Choruss “cannot credibly be claimed to be a money grab — the costs will exceed the fees,” but that’s highly misleading on several counts. First, as noted, it confirms just how expensive the program will be. Second, if it’s a pure money loser, then why would anyone be involved with it at all? Obviously the idea, and the whole reason why Warner Music is backing it, is that it expects this to be a money maker, eventually. Claiming that it’s costly simply confirms my original point, that inserting yet another costly middleman is the last thing that we need in the process. And this just suggests that any early pricing is, once again… bait and switch. The eventual prices will have to be increased once people are locked in.
That seems to confirm my initial complaints, rather than show how they’re “factually incorrect.”
Mr. Griffin, (on a private email list), again tries to refute the claim that they haven’t included the stakeholders in the process, by noting:
“the calendar is a clear refutation: The coming week has Choruss at SXSW, a music conference in Nashville and the music educator’s conference in Boston. We’ve done appearances and podcasts with Educause, dozens of public meetings at colleges and a keynote at Digital Music Forum.”
Yes, after coming up with the plan in back rooms, without input from the actual stakeholders, Griffin has started going out and presenting the plan to others. But there’s been no open discussion with those of us worried about the inevitable consequences of his plan. There’s been no explanation of why this is actually needed. There’s been no attempt to actually respond to the numerous questions that we’ve raised about the plan and no attempt to bring the actual users into the discussion:
Why do we even need such a plan when plenty of musicians are showing that they can craft business models on the open market that work?
How does adding yet another middleman make the music market any more efficient?
Will the recording industry promise to stop trying to shut down file sharing systems if this program gets adopted?
Will the recording industry promise to stop pushing for 3 strikes if this program gets adopted?
How will the program prevent the gaming opportunities, where artists set up scripts to constantly reload/download their songs?
Why should music be separated out and subsidized while other industries have to come up with their own business models?
Why should those who don’t listen to much music and aren’t interested in giving their money to the recording industry be required to participate if their university or ISP decides to make them?
Finally, Mr. Griffin takes a personal swipe at me, saying that no “responsible professional” would write what I’ve been writing. I’ve the highest respect for Mr. Griffin, who I do believe is very capable and very smart — and most certainly has the best of intentions with Choruss. But it’s a bad plan and he seems unwilling to address the many, many questions raised about it, other than to brush anyone who disagrees with him aside, and focus on talking to friendlier audiences. If he wants to brush me off as not a “responsible professional,” that’s fine. I’m willing to let anyone judge me on my work, not on what Griffin says about me. But the very least he could do is actually address the points that I’ve raised.
To date, his form of “discussion” has been to have Warner Music PR send me a statement saying that it’s “premature” to issue any criticism of his plan. That’s not discussion and that’s not addressing the many, many questions raised by his plan.
But, there’s some good news. That “music conference in Nashville” where he’ll be presenting about Choruss next week is the Leadership Music Digital Summit… which I happen to be keynoting. So, I’d love to sit down with Griffin and see if he’ll actually answer some of these questions, rather than continue brushing us off as being “factually incorrect in every respect,” without actually addressing the fundamental questions raised.
Back in December, when we revealed how Warner Music, through consultant Jim Griffin and his new organization “Choruss,” was quietly pushing a music tax on universities, Warner and Griffin snapped back angrily, telling us it wasn’t fair to criticize the plan, because it was still being “discussed.” Yet, as we then asked: where is that discussion and why isn’t it taking place with the actual stakeholders? To date, the answer has been a near deafening silence. Despite having reached out to both Griffin and Warner Music directly, neither has shown any interest in actually engaging in any form of conversation.
Now we’re beginning to learn why.
While we discussed, in detail, why any such music tax is problematic, the details coming out make it clear that this is much worse than originally imagined. In fact, it’s so bad that it can be described accurately as a bait-and-switch program designed to make people (1) pay lots of money (2) believing they’re now free to file share and then find out that (3) file sharing systems will still be sued out of existence and (4) the users themselves, despite paying, will still be liable for massive lawsuits. It’s basically a plan to give the record labels tons of money, handed over by universities (so users have no chance to opt-out) without actually changing anything.
After months of silence on what he was working on behind closed doors and in backrooms, Griffin recently gave a prepared speech supposedly revealing some “details” on the plan — but as IP attorney Bennett Lincoff points out, what Griffin and Choruss are proposing is to pull the wool over universities and the public’s eyes. The plan, as we originally pointed out, isn’t a license: it’s merely a covenant not to sue — and that leads to all sorts of problems.
First, considering that the RIAA has been cutting back on lawsuits, that’s not particularly meaningful. It’s still pushing for 3 strikes policies that will cut users off from the internet, even if they’ve paid up through Choruss. Furthermore, as was made clear in the speech, the RIAA won’t stop trying to shut down file sharing systems. So, people who think this is a good idea because it will let them use The Pirate Bay or Limewire may discover after getting locked into this program that the lawsuits continue and those services keep getting shut down. Next, since it’s just a covenant for the labels not to sue, rather than a license, it doesn’t cover all of the other rightsholders, such as songwriters and the music publishers — meaning that those who file share will still be wide open to lawsuits from those parties.
This is quite a scheme that the record labels and Griffin may pull off:
Convince universities to buy into the program with no input from students. Universities will buy into it because they think they’re “helping” deal with the “problem” of file sharing… and to avoid Congress forcing them into such agreements
Universities pass the cost on to students (of course), so students are forced to pay for this
Record labels get a big chunk of money for no good reason
New expensive bureaucracy (Choruss) gets set up to siphon more middleman cash away from musicians
Record labels don’t do anything different, since they already have started moving away from suing individuals (sorta)
The public thinks that file sharing is now legal
Record labels continue to sue and shut down favorite file sharing networks, leaving only crappy, limited and expensive “approved” systems
Individuals who paid up start getting sued by other rightsholders not covered by this agreement and not getting any money from it
And most of the press will eat it up as a revolutionary agreement whereby the record labels “legalize” file sharing.
Now can you understand why Griffin and Warner Music aren’t open to any real conversation and will slam anyone who actually offers to take part in one? A real conversation might bring out these issues, and that’s the last thing the record labels want. They want everyone to believe they’re working to make file sharing legal, when all they’re doing is constructing a massive wealth transfer from people to the labels, providing almost no benefit to consumers at all.
For the last few years, various connectivity providers sold “unlimited” data plans when the reality was the plans weren’t unlimited at all. Many providers are now changing the plans and instituting clearer caps, but it still seems a bit ridiculous to have marketed unlimited data plans and then pulled the rug out from under those who bought exactly what you sold them. Up in Canada, it seems that TELUS is taking it a step further. Not only did it sell people “unlimited” plans that it now regrets, it’s exercising some vague language in its contract that allows it to simply cancel the plans of those who had bought into the “unlimited” plan even just a short while ago. The company is forcing users to switch from a $75 unlimited plan to a $65 plan that is limited to just one GB per month, and dumping anyone who won’t switch. That would seem to be a pretty strong bait-and-switch claim. Sure, perhaps the telcos oversold these unlimited plans, but that doesn’t mean they shouldn’t be required to live up to what they sold.