We’ve got one more cross-post episode for you today, then next week we’re back with a brand new discussion. Recently, Mike joined the Daily Beast’s The New Abnormal podcast with host Andy Levy for a conversation about the big news from last week: Biden signing a bill that will ban TikTok in the US if owner ByteDance doesn’t divest from it. The full episode of The New Abnormal covers other topics as well, or you can listen to just Mike’s segment here on this week’s episode.
It’s been a long two years since the Dobbs decision overturned Roe v. Wade. Between May 2022, when the Supreme Court’s draft opinion was leaked, and the following month, when the case was decided, there was a mad scramble to figure out what the impacts would be. Beyond the obvious peril of stripping away half the country’s right to reproductive healthcare, the decision raised a flurry of concerns about digital surveillance and mass data collection.
Although many activists fighting for reproductive justice had been operating for some time under the assumption of little to no legal protection, for most people the Dobbs decision was a sudden and scary revelation. Everyone implicated in that moment understood, at least roughly, the stark difference between pre-Roe 1973 and post-Roe 2022: living under the most sophisticated surveillance apparatus in human history presents a vastly different landscape of threats. Since 2022, some suspicions have been confirmed, new threats have emerged, and overall our risk assessment has grown smarter. Below, we cover the most pressing digital dangers facing people seeking reproductive care, and ways to combat them.
Digital Evidence in Abortion-Related Court Cases: Some Examples
Social Media Message Logs
A case in Nebraska resulted in a woman, Jessica Burgess, being sentenced to two years in prison for obtaining abortion pills for her teenage daughter. Prosecutors used a Facebook Messenger chat log between Jessica and her daughter as key evidence, bolstering the concerns many had raised about using such privacy-invasive tech products for sensitive communications. At the time, Facebook Messenger did not have end-to-end encryption.
In response to criticisms about Facebook’s cooperation with law enforcement that landed a mother in prison, a Meta spokesperson issued a frustratingly laconic tweet stating that “[n]othing in the valid warrants we received from local law enforcement in early June, prior to the Supreme Court decision, mentioned abortion.” They followed this up with a short statement reiterating that the warrants did not mention abortion at all. The lesson is clear: although companies do sometimes push back against data warrants, we have to prepare for the likelihood that they won’t.
Google: Search History & Warrants
Well before the Dobbs decision, prosecutors had already used Google Search history to indict a woman for her pregnancy outcome. In this case, it was keyword searches for misoprostol (a safe and effective abortion medication) that anchored the prosecutor’s case against her. Google acquiesced, as it so often has, to the warrant request.
Related to this is the ongoing and extremely complicated territory of reverse keyword and geolocation warrants. Google promised that it would remove from user profiles all location history tied to visits to abortion clinics. Researchers tested this claim, and it was shown to be false, twice. Late in 2023, Google made a bigger promise: it would soon change how it stores location data to make it much more difficult, if not impossible, for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years. This would be a genuinely helpful measure, but we’ve been conditioned to approach such claims with caution. We’ll believe it when we see it (and refer to external testing for proof).
Other Dangers to Consider
Doxxing
Sites set up to dox healthcare professionals who offer abortion services are about as old as the internet itself. Doxxing comes in a variety of forms, but a rough working definition is the weaponization of open source intelligence with the intention of escalating to other harms. There’s been a massive increase in hate groups abusing public records requests and data broker collections to publish personal information about healthcare workers, and the doxxing websites hosting such material are updated frequently. For the past few years, doxxing has led to steadily rising material dangers: targeted harassment, gun violence, and arson, to name a few.
There are some piecemeal attempts at data protection for healthcare workers in more protective states like California (one of which we’ve covered). Other states may offer some form of address confidentiality program that provides people with proxy addresses. Though these can be effective, they are not comprehensive. Because doxxing campaigns are typically coordinated through a combination of open source intelligence tactics, they present a particularly difficult threat to protect against. This is especially true for government and medical industry workers whose information may be exposed through public records requests.
Data Brokers
Recently, Senator Wyden’s office released a statement about a long investigation into Near Intelligence, a data broker company that sold geolocation data to The Veritas Society, an anti-choice think tank. The Veritas Society then used the geolocation data to target individuals who had traveled near healthcare clinics that offered abortion services and delivered pro-life advertisements to their devices.
That alone is a stark example of the dangers of commercial surveillance, but it’s still unclear what other ways this type of dataset could be abused. Near Intelligence has filed for bankruptcy, but they are far from the only, or the most pernicious, data broker company out there. This situation bolsters what we’ve been saying for years: the data broker industry is a dangerously unregulated mess of privacy threats that needs to be addressed. It not only contributes to the doxxing campaigns described above, but essentially creates a backdoor for warrantless surveillance.
Domestic Terrorist Threat Designation by Federal Agencies
Midway through 2023, The Intercept published an article about a tenfold increase in the federal designation of abortion-rights activist groups as domestic terrorist threats. This casts a massive shadow of risk over organizers and activists working in the struggle for reproductive justice. The digital surveillance capabilities of federal law enforcement are far more sophisticated than those of the typical anti-choice zealot. Most people in the abortion access movement may not have to worry about being labeled a domestic terrorist threat, but for some that is a reality, and strategizing against it is vital.
Looming Threats
Legal Threats to Medication Abortion
Last month, the Supreme Court heard oral arguments challenging the FDA’s approval of and regulations governing mifepristone, a widely available and safe abortion pill. If the anti-abortion advocates who brought this case succeed, access to the most common medication abortion regimen used in the U.S. would end across the country—even in those states where abortion rights are protected.
Access to abortion medication might also be threatened by a 150-year-old obscenity law. Many people now recognize the long-dormant Comstock Act as a potential avenue to criminalize procurement of the abortion pill.
Although the outcomes of these legal challenges are yet to be determined, it’s reasonable to prepare for the worst: if there is no longer a legal way to access medication abortion, there will be even more surveillance of the digital footprints prescribers and patients leave behind.
Electronic Health Records Systems
Electronic Health Records (EHRs) are digital versions of medical records meant to be easily stored and shared between medical facilities and providers. Since abortion restrictions are now dictated on a state-by-state basis, the sharing of these records across state lines presents a serious matrix of concerns.
As some academics and privacy advocates have outlined, the interoperability of EHRs can jeopardize the safety of patients when reproductive healthcare data is shared across state lines. Although the Department of Health and Human Services has proposed a new rule to help protect sensitive EHR data, it’s currently possible for data shared between EHRs to lead to the prosecution of people who seek or provide reproductive healthcare.
The Good Stuff: Protections You Can Take
Perhaps the most frustrating aspect of what we’ve covered thus far is how much is beyond individual control. It’s completely understandable to feel powerless against these monumental threats. That said, you aren’t powerless. Much can be done to protect your digital footprint, and thus, your safety. We don’t propose reinventing the wheel when it comes to digital security and data privacy. Instead, rely on the resources that already exist and re-tool them to fit your particular needs. Here are some good places to start:
Create a Security Plan
It’s impossible, and generally unnecessary, to implement every privacy and security tactic or tool out there. What’s more important is figuring out the specific risks you face and finding the right ways to protect against them. This process takes some brainstorming around potentially scary topics, so it’s best done well before you are in any kind of crisis. Pen and paper works best. Here’s a handy guide.
After you’ve answered those questions and figured out your risks, it’s time to locate the best ways to protect against them. Don’t sweat it if you’re not a highly technical person; many of the strategies we recommend can be applied in non-tech ways.
Careful Communications
Secure communication is as much a frame of mind as it is a type of tech product. When you are able to identify which aspects of your life need to be spoken about more carefully, you can then make informed decisions about who to trust with what information, and when. It’s as much about creating ground rules with others about types of communication as it is about normalizing the use of privacy technologies.
Assuming you’ve already created a security plan and identified some risks you want to protect against, begin thinking about the communication you have with others involving those things. Set some rules for how you broach those topics, where they can be discussed, and with whom. Sometimes this might look like the careful development of codewords. Sometimes it’s as easy as saying “let’s move this conversation to Signal.” Now that Signal supports usernames (so you can keep your phone number private), as well as disappearing messages, it’s an obvious tech choice for secure communication.
Compartmentalize Your Digital Activity
As mentioned above, it’s important to know when to compartmentalize sensitive communications to more secure environments. You can expand this idea to other parts of your life. For example, you can designate different web browsers for different use cases, choosing those browsers for the privacy they offer. One might offer significant convenience for day-to-day casual activities (like Chrome), whereas another is best suited for activities that require utmost privacy (like Tor).
Now apply this thought process towards what payment processors you use, what registration information you give to social media sites, what profiles you keep public versus private, how you organize your data backups, and so on. The possibilities are endless, so it’s important that you prioritize only the aspects of your life that most need protection.
Security Culture and Community Care
Both tactics mentioned above incorporate a sense of community when it comes to our privacy and security. We’ve said it before and we’ll say it again: privacy is a team sport. People live in communities built on trust and care for one another; your digital life is imbricated with others in the same way.
If one node on a network is compromised, it will likely implicate others on the same network. This principle of computer network security applies just as well to social networks. Although traditional information security often builds from a paradigm of “zero trust,” we are social creatures and must work against that idea. The goal is to incorporate elements of shared trust while pushing for a culture of security.
Sometimes this looks like setting standards for how information is articulated and shared within a trusted group. Sometimes it looks like choosing privacy-focused technologies to serve a community’s computing needs. The point is to normalize these types of conversations, to let others know that you’re caring for them by attending to your own digital hygiene. For example, when you ask for consent to share images that include others from a protest, you are not only pushing for a culture of security, but normalizing the process of asking for consent. This relationship of community care through data privacy hygiene is reciprocal.
Help Prevent Doxxing
As touched on in the “Other Dangers to Consider” section above, doxxing can be frustratingly difficult to protect against, especially when public records are being used against you. It’s worth looking into whether your state’s voter registration records are public, and how you can request that your information be redacted (success may vary by state).
Similarly, although business registration records are publicly available, you can appeal to websites that mirror that information (like Bizapedia) to have your personal information taken down. This is of course only a concern if you have a business registration tied to your personal address.
If you work for a business that is susceptible to public records requests revealing sensitive personal information about you, there’s little to be done to prevent it. You can, however, apply for an address confidentiality program if your state has one. You can also do the somewhat tedious work of scrubbing your personal information from other places online (since doxxing usually draws on a combination of information sources). Consider subscribing to a service like DeleteMe (or follow a free DIY guide) for a more thorough process of minimizing your digital footprint. Collaborating with trusted allies to monitor hate forums is a smart way to avoid having to look up your own information alone. Sharing that responsibility with others makes it easier to do, and lets the group plan together for prevention and incident response.
Take a Deep Breath
It’s natural to feel bogged down by all the thought that has to be put towards privacy and security. Again, don’t beat yourself up for feeling powerless in the face of mass surveillance. You aren’t powerless. You can protect yourself, but it’s reasonable to feel frustrated when there is no comprehensive federal data privacy legislation that would alleviate so many of these concerns.
Take a deep breath. You’re not alone in this fight. There are guides to help you learn more about stepping up your privacy and security, and we’ve even curated a special list of them. There is also the Digital Defense Fund, a digital security organization for the abortion access movement, which we are grateful and proud to boost. And though it can often feel like privacy is getting harder to protect, in many ways it’s actually improving. With all that information, and by continuing to trust your communities and push for a culture of security within them, safety is much easier to attain. With a bit of privacy, you can go back to focusing on what matters, like healthcare.
New York City subway and bus riders who skip paying fares are threatening the fiscal health of the nation’s largest public transportation provider and its ability to improve service, the transit authority’s chief executive said Wednesday.
“This is a fundamental, existential threat to our ability to provide first-class public transit and make it better, more frequent, more reliable,” Janno Lieber said during the agency’s monthly board meeting. “And so we got to push back.”
Ah, the ol’ “existential threat.” When this phrase has been used to describe everything from international terrorism to social media moderation, it kind of loses all meaning. Normally, existential threats describe something serious, not the easily expected outcome of a public service pricing itself out of the market.
Now, we can all argue over things like whether a publicly subsidized service should be more affordable, or whether it’s ok to just not pay for government services that seem too expensive, but we can probably agree that fare jumping — while not desirable — is not an “existential threat.” For it to be an actual threat, it would have to target things that aren’t paid for with public funds and are extremely unlikely to be shut down just because they’re not bringing in as much money as city officials would prefer.
On top of that, the MTA seems to feel there are plenty of other, far more real threats that need to be addressed. If fare jumping were the real issue, the city probably wouldn’t have deployed the National Guard to subway hubs to perform bag searches or otherwise make city residents feel that the martial law so strongly desired by former mayor Mike Bloomberg is one step closer to reality.
The MTA doesn’t appear to have any good ideas on how to beat back the fare jumpers (I mean, short of possibly literally beating them), but it still had an idea. “Can’t AI do it?” the MTA asked, without bothering to consult the public that would not only be subjected to it, but expected to pay for it.
While it is using AI as a “counting tool” to add up the total “lost” to fare jumping, it won’t be able to use another favorite tool of the “Can’t AI do it?” crowd, as Stephen Nessen reports for Gothamist.
Buried in the new state budget is one sentence with major implications for the future of MTA fare enforcement: a ban on the use of facial recognition.
The new law requires the MTA to “not use, or arrange for the use, of biometric identifying technology, including but not limited to facial recognition technology, to enforce rules relating to the payment of fares.”
State Assemblymember Zohran Mamdani of Queens told Gothamist the measure was added to the budget to protect New Yorkers and their privacy.
“There has long been a concern [facial recognition] could invade upon people’s lives through expanded surveillance and through the criminalization of just existing within the public sphere,” Mamdani said.
Like every government budget bill everywhere, the New York state budget is a great place to hide law revisions you don’t want the public to discover until the bill has been passed. The same thing happened here, except this one somehow managed to benefit the public, rather than the MTA or the domestic surveillance hawks in the state legislature.
This MTA-targeting ban resulted in the MTA delivering a statement that was as unnecessary as it was empty and meaningless.
An MTA spokesperson said in a statement that the agency has never used facial recognition in its expanding surveillance system. The agency is in the midst of installing cameras in every subway station and some train cars.
Ok, then. So this changes nothing about the current state of affairs vis-à-vis MTA passenger surveillance. But it does change things about its future plans, which most likely included (prior to the passage of the budget bill) other options for policing fare jumping. And since the NYPD gets to use facial recognition, the MTA would not be out of line to assume it would enjoy the same privileges… unless something prevented it from doing so… which is what has happened here.
How long this will last is still up in the air. But, for now, the MTA will have to use less intrusive surveillance options to combat this “existential crisis.” Hopefully, if it asks for this moratorium to be lifted in the future, state leaders will at least allow the public to participate in the discussion, rather than hide a couple of lines in a must-pass budget bill.
Embark on the journey of language learning with the Rosetta Stone lifetime subscription for all languages. Rosetta Stone has been the go-to software for language learning for the past 27 years. With its immersive and intuitive training method, you might be reading, writing, and speaking a new language with confidence in no time. It’s on sale for $190.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Well, hey, let’s start this article by noting that Techdirt has no paywall, no annoying or intrusive advertising. And we almost never even bug people about the fact that there are many ways to support the site, including getting early access to some articles, the ability to join our Insider Discord, and other features as well. And we really do rely on support from readers to keep doing what we do without a paywall or other nuisances.
And a key reason that we don’t paywall our writing is because we think it diminishes the value of it. Techdirt is about helping more people understand issues around tech, policy, business, law, and civil liberties, and we can’t do a very good job of that if it’s all hidden behind a paywall.
Indeed, over the last few years there has been a flood of articles from deep thinkers about how dangerous paywalling all our news is for democracy, with the clearest being Nathan Robinson’s “The Truth Is Paywalled But The Lies Are Free.”
The latest entry in this category comes from the lofty perch of the Atlantic, with a piece by Richard Stengel also titled “Democracy Dies Behind Paywalls.”
In it, though, Stengel puts on his advocacy hat to make the case that journalism should be free, at least through the 2024 US election. There’s just one big problem. Stengel apparently failed to convince his own publisher, because, well, take a look:
If you can’t see that, it’s the article we’re talking about. It shows the headline and the subhead reading “The case for making journalism free—at least during the 2024 election” followed by a popup noting “To read this story, sign in or start a free trial.”
Oops.
That’s not to say the article is bad. It’s actually quite good. It just feels a bit on the ironic side. From that paywalled article:
Paywalls create a two-tiered system: credible, fact-based information for people who are willing to pay for it, and murkier, less-reliable information for everyone else. Simply put, paywalls get in the way of informing the public, which is the mission of journalism. And they get in the way of the public being informed, which is the foundation of democracy. It is a terrible time for the press to be failing at reaching people, during an election in which democracy is on the line. There’s a simple, temporary solution: Publications should suspend their paywalls for all 2024 election coverage and all information that is beneficial to voters. Democracy does not die in darkness—it dies behind paywalls.
The problem is not just that professionally produced news is behind a wall; the problem is that paywalls increase the proportion of free and easily available stories that are actually filled with misinformation and disinformation. Way back in 1995 (think America Online), the UCLA professor Eugene Volokh predicted that the rise of “cheap speech”—free internet content—would not only democratize mass media by allowing new voices, but also increase the proliferation of misinformation and conspiracy theories, which would then destabilize mass media.
Some of us out here in the cheap speech seats continue to fight the good fight of providing information that is beneficial to voters, but sometimes it means we need to deal with impossible paywalls on our own. And, also, continually seek out new business models.
And, look, most people recognize that the media business is facing a ton of challenges these days. There are reports of journalist layoffs all the time. The internet effectively “unbundled” the package of things that made the newspaper worth buying as a whole in the past (classifieds, sports, comics, news, reviews, etc.) and made it so that each bit could be produced separately, but, perhaps, without the corresponding business model to prop it up.
Because of that, it’s no surprise that desperation sets in. And when desperation sets in, too many people go for the easy shot: “we need money, people should pay us, so we have money.” Unfortunately, that doesn’t take the reality of the competitive landscape, the market, and the information ecosystem into account. Charging for news online might work a lot better if there wasn’t so much competitive stuff freely available.
Many people point out that before the internet, most news required a subscription (leaving aside things like libraries and free weeklies and whatnot). However, when there is so much information available for free, putting all of the good, credible information behind a paywall can allow the “free” zone to be flooded with shit.
In his (paywalled) piece against paywalls, Stengel argues that paywalls are one of the major reasons American trust in media is so low. He assumes that what most people see as “the media” is mostly the terrible crap they get for free, while the good journalism sits behind a paywall. I’m not entirely convinced that’s true, and it would be great to see some data on it. But anecdotally, I doubt the account MAGAhat1488 thinks the press is the enemy of the people because he doesn’t want to plunk down for a New Yorker subscription.
To be fair to Stengel (though, not to his publisher at the Atlantic, which could have easily lifted the paywall), he admits to the irony:
The best way to address these challenges is for newsrooms to remove or suspend their paywalls for stories related to the 2024 election. I am mindful of the irony of putting this plea behind The Atlantic’s own paywall, but that’s exactly where the argument should be made. If you’re reading this, you’ve probably paid to support journalism that you think matters in the world. Don’t you want it to be available to others, too, especially those who would not otherwise get to see it?
Now, of course, the bigger underlying issue is that news orgs need sustainable business models (hey, reminder: support Techdirt!). But, as Stengel notes, a lot of publications did lift their paywalls during the height of the pandemic and actually found that it helped rather than hurt their business:
During the pandemic, some publications found that suspending their paywall had an effect they had not anticipated: It increased subscriptions. The Seattle Times, the paper of record in a city that was an early epicenter of coronavirus, put all of its COVID-related content outside the paywall and then saw, according to its senior vice president of marketing, Kati Erwert, “a very significant increase in digital subscriptions”—two to three times its previous daily averages. The Philadelphia Inquirer put its COVID content outside its paywall in the spring of 2020 as a public service. And then, according to the paper’s director of special projects, Evan Benn, it saw a “higher than usual number of digital subscription sign-ups.”
The Tampa Bay Times, The Denver Post, and The St. Paul Pioneer Press, in Minnesota, all experienced similar increases, as did papers operated by the Tribune Publishing Company, including the Chicago Tribune and the Hartford Courant. The new subscribers were readers who appreciated the content and the reporting and wanted to support the paper’s efforts, and to make the coverage free for others to read, too.
I have always argued that it’s best to make the core reporting free. You can then charge for other things and see if you can get people to support you. It could be things like events, or early access to articles (as in the case of Techdirt!), or access to the reporters. It sounds like the COVID paywall removals discovered the same kinda thing: people were interested in paying for something to support the paper delivering them good news.
The journalism acted as an advertisement for the paper itself in many ways. News orgs need to lean into that element. Let the journalism act as promotion, and find other ways to get people to pay to support.
There are ways to do it, but it seems odd to advocate for it behind a paywall.
The FCC’s Affordable Connectivity Program (ACP), part of the 2021 infrastructure bill, provided 23+ million low-income households a $30 broadband discount every month. But the roughly 60 million Americans benefiting from the program are poised to soon lose the discount because key Republicans — who routinely dole out billions of dollars on far dumber fare — refuse to fund a $4-$7 billion extension.
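The math here is worth pausing on. Here’s a back-of-envelope sketch using only the figures quoted above (the flat $30 is a simplification, since some households, such as those on tribal lands, qualified for a larger discount):

```python
# Rough cost of extending the ACP, from the article's figures:
# 23 million enrolled households, $30/month discount each.
households = 23_000_000
discount_per_month = 30  # dollars per household

monthly_cost = households * discount_per_month  # dollars per month
annual_cost = monthly_cost * 12                 # dollars per year

# How long would the proposed $4-$7 billion extension keep it running?
low_months = 4_000_000_000 / monthly_cost
high_months = 7_000_000_000 / monthly_cost

print(f"Monthly cost: ${monthly_cost / 1e9:.2f}B")
print(f"Annual cost: ${annual_cost / 1e9:.2f}B")
print(f"Extension covers roughly {low_months:.1f} to {high_months:.1f} months")
```

In other words, under these simplified assumptions the extension being fought over buys well under a year of the program, around half a percent of what gets spent on far dumber fare.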
As a result, the FCC started informing struggling Americans in April that their broadband bills are all about to jump significantly as the program starts to wind down. There’s apparently going to be a last ditch effort to finance the program in May, but despite the support of broadband providers and a sizeable bipartisan coalition of lawmakers, the funding bill has a long-shot chance at success.
Republican House Speaker Mike Johnson is basically slow-walking the funding bill to death because a key contingent of MAGA types doesn’t want to pay for the program or give the Biden administration a policy win in an election season.
“Because of political gameplay, about 60 million Americans will have to make hard choices between paying for the internet or paying for food, rent, and other utilities, widening the digital divide in this country,” said Gigi Sohn, a former top FCC official. “It’s embarrassing that a popular, bipartisan program with support from nearly half of Congress will end because of politics, not policy.”
If you care about this sort of stuff, I suspect your lawmakers might benefit from a call.
I wasn’t in love with the ACP. It basically involves throwing billions of dollars at telecom giants so they’ll temporarily reduce high broadband prices — that wouldn’t be high in the first place if they hadn’t spent 30 years lobbying (quite successfully) to defang our regulators and crush all competition. Many of these same companies (like Verizon and Charter) exploited the program to upsell users to more expensive tiers.
But given that both Democrats and Republicans are too corrupt to tackle or even acknowledge the real problem (unchecked regional monopoly, stifled competition, and market failure), this was at least some sort of temporary solution. We actively courted low income Americans to the program, got them used to the benefits of affordable internet during a health crisis, then pulled the rug out from under their feet.
It’s a drum I’ve been beating for some time now: the only reason cord-cutting hasn’t led the traditional cable television market into full capitulation has been television rights for live sports broadcasts. While major sports leagues and college conferences have certainly been trending into the streaming market like the rest of traditional television, it’s typically been with baby steps. And, frankly, the fractured nature of the streaming market, with all kinds of niche streaming services jumping into the game, hasn’t helped push this faster either.
So, where does the market stand for these broadcast rights for major sports? We’re about to find out, as the NBA enters a period in which it can broadly negotiate those rights with whomever it desires.
On Monday at 11:59 p.m. ET, the exclusive financial negotiation window between the NBA, ESPN and TNT Sports will officially close, allowing league commissioner Adam Silver and his top lieutenants to talk specific contract details with other potential partners, which, besides Amazon and NBC, could include Google/YouTube, Netflix and Apple.
There will be at least three separate packages, which is the NBA’s preference, but the idea of four has not been ruled out, those briefed on the discussions said.
It will be the distribution of those deals that will be most interesting here. Keep in mind that, like the NFL, the teams themselves in the NBA often have their own local rights deals that will carry the majority of NBA games, but the national games are always a spotlight, particularly when it comes to the playoffs. So, of the three or four major deals that get signed for nationally televised games, will the emphasis be placed on the streaming market or on traditional cable television?
It’s likely to be a combination of both. NBC, in particular, will be of interest, given that it can pair its traditional broadcast channel with its Peacock streaming service. Disney is in a similar place, being able to offer up ABC, ESPN, and its streaming services, such as Disney Plus. But that doesn’t mean that pure streaming services are out of the running.
The notion that a pure streamer, like Amazon, could have significant games, including conference finals and perhaps even the NBA Finals at some point over the life of a long-term deal is a possibility, according to executives briefed on the NBA’s discussions.
The NBA will broach the idea of partnering with ESPN, Amazon, Apple, Google/YouTube TV — maybe more than one of them — to potentially offer local games direct to consumers.
What’s important here is that the NBA smartly gave itself the full range of options on its licensing menu. In the last round of rights deals, the league organized it such that all of these rights agreements co-terminate after the ’25 season.
Meaning that whatever arrangement the league comes up with, it’s going to be a fascinating view into how a major professional sports league thinks about the streaming and cable television markets.
When it comes to Backpage.com, the truth has been buried beneath a mountain of political grandstanding and legal theatrics.
There’s the public narrative about Backpage.com, and then there’s the real story. We’ve discussed this before, but if you’re not familiar with the details, you might think that Backpage was a huge sex trafficking operation that somehow dodged law enforcement for years. It was, the story goes, only finally taken down in 2018 thanks to the passage of a new law, FOSTA.
The problem is literally all of that is wrong. Backpage actually worked hard to stop trafficking on its site, including working with law enforcement to bring traffickers to justice. This went so far that the DOJ talked about what a good partner Backpage was. Indeed, so frequently lost in all of this is that, while everyone agrees how problematic sex trafficking can be, Backpage was actually helping law enforcement attack that problem. And (eventually) got punished for it.
A bunch of politicians really, really wanted Backpage’s head on a pike, led by now-Vice President Kamala Harris, but plenty of others joined in as well (both Democrats and Republicans). Harris had tried, and failed, to bring charges against the site while she was California’s Attorney General.
Eventually, the company and its execs faced federal charges, and the feds seized the entire site. The timing of the arrests and seizure was weird. For months, politicians had been pushing a new law, FOSTA, where basically all of the language in support of the bill was about how it was needed to take down Backpage. You would think that its only purpose was to stop Backpage, if you listened to the politicians pushing the law.
But the feds seized Backpage and arrested its execs under existing legal authorities, not FOSTA, which had not even been signed into law at the time of the arrests (it had passed Congress, though, and just awaited a presidential signature).
Since then, the DOJ has seemed entirely focused on litigating the entire case as if the false narrative was true — and to do whatever possible to block the arrested executives from presenting the actual story. It’s almost as if they recognize they fucked up, but refuse to admit it. This included suppressing exculpatory evidence and trying to block the execs from using some pretty important defenses around the First Amendment.
The first trial of the execs ended in a mistrial after the DOJ couldn’t resist falsely claiming that the execs were involved in child sex trafficking. The judge had already made it clear to them that the case had nothing to do with child sex trafficking (it had been cut back to dealing with just a few examples of prostitution ads).
Instead of leaving it alone, the DOJ strung things out and had a second trial (even after Jim Larkin, one of the execs, took his own life days before the trial was set to start). That didn’t go well either. A jury only found the other main exec, Michael Lacey, guilty of one single (questionable) charge of money laundering, acquitted him on another charge, and couldn’t agree on dozens of other charges. In other words, on their second shot at it, they still were unable to get Lacey on any prostitution or sex trafficking claims.
Again, the DOJ could have left it alone, but earlier this year it announced that it intended to try Lacey a third time later this year.
With regard to Mr. Lacey, the Court finds there is an insufficiency of trial evidence supporting a direct theory of liability for any of the Travel Act charges brought against him. Specifically, the Government did not put forth sufficient evidence of Mr. Lacey’s specific intent to facilitate the promotion of the posters or prostitution businesses comprising Counts 2 through 51, as that mens rea is defined by the Ninth Circuit. Though the Government put forth some evidence that Mr. Lacey had knowledge that Backpage’s Adult section evolved into an on-line prostitution advertising platform operating in states that outlaw prostitution and that he extraordinarily benefited financially therefrom, there was no evidence that he was involved with developing or overseeing Backpage’s moderation or aggregation practices for the ads in Counts 2–51.
He still faces trial on the remaining counts (and sentencing of the one count he was found guilty of last year).
The longer this case goes on, the worse everyone pushing for it has looked, from the politicians to the media and activist groups who supported this whole process.
But, of course, there will be no consequences for any of them.
In the end, the prosecution of Backpage.com and its executives stands as a cautionary tale about the dangers of moral panic and political grandstanding. It’s a story of how easily the truth can be obscured when those in power are more interested in scoring points than seeking justice. And it’s a reminder of how vital it is that we, as a society, remain vigilant against these abuses. Because if we don’t, this kind of thing is going to happen again and again.
The European Union has spent a few years trying to break encryption. The results have been, at best, mixed. Of course, the EU government claims it’s not actually interested in breaking encryption. Instead, it hides its intentions behind phrases like “client-side scanning” and “chat control.” But it all means the same thing: purposefully weakening or breaking encryption to allow the government to monitor communications.
Client-side scanning would necessitate the removal of one end of end-to-end encryption. Monitoring communications for “chat control” would mean the same thing. Fortunately, plenty of EU members disagreed with these proposals, finally forcing the EU Commission to drop its anti-encryption demands… for now.
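The point that client-side scanning “removes one end” of end-to-end encryption can be made concrete with a toy sketch. Everything here is illustrative (the function names are made up, and the SHA-256-based stream cipher is a stand-in, not real cryptography): the key observation is that the scan runs on the device, on the plaintext, before encryption ever happens, so the strength of the cipher becomes irrelevant.

```python
import hashlib

# Toy stand-in stream cipher (NOT real cryptography), just to show
# where client-side scanning sits relative to encryption.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

REPORTED = []  # what a scanning authority would receive

def client_side_scan(plaintext: bytes) -> None:
    # The scan runs BEFORE encryption, on the device itself --
    # the plaintext is exposed no matter how strong the cipher is.
    if b"flagged" in plaintext:
        REPORTED.append(plaintext)

def send_with_scanning(key: bytes, message: bytes) -> bytes:
    client_side_scan(message)  # one "end" of E2EE is now an observer
    return keystream_xor(key, message)

key = b"shared-secret"
ciphertext = send_with_scanning(key, b"this text is flagged content")

# The transport still looks end-to-end encrypted...
assert keystream_xor(key, ciphertext) == b"this text is flagged content"
# ...but the plaintext already left the "end" before encryption happened.
assert REPORTED == [b"this text is flagged content"]
```

This is why the distinction officials draw between “scanning” and “breaking encryption” is rhetorical: the ciphertext is untouched, but the confidentiality guarantee the encryption was supposed to provide is already gone.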
As the EU government moves on from its failed proposal, it’s undergoing the usual stages of grief. First and foremost is denial — something often expressed in op-eds and formal statements that are short on facts or logic, but long on strawmen and cognitive dissonance.
But there’s still a desire to undermine encryption — one that simply won’t go away just because several EU member nations are against it. And here’s where the cops have decided to insert themselves, even though most EU citizens couldn’t care less about law enforcement’s thoughts on policy issues. I mean, they’re always the same sort of thing: less accountability, more power, fewer rights for citizens, etc.
Unfortunately, the ruling class tends to listen to cops because cops are part of the conjoined triangles (or whatever) that ensure people in power retain their power while being protected from the people being ruled. What works for cops works for the rest of the government, and that’s why this statement carries some weight, even if it’s exactly the sort of thing you’d expect to roll out of a cop’s mouth.
European Police Chiefs are calling for industry and governments to take urgent action to ensure public safety across social media platforms.
Privacy measures currently being rolled out, such as end-to-end encryption, will stop tech companies from seeing any offending that occurs on their platforms. It will also stop law enforcement’s ability to obtain and use this evidence in investigations to prevent and prosecute the most serious crimes such as child sexual abuse, human trafficking, drug smuggling, homicides, economic crime and terrorism offences.
The declaration, published today and supported by Europol and the European Police Chiefs, comes as end-to-end encryption has started to be rolled out across Meta’s messenger platform.
Well, ensuring public safety often takes the form of securing people’s private communications, i.e., the end-to-end encryption this formal statement rails against. I’m sure the EU police chiefs and the people who work for them appreciate the security enabled by encryption, whether it’s protecting their devices from the curiosity of interlopers or shielding their communications from public view.
But what works best for cops can’t be extended to the general public because, unlike cop shops, the public is known to be riddled with criminals. (Yes, I know. But I’m trying my best to explain this from the perspective of law enforcement officials, who would never admit they’re not doing much to keep their own backyards clean, so to speak.)
The letter opens with an admission by the collective of police chiefs that they’re unable to do their jobs unless tech companies do half the work for them.
We, the European Police Chiefs, recognise that law enforcement and the technology industry have a shared duty to keep the public safe, especially children. We have a proud partnership of complementary actions towards that end. That partnership is at risk.
Two key capabilities are crucial to supporting online safety.
First, the ability of technology companies to reactively provide to law enforcement investigations – on the basis of a lawful authority with strong safeguards and oversight – the data of suspected criminals on their service. This is known as ‘lawful access’.
We’ll pause here for a moment because Europol has already given us plenty to work with. First, there’s the invocation of the “children,” which is always a leading indicator of disingenuous arguments. If you say you’re doing it for the kids, you can get away with all kinds of irrationality, because who in their right mind would argue against someone who claims to be deeply interested in protecting children from criminals?
Then there’s the phrase “lawful access,” which means nothing more than cops believing they should have access to any potential evidence just because they have a warrant. This supposed hole in law enforcement efficiency is blamed on the advent of encryption, even though criminals have been destroying or hiding evidence for years. Yet no law enforcement official has ever sent out a statement demanding that the manufacturers of fire pits, paper shredders, or bridges over bodies of water stop making it so easy for criminals to hide evidence from investigators.
Moving on, there’s more of the same stuff for a couple of paragraphs. It’s the police chiefs griping that evidence is now suddenly out of reach and that’s because tech companies won’t create encryption backdoors or just refuse to deploy encryption in the first place. More is said about crimes against children, terrorism, human trafficking, drug smuggling, and (LOL) “economic crime,” the last of which is something no government body is truly serious about because it would require prosecuting people who give them massive amounts of money in exchange for government goods and services. If you’ve heard these arguments once, you’ve heard them a thousand times. We won’t rehash them here.
But we will quote the statement again because it goes back to the “we’ve never had trouble obtaining evidence before this exact point in time” well, even though that’s clearly false.
Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish. They should not now. We cannot let ourselves be blinded to crime. We know from the protections afforded by the darkweb how rapidly and extensively criminals exploit such anonymity.
OK, chief. I don’t remember any mobs (flash or pitchfork-wielding) wandering into neighborhoods to destroy fireplaces, paper shredders, or toilets because those areas might be “beyond the reach of law enforcement” when it comes to ensuring evidence is always accessible to investigators. And they’ve never taken down phone lines or slashed postal vehicles’ tires just because criminals might use those methods to “communicate safely.”
Our societies have always understood criminals will have options, some of which are beyond the reach of law enforcement. They don’t want to see those options destroyed or undermined just because criminals also happen to use the same options non-criminals use.
Then there’s the unneeded swipe at “anonymity,” which suggests Europol’s top cops think online anonymity is problematic in and of itself — even the stuff that exists out in the open away from the depths of the “dark web.”
Finally, the cops of Europe reach the “nerd harder” point of their message — one that claims to be conciliatory but is anything but:
We are committed to supporting the development of critical innovations, such as encryption, as a means of strengthening the cyber security and privacy of citizens. However, we do not accept that there need be a binary choice between cyber security or privacy on the one hand and public safety on the other. Absolutism on either side is not helpful. Our view is that technical solutions do exist; they simply require flexibility from industry as well as from governments.
Whenever government entities pushing new forms of intrusion start talking about “flexibility,” that trait should only apply to those on the receiving end of the imposition. Governments will never back down. It’s always the other side that’s expected to compromise their standards and ethics.
This statement isn’t going to move the needle for Meta or others offering the same level of security for their users. But it may light a small fire under the asses of enemies of encryption in the European government. And that’s the real danger of this collection of clichés presenting itself as a principled stance on the issue.
Effective Altruism (EA) is typically explained as a philosophy that encourages individuals to do the “most good” with their resources (money, skills). Its “effective giving” arm was marketed as directing donations to evidence-based charities serving the global poor. The Effective Altruism philosophy was formally crystallized as a social movement with the launch of the Centre for Effective Altruism (CEA) in February 2012 by Toby Ord, Will MacAskill, Nick Beckstead, and Michelle Hutchinson. Two other organizations, “Giving What We Can” (GWWC) and “80,000 Hours,” were brought under CEA’s umbrella, and the movement became officially known as Effective Altruism.
Effective Altruists (EAs) were praised in the media as “charity nerds” looking to maximize the number of “lives saved” per dollar spent, with initiatives like providing anti-malarial bed nets in sub-Saharan Africa.
If this movement sounds familiar to you, it’s thanks to Sam Bankman-Fried (SBF). With FTX, Bankman-Fried was attempting to fulfill what William MacAskill taught him about Effective Altruism: “Earn to give.” In November 2023, SBF was convicted of seven fraud charges (stealing $10 billion from customers and investors). In March 2024, SBF was sentenced to 25 years in prison. Since SBF was one of the largest Effective Altruism donors, public perception of this movement has declined due to his fraudulent behavior. It turned out that the “Earn to give” concept was susceptible to the “Ends justify the means” mentality.
In 2016, the main funder of Effective Altruism, Open Philanthropy, designated “AI Safety” a priority area, and the leading EA organization 80,000 Hours declared artificial intelligence (AI) existential risk (x-risk) is the world’s most pressing problem. It looked like a major shift in focus and was portrayed as a “mission drift.” It wasn’t.
What looked to outsiders – in the general public, academia, media, and politics – as a “sudden embrace of AI x-risk” was a misconception. The confusion existed because many people were unaware that Effective Altruism has always been devoted to this agenda.
Effective Altruism’s “brand management”
Since its inception, Effective Altruism has been obsessed with the existential risk (x-risk) posed by artificial intelligence. But because the movement’s leaders recognized this focus could be perceived as “confusing for non-EAs,” they decided to solicit donations and recruit new members through different causes like poverty and “sending money to Africa.”
When the movement was still small, its members planned this bait-and-switch tactic in plain sight (in old forum discussions).
A dissertation by Mollie Gleiberman methodically analyzes the distinction between the “public-facing EA” and the inward-facing “core EA.” Among the study findings: “From the beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk).”
“EA’s key intellectual architects were all directly or peripherally involved in transhumanism, and the global poverty angle was merely a stepping stone to rationalize the progression from a non-controversial goal (saving lives in poor countries) to transhumanism’s far more radical aim,” explains Gleiberman. It was part of their “brand management” strategy to conceal the latter.
The public-facing discourse of “giving to the poor” (in popular media and books) was a mirage designed to get people into the movement and then lead them to the “core EA,” x-risk, which is discussed in inward-facing spaces. The guidance was to promote the publicly-facing cause and keep quiet about the core cause. Influential Effective Altruists explicitly wrote that this was the best way to grow the movement.
In public-facing/grassroots EA, the target recipients of donations, typically understood to be GiveWell’s top recommendations, are causes like AMF – Against Malaria Foundation. “Here, the beneficiaries of EA donations are disadvantaged people in the poorest countries of the world,” says Gleiberman. “In stark contrast to this, the target recipients of donations in core EA are the EAs themselves. Philanthropic donations that support privileged students at elite universities in the US and UK are suddenly no longer one of the worst forms of charity but one of the best. Rather than living frugally (giving up a vacation/a restaurant) so as to have more money to donate to AMF, providing such perks is now understood as essential for the well-being and productivity of the EAs, since they are working to protect the entire future of humanity.”
“We should be kind of quiet about it in public-facing spaces”
Let the evidence speak for itself. The following quotes are from three community forums where Effective Altruists converse with each other: Felicifia (inactive since 2014), LessWrong, and EA Forum.
On June 4, 2012, Will Crouch (it was before he changed his last name to MacAskill) had already pointed out (on the Felicifia forum) that “new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation.”
On November 10, 2012, Will Crouch (MacAskill) wrote on the LessWrong forum that “it seems fairly common for people who start thinking about effective altruism to ultimately think that x-risk mitigation is one of or the most important cause area.” In the same message, he also argued that “it’s still a good thing to save someone’s life in the developing world,” however, “of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.”
In 2011, a leader of the EA movement, an influential GWWC leader/CEA affiliate, who used a “utilitymonster” username on the Felicifia forum, had a discussion with a high-school student about the “High Impact Career” (HIC, later rebranded to 80,000 Hours). The high schooler wrote: “But HIC always seems to talk about things in terms of ‘lives saved,’ I’ve never heard them mentioning other things to donate to.” Utilitymonster replied: “That’s exactly the right thing for HIC to do. Talk about ‘lives saved’ with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk.”
Another influential figure, Eliezer Yudkowsky, wrote on LessWrong in 2013: “I regard the non-x-risk parts of EA as being important only insofar as they raise visibility and eventually get more people involved in, as I would put it, the actual plot.”
As a comment to a Robert Wiblin post in 2015, Eliezer Yudkowsky clarified: “As I’ve said repeatedly, xrisk cannot be the public face of EA, OPP [OpenPhil] can’t be the public face of EA. Only ‘sending money to Africa’ is immediately comprehensible as Good and only an immediately comprehensible Good can make up for the terrible PR profile of maximization or cause neutrality. And putting AI in there is just shooting yourself in the foot.”
Rob Bensinger, the research communications manager at MIRI (and prominent EA movement member), argued in 2016 for a middle approach: “In fairness to the ‘MIRI is bad PR for EA’ perspective, I’ve seen MIRI’s cofounder (Eliezer Yudkowsky) make the argument himself that things like malaria nets should be the public face of EA, not AI risk. Though I’m not sure I agree […]. If we were optimizing for having the right ‘public face’ I think we’d be talking more about things that are in between malaria nets and AI […] like biosecurity and macroeconomic policy reform.”
Scott Alexander (Siskind) is the author of the influential rationalist blog “Slate Star Codex” and “Astral Codex Ten.” In 2015, he acknowledged that he supports the AI-safety/x-risk cause area, but believes Effective Altruists should not mention it in public-facing material: “Existential risk isn’t the most useful public face for effective altruism – everyone including Eliezer Yudkowsky agrees about that.” In the same year, 2015, he also wrote: “Several people have recently argued that the effective altruist movement should distance itself from AI risk and other far-future causes lest it make them seem weird and turn off potential recruits. Even proponents of AI risk charities like myself agree that we should be kind of quiet about it in public-facing spaces.”
In 2014, Peter Wildeford (then Hurford) published a conversation about “EA Marketing” with EA communications specialist Michael Bitton. Peter Wildeford is the co-founder and co-CEO of Rethink Priorities and Chief Advisory Executive at IAPS (Institute for AI Policy and Strategy). The following segment was about why most people will not be real Effective Altruists (EAs):
“Things in the ea community could be a turn-off to some people. While the connection to utilitarianism is ok, things like cryonics, transhumanism, insect suffering, AGI, eugenics, whole brain emulation, suffering subroutines, the cost-effectiveness of having kids, polyamory, intelligence-enhancing drugs, the ethics of terraforming, bioterrorism, nanotechnology, synthetic biology, mindhacking, etc. might not appeal well.
There’s a chance that people might accept the more mainstream global poverty angle, but be turned off by other aspects of EA. Bitton is unsure whether this is meant to be a reason for de-emphasizing these other aspects of the movement. Obviously, we want to attract more people, but also people that are more EA.”
“Longtermism is a bad ‘on-ramp’ to EA,” wrote a community member on the Effective Altruism Forum. “AI safety is new and complicated, making it more likely that people […] find the focus on AI risks to be cult-like (potentially causing them to never get involved with EA in the first place).”
Jan Kulveit, who leads the European Summer Program on Rationality (ESPR), shared on Facebook in 2018: “I became an EA in 2016, and at the time, while a lot of the ‘outward-facing’ materials were about global poverty etc., with notes about AI safety or far future at much less prominent places. I wanted to discover what is the actual cutting-edge thought, went to EAGx Oxford and my impression was the core people from the movement mostly thought far future is the most promising area, and xrisk/AI safety interventions are top priority. I was quite happy with that […] However, I was somewhat at unease that there was this discrepancy between a lot of outward-facing content and what the core actually thinks. With some exaggeration, it felt like the communication structure is somewhat resembling a conspiracy or a church, where the outward-facing ideas are easily digestible, like anti-malaria nets, but as you get deeper, you discover very different ideas.”
Prominent EA community member and blogger Ozy Brennan summarized this discrepancy in 2017: “A lot of introductory effective altruism material uses global poverty examples, even articles which were written by people I know perfectly fucking well only donate to MIRI.”
As Effective Altruists engaged more deeply with the movement, they were encouraged to shift to AI x-risk.
“My perception is that many x-risk people have been clear from the start that they view the rest of EA merely as a recruitment tool to get people interested in the concept and then convert them to Xrisk causes.” (Alasdair Pearce, 2015).
“I used to work for an organization in EA, and I am still quite active in the community. 1 – I’ve heard people say things like, ‘Sure, we say that effective altruism is about global poverty, but — wink, nod — that’s just what we do to get people in the door so that we can convert them to helping out with AI/animal suffering/(insert weird cause here).’ This disturbs me.” (Anonymous#23, 2017).
“In my time as a community builder […] I saw the downsides of this. […] Concerns that the EA community is doing a bait-and-switch tactic of ‘come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.’ […] Personally feeling uncomfortable because it seemed to me that my 80,000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else.” (weeatquince [Sam Hilton], 2020).
Austin Chen, the co-founder of Manifold Markets, wrote on the Effective Altruism Forum in 2020: “On one hand, basically all the smart EA people I trust seem to be into longtermism; it seems well-argued and I feel a vague obligation to join in too. On the other, the argument for near-term evidence-based interventions like AMF [Against Malaria Foundation] is what got me […] into EA in the first place.”
In 2019, EA Hub published a guide: “Tips to help your conversation go well.” Among tips like “Highlight the process of EA” and “Use the person’s interest,” there was “Preventing ‘Bait and Switch.’” The post acknowledged that “many leaders of EA organizations are more focused on community building and the long-term future than on animal advocacy and global poverty.” Therefore, to avoid the perception of a bait-and-switch, it recommended mentioning AI x-risk at some point:
“It is likely easier, and possibly more compelling, to talk about cause areas that are more widely understood and cared about, such as global poverty and animal welfare. However, mentioning only one or two less controversial causes might be misleading, e.g. a person could become interested through evidence-based effective global poverty interventions, and feel misled at an EA event mostly discussing highly speculative research into a cause area they don’t understand or care about. This can feel like a “bait and switch”—they are baited with something they care about and then the conversation is switched to another area. One way of reducing this tension is to ensure you mention a wide range of global issues that EAs are interested in, even if you spend more time on one issue.”
Oliver Habryka is influential in EA as a fund manager for the LTFF, a grantmaker for the Survival and Flourishing Fund, and the leader of the LessWrong/Lightcone Infrastructure team. He claimed that the only reason EA should continue supporting non-longtermist efforts is to preserve the public’s perception of the movement:
“To be clear, my primary reason for why EA shouldn’t entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn’t the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.”
The structure of Effective Altruism rhetoric
The researcher Mollie Gleiberman explains EA’s “strategic ambiguity”: “EA has multiple discourses running simultaneously, using the same terminology to mean different things depending on the target audience. The most important aspect of this double rhetoric, however, is not that it maintains two distinct arenas of understanding, but that it also serves as a credibility bridge between them, across which movement recruits (and, increasingly, the general public) are led in incremental steps from the less controversial position to the far more radical position.”
When Effective Altruists talked in public about “doing good,” “helping others,” “caring about the world,” and pursuing “the most impact,” the public understanding was that it meant eliminating global poverty and helping the needy and vulnerable. Internally, “doing good” and the “most pressing problems” were understood as working to mainstream core EA ideas like extinction from unaligned AI.
In communications with “core EAs,” “the initial focus on global poverty is explained as merely an example used to illustrate the concept – not the actual cause endorsed by most EAs.”
Jonas Vollmer has been involved with EA since 2012 and held positions of considerable influence in terms of allocating funding (EA Foundation/CLR Fund, CEA EA Funds). In 2018, he candidly explained when asked about his EA organization “Raising for Effective Giving” (REG): “REG prioritizes long-term future causes, it’s just much easier to fundraise for poverty charities.”
The entire point was to identify whatever messaging works best to produce the outcomes that movement founders, thought leaders, and funders actually wished to see. It was all about marketing to outsiders.
The “Funnel Mode”
According to the Centre for Effective Altruism, “When describing the target audience of our projects, it is useful to have labels for different parts of the community.”
The levels are: Audience, followers, participants, contributors, core, and leadership.
In 2018, in a post entitled The Funnel Mode, CEA elaborated that “Different parts of CEA operate to bring people into different parts of the funnel.”
At first, CEA concentrated outreach on the top of the funnel through extensive popular media coverage, including MacAskill's Quartz column and his book 'Doing Good Better,' Singer's TED talk, and Singer's 'The Most Good You Can Do.' The idea was to create a broad base of poverty-focused, grassroots Effective Altruists to help maintain momentum and legitimacy, and to act as an initial entry point to the funnel, from which members sympathetic to core aims could be recruited.
The 2017 edition of the movement’s annual survey of participants (conducted by the EA organization Rethink Charity) noted that this is a common trajectory: “New EAs are typically attracted to poverty relief as a top cause initially, but subsequently branch out after exploring other EA cause areas. An extension of this line of thinking credits increased familiarity with EA for making AI more palatable as a cause area. In other words, the top of the EA outreach funnel is most relatable to newcomers (poverty), while cause areas toward the bottom of the funnel (AI) seem more appealing with time and further exposure.”
According to the Centre for Effective Altruism, that's the ideal route. It wrote in 2018: "Trying to get a few people all the way through the funnel is more important than getting every person to the next stage."
The magnitude and implications of Effective Altruism, says Gleiberman, “cannot be grasped until people are willing to look at the evidence beyond EA’s glossy front cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.”
Key takeaways
– Public-facing EA vs. core EA
Among public-facing/grassroots EAs (audience, followers, participants):
The main focus is effective giving à la Peter Singer.
The main cause area is global health, targeting the 'distant poor' in developing countries.
Donors support organizations doing direct anti-poverty work.
Among core/highly engaged EAs (contributors, core, leadership):
The main focus is x-risk/longtermism à la Nick Bostrom and Eliezer Yudkowsky.
The main cause areas are x-risk, AI safety, 'global priorities research,' and EA movement-building.
Donors support highly engaged EAs in building career capital, boosting their productivity, and/or starting new EA organizations, as well as research and policy-making/agenda-setting.
With AI doomers intensifying their attacks on the open-source community, it becomes clear that this group’s “doing good” is other groups’ nightmare.
– Effective Altruism was a Trojan horse
It’s now evident that “sending money to Africa,” as Eliezer Yudkowsky acknowledged, was never the “actual plot.” Or, as Will MacAskill wrote in 2012, “alleviating global poverty is dwarfed by existential risk mitigation.” The Effective Altruism founders planned – from day one – to mislead donors and new members in order to build the movement’s brand and community.
Its core leaders prioritized the x-risk agenda and treated global poverty alleviation merely as an initial step toward converting new recruits to longtermism/x-risk, which also happened to be how they convinced more people to help make them rich.
This needs to be investigated further.
Gleiberman observes that “The movement clearly prioritizes ‘longtermism’/AI-safety/x-risk, but still wishes to benefit from the credibility that global poverty-focused EA brings.” We now know it was a PR strategy all along. So, no. They do not deserve this kind of credibility.
Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and the author of the book "The TECHLASH and Tech Crisis Communication" and the "AI Panic" newsletter.