So on Monday you probably saw that Apple announced it was more tightly integrating “AI” into its mobile operating system, both via a suite of AI-powered tools dubbed Apple Intelligence, and via tighter AI integration with its Siri voice assistant. It’s not that big of a deal, and reflects Apple’s more cautious approach to AI after Google told millions of customers to eat rocks and glue.
Apple was quick to point out that the processing for these features would happen on device to (hopefully) protect privacy. If Apple’s own systems can’t handle user inquiries, some of them may be offloaded to OpenAI’s ChatGPT, an arrangement that puts a little distance between Apple and any error-prone fabulism:
“Apple struck a deal with OpenAI, the maker of ChatGPT, to support some of its A.I. capabilities. Requests that its system can’t field will be directed to ChatGPT. For example, a user could say that they have salmon, lemon and tomatoes and want help planning dinner with those ingredients. Users would have to choose to direct those requests to ChatGPT, ensuring that they know that the chatbot — not Apple — is responsible if the answers are unsatisfying.”
Enter Elon Musk, who threw a petulant hissy fit after he realized that Apple had decided to partner with OpenAI instead of his half-cooked and more racist Grok pseudo-intelligence system. He took to ExTwitter to (falsely) claim that Apple’s OS-level ChatGPT integration posed such a dire privacy threat that iPhones would soon be banned from his companies, and that visitors would have to leave theirs in a copper-lined Faraday cage:
This is, of course, a bunch of meaningless gibberish not actually based on anything technical. Musk just made up some security concerns to malign a competitor. The iPhone ban will likely never happen. And to Luddites, his reference to a Faraday cage certainly sounds smart.
Here’s the thing: nearly every app on your phone and every device in your home is tracking your every movement, choice, and behavior in granular detail, then selling that information to an international cabal of largely unregulated and extremely dodgy data brokers. Brokers that then turn around and sell that information to any nitwit with two nickels to rub together, including foreign intelligence.
So kind of like the TikTok hysteria, the idea that Apple’s new partnership with OpenAI poses some unique security and privacy threat above and beyond our existing total lack of any meaningful privacy whatsoever in a country too corrupt to pass an internet privacy law is pure performance.
Keep in mind that Musk’s companies have a pretty well established track record of playing extremely fast and loose with consumer privacy themselves. Automakers are generally some of the worst companies in tech when it comes to privacy and security, and according to Mozilla, Tesla is the worst of the worst. So the idea that Musk was engaging in any sort of good faith contemplation of privacy is simply false.
Still, it didn’t take long before the click-hunting press turned Musk’s meaningless comments into an entire news cycle. Resources that could have been spent on any number of meaningful stories were instead focused on platforming a throwaway comment by a fabulist, one that meant literally nothing:
I’m particularly impressed with the Forbes headline, which pushes two falsehoods in one headline: that the nonexistent ban hurt Apple stock (it didn’t), while implying the ban already happened.
I’m unfortunately contributing to the news cycle noise to make a different point: this happens with every single Musk brain fart now, regardless of whether the comment has any meaning or importance. And it needs to stop if we’re to preserve what’s left of our collective sanity.
Journalists are quick to insist that it’s their noble responsibility to cover the comments of important people. But journalism is about informing and educating the public, which isn’t accomplished by redirecting limited journalistic resources to platforming bullshit that means nothing and will result in nothing meaningful. All you’ve done is make a little money wasting people’s time.
U.S. newsrooms are so broadly conditioned to chase superficial SEO clickbait ad engagement waves they’ve tricked themselves into thinking these kinds of hollow news cycles serve an actual function. But it might be beneficial for the industry to do some deep introspection into the harmful symbiosis it has forged with terrible people and bullshit (see: any of a million recent profiles of white supremacists).
There are a million amazing scientific developments or acts of fatal corporate malfeasance that every single day go uncovered or under-covered in this country because we’ve hollowed out journalism and replaced it with lazy engagement infotainment.
And despite Musk’s supposed disdain for the press, his circus sideshow has always heavily relied on this media dysfunction. As his stock-fluffing house of cards starts to unravel, he’s had to increasingly rely on gibberish and controversy to distract, and U.S. journalism continues to lend a willing hand.
First it spent fifteen years hyping up Musk’s super-genius engineering mythology, despite mounting evidence that Musk was more of a clever credit-absconding opportunist than any sort of revolutionary thinker. After all those years, the press still treats every belch the man emits as worthy of the deepest analysis, under the pretense that it’s engaging in some sort of heady public service.
The public interest is often served by not covering the fever dreams of obnoxious opportunists, but every part of the media ecosystem is financially incentivized to do the exact opposite. And instead of any sort of introspection into the symbiosis the media has formed with absolute bullshit, we’re using badly crafted automation to supercharge all of the sector’s worst impulses at unprecedented new scale.
We’ve noted repeatedly how the primary problem with U.S. media and journalism often isn’t the actual journalists, or even the sloppy automation being used to cut corners; it’s the terrible, trust fund brunchlords that fail upwards into positions of power. The kind of owners and managers who, through malice or sheer incompetence, turn the outlets they oversee into either outright propaganda mills (Newsweek), or money-burning, purposeless mush (Vice, Buzzfeed, The Messenger, etc., etc.)
Very often these collapses are framed with the narrative that doing journalism online somehow simply can’t be profitable; something quickly disproven every time a group of journalists go off to start their own media venture without a useless executive getting outsized compensation and setting money on fire (see: 404 Media and countless other successful worker-owned journalistic ventures).
Of course these kinds of real journalistic outlets still have to scrap and fight for every nickel. At the same time, there’s just an unlimited amount of money available if you want to participate in the right wing grievance propaganda engagement economy, telling young white males that all of their very worst instincts are correct (see: Rogan, Taibbi, Rufo, Greenwald, Tracey, Tate, Peterson, etc. etc. etc. etc.).
One key player in this far right delusion farm, failed presidential opportunist Vivek Ramaswamy, recently tried to ramp up his own make-believe efforts to “fix journalism.” He did so by purchasing an 8 percent stake in what’s left of Buzzfeed after it basically gave up on trying to do journalism last year.
Ramaswamy’s demands are silly toddler gibberish, calling for the outlet to pivot to video and hire such intellectual heavyweights as Tucker Carlson and Aaron Rodgers:
“Mr. Ramaswamy is pushing BuzzFeed to add three new members to its board of directors, to hone its focus on audio and video content and to embrace “greater diversity of thought,” according to a copy of his letter shared with The New York Times.”
By “greater diversity of thought,” he means pushing facts-optional right wing grievance porn and propaganda pretending to be journalism, in a bid to further distract the public from issues of substance, and fill American heads with pudding.
But it sounds like Ramaswamy couldn’t even do that successfully. For one thing, Buzzfeed simply isn’t relevant as a news company any longer. Gone is the real journalism peppered between cutesy listicles, replaced mostly with mindless engagement bullshit. For another, Buzzfeed CEO Jonah Peretti (and affiliates) still hold 96 percent of the Class B stock, giving them 50 times the voting power of Ramaswamy’s shares.
So as Elizabeth Lopatto at The Verge notes, Ramaswamy is either trying to goose and then sell his stock, or is engaging in a hollow and performative PR exercise where he can pretend that he’s “fixing liberal media.” Or both. The entire venture is utterly purposeless and meaningless:
“You’ve picked Buzzfeed because the shares are cheap, and because you have a grudge against a historically liberal outlet. It doesn’t matter that Buzzfeed News no longer exists — you’re still mad that it famously published the Steele dossier and you want to replace a once-respected, Pulitzer-winning brand with a half-assed “creators” plan starring Tucker Carlson and Aaron Rodgers. Really piss on your enemies’ graves, right, babe?”
While Ramaswamy’s bid is purely decorative, it, of course, was treated as a very serious effort to “fix journalism” by other pseudo-news outlets like the NY Post, The Hill, and Fox Business. It’s part of the broader right wing delusion that the real problem with U.S. journalism isn’t that it’s improperly financed and broadly mismanaged by raging incompetents, but that it’s not dedicated enough to coddling wealth and power. Or telling terrible, ignorant people exactly what they want to hear.
Of course none of this is any dumber than what happens in the U.S. media sector every day, as the Vice bankruptcy or the $50 million Messenger implosion so aptly illustrated. U.S. journalism isn’t just dying; the corpses of what remains are being abused by terrible, wealthy puppeteers with no ideas and nothing of substance to contribute (see the postmortem abuse of Newsweek or Sports Illustrated), and in that sense Vivek fits right in.
Just last week, we posted about a thorough debunking of the “mobile phones are bad for kids” argument making the rounds. We highlighted how banning phones can actually do significantly more harm than good. This was based on a detailed article in the Atlantic by UCI psychologist and researcher Candice Odgers, who actually studies this stuff.
As she’s highlighted multiple times, none of the research supports the idea that phones or social media are inherently harmful. In the very small number of cases where there’s a correlation, it often appears to be a reverse causal situation:
When associations are found, things seem to work in the opposite direction from what we’ve been told: Recent research among adolescents—including among young-adolescent girls, along with a large review of 24 studies that followed people over time—suggests that early mental-health symptoms may predict later social-media use, but not the other way around.
In other words, the kids who often have both mental health problems and difficulty putting down their phones appear to be turning to their phones because of their untreated mental health issues, and because they don’t have the resources necessary to help them.
Taking away their phones takes away their attempt to find help for themselves, and it also takes away a lifeline that many teens have used to actually help themselves: whether it’s in finding community, finding information they need, or otherwise communicating with friends and family. Cutting that off can cause real harm. Again, as Odgers notes:
We should not send the message to families—and to teens—that social-media use, which is common among adolescents and helpful in many cases, is inherently damaging, shameful, and harmful. It’s not. What my fellow researchers and I see when we connect with adolescents is young people going online to do regular adolescent stuff. They connect with peers from their offline life, consume music and media, and play games with friends. Spending time on YouTube remains the most frequent online activity for U.S. adolescents. Adolescents also go online to seek information about health, and this is especially true if they also report experiencing psychological distress themselves or encounter barriers to finding help offline. Many adolescents report finding spaces of refuge online, especially when they have marginalized identities or lack support in their family and school. Adolescents also report wanting, but often not being able to access, online mental-health services and supports.
All adolescents will eventually need to know how to safely navigate online spaces, so shutting off or restricting access to smartphones and social media is unlikely to work in the long term. In many instances, doing so could backfire: Teens will find creative ways to access these or even more unregulated spaces, and we should not give them additional reasons to feel alienated from the adults in their lives.
But still, when there’s a big moral panic to be had, politicians are quick to follow, so banning mobile phones for teens is on the table:
The committee says that without urgent action, more children could be put in harm’s way.
It recommended the next government should work with the regulator, Ofcom, to consult on additional measures, including the possibility of a total ban on smartphones for under-16s or having parental controls installed as a default.
The report notes that mobile phone use has gone up in recent years:
Committee chairman Robin Walker said its inquiry had heard “shocking statistics on the extent of the damage being done to under-18s”.
The report found there had been a significant rise in screen time in recent years, with one in four children now using their phone in a manner resembling behavioural addiction.
Again, most of those studies cover the time when kids were locked down due to COVID, so it’s not at all surprising that their phone usage went up. And, as Odgers has shown, there’s been no actual data suggesting any real or significant causal connection between phone use and mental health problems for kids.
Incredibly, this is happening in the UK, where you’d think the MPs could wander over to Oxford (surely they’re aware of it?) and talk to Andrew Przybylski, who keeps releasing new studies, based on huge data sets, that show no link between phone/internet use and harm. He’s been pumping these out for years. Surely the MPs could be bothered to go take a look?
But, no, it’s easier to ignore the real problem (and the hard societal solutions it would entail) and instead play up the moral panic. Then, they can do something stupidly, dangerously counter-productive like banning phones… and claim victory. Then, when the mental health problems get worse, not better, they can find some other technology to blame, rather than taking a step back and wondering why they’re failing to provide resources to help those dealing with a mental health crisis.
Generally speaking, a private company’s press release is not “news.” If anyone wants to watch companies stroke themselves off in public, there are plenty of sites dedicated to that kink.
If it’s cop tech purveyors seeking to redeem themselves after a bunch of negative press and/or the loss of high-profile government contracts, we should be even more suspicious of “reporting” that simply regurgitates PR rep statements with headlines that suggest this is something the rabble should be paying attention to.
Former Chicago Police Superintendent Eddie Johnson appeared in a video this week supporting ShotSpotter, a technology designed to identify the location of gunfire incidents.
The video was posted on the website saveshotspotter.com, where Johnson emphasized the system’s role in preventing crime in Chicago neighborhoods.
In this video, the former police official claims that ShotSpotter reliably detects gunshots, that it helped prioritize patrol patterns, and (literally unbelievably) that it made the city safer. (It apparently does not reliably detect gunshots, unless the only experts you ask are those employed by ShotSpotter.)
According to the former CPD official, “you can’t put a price on public safety.” But that’s obviously not true. Budgets have to be passed every year and the price (as it were) of public safety is whatever is thrown in that general direction from year to year by local governments. But very little of that outlay has anything to do with making the public safer — not when it’s being thrown at the Chicago PD and its suite of questionable tech.
In September, the city’s contract with ShotSpotter will expire. This has been prompted by ShotSpotter’s utter uselessness in decreasing gun crime. It’s not just me saying that. It’s also the city’s Inspector General’s Office, which had this to say about the shoddy shot spotting the city’s been paying ShotSpotter to provide:
The CPD data examined by OIG does not support a conclusion that ShotSpotter is an effective tool in developing evidence of gun-related crime.
If it doesn’t work, there’s no reason to keep paying for it. Hence the September contract expiration.
While this “reporting” from Fox 32’s Jenna Carroll does link to the site’s previous reporting on ShotSpotter, it does not actually link to the site where this recording by the former CPD Superintendent is posted. I will link to it so you can see the entirety of it yourself.
This site dedicated to “saving” ShotSpotter in Chicago contains nothing more than a link to a third-party form that allows visitors to express their support of ShotSpotter to Chicago lawmakers, the aforementioned video recorded by Eddie Johnson (the site says nothing about any compensation), and a copyright notice at the bottom of the page:
@All Rights Reserved. SoundThinking, Inc.
It’s not a grassroots effort. Fortunately, it’s not even astroturf. The company behind the site makes it clear right up front (albeit all the way at the bottom of the page, rather than via an “About” page or something more visitors are likely to see) who’s doing this: SoundThinking. That would be ShotSpotter’s new-ish name — one apparently chosen because it had run its own reputation into the ground.
So, this isn’t news. And it’s not even as coherent or content-filled as an average press release. This is just “reporters” fielding emails from ShotSpotter’s PR and deciding there’s nothing wrong with combining fear-mongering with site churn.
A former Chicago police superintendent is leading an effort to keep ShotSpotter, a gunshot detection system, in Chicago as the city’s contract with the technology’s provider is set to expire later this year.
“People are uneducated about what ShotSpotter really is,” former CPD Supt. Eddie Johnson said.
However, unlike Fox, NBC at least has the honesty to add this to its reporting of ShotSpotter’s latest desperation move:
As part of his effort, Johnson has offered his voice to a lobbyist-led website, saveshotspotter.com.
Good to know. Unsurprisingly, the Fox affiliate’s “reporting” makes no mention of this fact, instead focusing on the “positive” aspects of ShotSpotter — at least as portrayed by supporters of the tech like Eddie Johnson.
And this seems to be as much about the former CPD superintendent as it is about ShotSpotter. ShotSpotter’s tech was rolled out under Johnson’s watch. Four years later, the Office of the Inspector General was calling it useless. These are two entities seeking to rehabilitate their reputations: ShotSpotter and its champion in Chicago, former CPD Superintendent Eddie Johnson. Whatever. Let them try. Just don’t help them by presenting their self-serving efforts as “news.”
Long before TikTok histrionics took root, you might recall that numerous members of Congress spent years freaking out about another Chinese company: telecom equipment maker Huawei.
The argument, made without much in the way of public evidence, was that Huawei was systematically using its network gear to spy on Americans at a massive scale. Congress then proposed a solution: it would require that U.S. telecom operators (large and small) rip out all Huawei equipment from their networks at great expense, then replace it with usually more expensive alternatives.
So in early 2020 Congress passed the Secure And Trusted Communications Act, effectively banning Huawei from U.S. telecom networks. Congress doled out $1.9 billion to rip out and replace Huawei gear, but actually completing the effort is estimated to cost around $5 billion. Instead of finishing the job, Congress did nothing, as the FCC politely pointed out last week.
The costs were significant, especially for smaller telecoms, which may now be forced to withdraw from the program or shut their networks down entirely without additional funding, the FCC wrote:
“Several recipients have recently informed the Commission that they foresee significant consequences that could result from the lack of full funding, including having to shut down their networks or withdraw from the program. Because Reimbursement Program recipients serve many rural and remote areas of the country where they may be the only mobile broadband service provider, a shutdown of all or part of their networks could eliminate the only provider in some regions.”
So basically Congress freaked out about Huawei (without much public evidence), proposed a very expensive solution to address the problem, didn’t fully fund the program, then fell asleep. Their apathy and dysfunction now risk putting some smaller ISPs out of business; ISPs that may be the only broadband provider available in some rural markets. Impressive work all around.
In part because the work of actually doing a coherent job doesn’t much interest an ad-engagement-chasing press. The daily nitty-gritty details of coherent governance aren’t sexy, and (usually) don’t get you on cable TV.
It all aptly demonstrates the often-performative nature of Congress’ hysteria over China. They’ll thrash and flail over some perceived Chinese threat to grab headlines and make U.S. competitors (like Facebook or Cisco) happy, throw out some barely workable solution (like, say, the TikTok ban), then consider their job done. Once the cable networks are no longer interested they’ll just forget about the problem entirely.
Holding social media companies solely responsible for the mental health challenges faced by today’s youth is not only misguided—it’s dangerous. Misdiagnosing the problem means your solutions are going to be actively harmful.
I know that, these days especially, the one thing everyone across the political aisle seems to agree on is that the internet is uniquely harmful for children, and, somehow, this is all “big tech’s fault.” And, yes, we can all point to examples of where internet companies could do better.
But, as we keep pointing out, reality is a lot more complex than the simplistic narrative that “tech is a unique evil and out to get kids.” First off, many of the underlying problems are societal problems, which the internet is merely shining a light on. Those problems are especially tempting for politicians to blame on the internet, precisely because their existence highlights governmental failures that the internet is merely exposing.
Of course, the narrative about “social media addiction” is not actually supported by the facts or data. What the data has shown, repeatedly, is that a very small percentage of users (mainly those dealing with existing mental health issues who don’t have the support or resources they need) may turn to social media instead, and that can be problematic.
And when the tech companies try to study these things in order to fix them, their studies get falsely portrayed as “not doing anything” to fix the problems, making it that much more difficult to get the companies to do any more research to help.
So, the very framing of Siebel Newsom’s complaint is misguided, and she should (maybe) be asking her husband why California isn’t doing enough to support the cohort of students who need mental health resources and aren’t getting them.
The rest of her speech is equally misguided:
She also noted industry efforts to stymie the state’s landmark Age Appropriate Design Code — a law designed to protect children’s online privacy and safety — that has been held up in courts since the governor signed it in 2022.
“We’re sadly being held back by capitalist interests,” the first partner said. “For me, legislation is necessary if the tech companies aren’t going to be more transparent.”
The AADC wasn’t just blocked because of whining, but because the law itself would lead to the suppression of constitutionally protected speech, as the judge clearly explained. And that suppression of constitutionally protected speech could, in many cases, cause real harm, such as by suppressing useful information on mental health and suicide prevention for kids.
But Siebel Newsom doesn’t seem interested in actually understanding what works. She seems to just want political wins for her husband.
I recognize it’s convenient to claim that it’s just “big tech” that pointed out the constitutional flaws of the AADC, but they were just the only ones that could afford the lawsuit. There were plenty of others, myself included, who pointed out just how dangerous a law this was.
In an interview with POLITICO following the panel, Siebel Newsom called tech companies the “Wild West” and spoke to the need to protect children.
This is also nonsense. The “wild west” trope hasn’t been true in more than a decade, but it makes for a convenient scapegoat for people like Siebel Newsom trying to divert attention from the failings of her husband as California’s governor.
For what it’s worth, most of the big tech companies actually supported the AADC. They know that they’re already doing most of what it requires, and also that the law creates a moat that smaller competitors will struggle with. The idea that big tech doesn’t like the AADC may be a convenient narrative for Siebel Newsom to spread, but it’s a myth.
Siebel Newsom, during the panel, also spoke about her experience as a mother to four children between the ages of 8 and 14, who have had their own struggles with social media. At one point, she choked up recounting how the couple had to pull one of their kids out of school because of online bullying.
“Granted, we’re public figures, but what we’re seeing, sadly, are adults coming after our own children online — parents of children, and then the children mimicking it. I actually pulled my daughter from school,” she said. “It’s bad.”
That does sound bad, and I have sympathy for any family dealing with bullying. But bullying predated the internet, and it is something that lots of families and schools have dealt with for years. A CDC study from a few years back found that offline bullying at schools was noticeably more prevalent than online bullying.
Is Siebel Newsom advocating for new laws to punish schools that allowed bullying to happen on campus?
In fact, multiple studies have shown that online bullying has actually been on a massive decline over the last few years. Some attribute that to school lockdowns during COVID-19, but that seems strange, given that interactions between kids increased online due to those lockdowns.
Indeed, I’ve heard from a few researchers suggesting that the biggest success in stopping online bullying among students was simply better education. Schools are starting anti-bullying education programs much earlier, and it’s a bigger focus in curricula, which, at least, appears to be having an impact.
But sure, let’s blame tech for not magically stopping this larger societal issue.
Look, everyone wants to make sure kids are safe and not being bullied. But these politicians with simplistic answers, who immediately blame tech companies, continue to present answers that make them feel good, but do little to deal with the realities and actual complexities of the issues at play.
“AI,” or semi-cooked large language models, are very cool. There’s a world of possibility there, from creativity and productivity tools to scientific research.
But early adoption of AI has been more of a rushed mess driven by speculative VC bros who are more interested in making money off of hype (see: pointless AI badges), or cutting corners (see: journalism), or badly automating already broken systems (see: health insurance) or using it as a bludgeon against labor (also see: journalism and media), than any sort of serious beneficial application.
And a lot of these kinds of folks are absolutely obsessed with putting “AI” into products that don’t need it just to generate hype. Even if the actual use case makes no coherent sense.
We most recently saw this with the Humane AI Pin, which was hyped as some kind of game-changing revelation pre-release, only for reviewers to realize it doesn’t really work, and doesn’t really provide much not already accomplished by the supercomputer sitting in everybody’s pocket. But even that’s not as bad as companies that claim they’re integrating AI — despite doing nothing of the sort.
Like Logitech, which recently released a new M750 wireless mouse it has branded as a “signature AI edition.” But as Ars Technica notes, all it did was rebrand a mouse released in 2022 and add a customizable button:
“I was disappointed to learn that the most distinct feature of the Logitech Signature AI Edition M750 is a button located south of the scroll wheel. This button is preprogrammed to launch the ChatGPT prompt builder, which Logitech recently added to its peripherals configuration app Options+.
That’s pretty much it.”
Ars points to other, similarly pointless ventures, like earbuds with clunky ChatGPT gesture prompt integration or Microsoft’s Copilot button; stuff that only kind of works and nobody actually asked for. It’s basically just an attempt to seem futuristic and cash in on the hype wave without bothering to see if the actual functionality works, or works better than what already exists.
The AI hype cycle isn’t entirely unlike the 5G hype cycle, in that there certainly is interesting and beneficial technology under the hood, but the way it’s being presented or implemented by overzealous marketing types is so detached from reality as to not be entirely coherent.
That creates an association over time in the minds of consumers between the technology and empty bluster, undermining the tech itself and future, actually beneficial use cases.
When bankers and marketing departments took over Silicon Valley, the actual engineers (like Woz) got shoved into the corner, out of sight. We’re now seeing such a severe disconnect between hype and reality that it’s producing a golden age of bullshit artists and actively harming everybody in the chain, including the marketing folks absolutely convinced they’re being exceptionally clever.
Because it sells so very well to a certain percentage of the population, ridiculous people are saying ridiculous things about crime rates in the United States. And, of course, the first place to post this so-called “news” is Fox News.
An independent group of law enforcement officials and analysts claim violent crime rates are much higher than figures reported by the Federal Bureau of Investigation in its 2023 violent crime statistics.
The Coalition for Law Order and Safety released its April 2024 report called “Assessing America’s Crime Crises: Trends, Causes, and Consequences,” and identified four potential causes for the increase in crime in most major cities across the U.S.: de-policing, de-carceration, de-prosecution and politicization of the criminal justice system.
This plays well with the Fox News audience, many of whom are very sure there needs to be a whole lot more law and order, just so long as it doesn’t affect people who literally RAID THE CAPITOL BUILDING IN ORDER TO PREVENT A PEACEFUL TRANSFER OF PRESIDENTIAL POWER FROM HAPPENING.
These people like to hear the nation is in the midst of a criminal apocalypse because it allows them to be even nastier to minorities and even friendlier to cops (I mean, right up until they physically assault them for daring to stand between them and the inner halls of the Capitol buildings).
It’s not an “independent group.” In fact, it’s a stretch to claim there’s anything approaching actual “analysis” in this “report.” This is pro-cop propaganda pretending to be an actual study — one that expects everyone to be impressed by the sheer number of footnotes.
Here’s the thing about the Coalition for Law Order and Safety. Actually, here’s a few things. First off, the name is bad and its creators should feel bad. The fuck does “Law Order” actually mean, with or without the context of the alleged coalition’s entire name?
Second, this “coalition” has no web presence. Perhaps someone with stronger Googling skills may manage to run across a site run by this “coalition,” but multiple searches using multiple parameters have failed to turn up anything that would suggest this coalition exists anywhere outside of the title page of its report [PDF].
Here’s what we do know about this “coalition:” it contains, at most, two coalitioners (sp?). Those would be Mark Morgan, former assistant FBI director and, most recently, the acting commissioner of CBP (Customs and Border Protection) during Trump’s four-year stretch of abject Oval Office failure. (He’s also hooked up with The Federalist and The Heritage Foundation.) The other person is Sean Kennedy, who is apparently an attorney for the “Law Enforcement Legal Defense Fund.” (He also writes for The Federalist.)
At least that entity maintains a web presence. And, as can be assumed by its name, it spends a lot of its time and money ensuring bad cops keep their jobs and fighting against anything that might resemble transparency or accountability. (The press releases even contain exclamation points!)
This is what greets visitors to the Law Enforcement Legal Defense Fund website:
Yep, it’s yet another “George Soros is behind whatever we disagree with” sales pitch. Gotta love a pro-cop site that chooses to lead off with a little of the ol’ antisemitism. This follows shortly after:
Well, duh. But maybe the LELDF should start asking the cops it represents and defends why they’re not doing their jobs. And let’s ask ourselves why we’re paying so much for a public service these so-called public servants have decided they’re just not going to do anymore, even though they’re still willing to collect the paychecks.
We could probably spend hours just discussing these two screenshots and their combination of dog whistles, but maybe we should just get to the report — written by a supposed “coalition,” but reading more like an angry blog post by the only two people actually willing to be named in the PDF.
There are only two aspects of this report that I agree with. First, the “coalition” (lol) is correct that the FBI’s reported crime rates are, at best, incomplete. The FBI recently changed the way it handles crime reporting, which has introduced plenty of clerical issues that numerous law enforcement agencies are still adjusting to.
Participation has been extremely low due to the learning curve, as well as a general reluctance to share pretty much anything with the public. On top of that, the coding of crimes has changed, which means the FBI is still receiving reports that use the old categories alongside reporting that follows the new nomenclature. That blend of old and new potentially muddies crime stats and may paint an inaccurate picture of crime rates across the nation.
The other thing I agree with is the “coalition’s” assertion that criminal activity is under-reported. What I don’t agree with is the cause of this issue, which the copagandists chalk up to “progressive prosecutors” being unwilling to prosecute some crimes and/or bail reform programs making crime consequence-free. I think the real issue is that the public knows how cops will respond to most reported crimes and realizes it’s a waste of their time to report crimes to entities that have gotten progressively worse at solving crime, even as their budget demands and tech uptake continue to increase.
Law enforcement is a job and an extension of government bureaucracy. Things that aren’t easy or flashy just aren’t going to get done. It’s not just a cop problem. It persists anywhere people are employed and (perhaps especially) where people are employed to provide public services to taxpayers.
Those agreements aside, the rest of the report is pure bullshit. It cherry-picks stats, selectively quotes other studies that agree with its assertions, and delivers a bunch of conclusory statements that simply aren’t supported by the report’s contents.
And it engages in the sort of tactics no serious report or study would attempt. It places its conclusions at the beginning of the report, surrounded by black boxes to highlight the authors’ claims, and tagged (hilariously) as “facts.”
Here’s what the authors claim to be facts:
FACT #1: America faces a public safety crisis beset by high crime and an increasingly dysfunctional justice system.
First off, the “public safety crisis” does not exist. Neither does “high crime.” Even if we agree with the authors’ assertions, the crime rates in this country are only slightly above the historical lows we’ve enjoyed for most of the 21st century. It is nowhere near what it used to be, even if (and I’m ceding this ground for the sake of my argument) we’re seeing spikes in certain locations around the country. (I’ll also grant them the “dysfunctional justice system” argument, even though my definition of dysfunction isn’t aligned with theirs. The system is broken and has been for a long time.)
FACT #2: Crime has risen dramatically over the past few years and may be worse than some official statistics claim.
“Dramatically” possibly as in year-over-year in specific areas. “Dramatically” over the course of the past decades? It’s actually still in decline, even given the occasional uptick.
FACT #3: Although preliminary 2023 data shows a decline in many offenses, violent and serious crime remains at highly elevated levels compared to 2019.
Wow, that sounds furious! I wonder what it signifies…? First, the authors admit crime is down, but then they insist crime is actually up, especially when compared to one specific waypoint on the continuum of crime statistics. Man, I’ve been known to cherry-pick stats to back up my assertions, but at least I’ve never (1) limited my cherry-picking to a single year, or (2) pretended my assertions were some sort of study or report backed by a “coalition” of “professionals” and “analysts.” Also: this assertion is pretty much, “This thing that just happened to me once yesterday is a disturbing trend!”
There’s more:
FACT #4: Less than 42% of violent crime and 33% of property crime victims reported the crime to law enforcement.
Even if true (and it probably isn’t), this says more about cops than it says about criminals. When people decide they’re not going to report these crimes, it’s not because they think the criminal justice system as a whole will fail them. It’s because they think the first responders (cops) will fail them. The most likely reason for less crime reporting is the fact that cops are objectively terrible at solving crimes, even the most violent ones.
FACT #5: The American people feel less safe than they did prior to 2020.
First, it depends on who you ask. And second, even if the public does feel this way, it’s largely because of “studies” like this one and “reporting” performed by Fox News and others who love to stoke the “crime is everywhere” fires because it makes it easier to sell anti-immigrant and anti-minority hatred. It has little, if anything, to do with actual crime rates. We’re twice as safe (at least!) as a nation than we were in the 1990s and yet most people are still convinced things are worse than they’ve ever been — a belief they carry from year to year like reverse amortization.
Then we get to the supposed “causes” of all the supposed “facts.” And that’s where it gets somehow stupider. The “coalition” claims this is the direct result of cops doing less cop work due to decreased morale, “political hostility” [cops aren’t a political party, yo], and “policy changes.” All I can say is: suck it up. Sorry the job isn’t the glorious joyride it used to be. Do your job or GTFO. Stop collecting paychecks while harming public safety just because the people you’ve alienated for years are pushing back. Even if this assertion is true (it isn’t), the problem is cops, not society or “politics.”
The authors also claim “decarceration” and “de-prosecution” are part of the problem. Bail reform efforts and prosecutorial discretion have led to fewer people being charged or held without bail. These are good things that are better for society in the long run. Destroying people’s lives simply because they’re suspected of committing a crime creates a destructive cycle that tends to encourage more criminal activity because non-criminal means of income are now that much farther out of reach.
You can tell this argument is bullshit because of who it cites in support of this so-called “finding.” It points to a study released by Paul Cassell and Richard Fowles entitled “Does Bail Reform Increase Crime?” According to the authors it does and that conclusion is supposedly supported by the data pulled from Cook County, Illinois, where bail reform efforts were implemented in 2019.
But the stats don’t back up the paper’s claims. The authors take issue with the county’s “community safety rate” calculations:
The Bail Reform Study reported figures for the number of defendants who “remained crime-free” in both the fifteen months before G.O. 18.8A and the fifteen months after—i.e., the number of defendants who were not charged in Cook County for another crime after their initial bail hearing date. Based on these data, the Study concluded that “considerable stability” existed in “community safety rates” comparing the pre- and post-implementation periods. Indeed, the Study highlighted “community safety rates” that were about the same (or even better) following G.O. 18.8A’s implementation. The Study reported, for example, that the “community safety rate” for male defendants who were released improved from 81.2% before to 82.5% after; and for female defendants, the community safety rate improved from 85.7% to 86.5%.66 Combining the male and female figures produces the result that the overall community safety rate improved from 81.8% before implementation of the changes to 83.0% after.
The authors say this rate is wrong. They argue that releasing more accused criminals resulted in more crime.
[T]he number of defendants released pretrial increased from 20,435 in the “before” period to 24,504 in the “after” period—about a 20% increase. So even though the “community safety rate” remained roughly stable (and even improved very slightly), the total number of crimes committed by pretrial releasees increased after G.O. 18.8A. In the fifteen months before G.O. 18.8A, 20,435 defendants were released and 16,720 remained “crime-free”—and, thus, arithmetically (although this number is not directly disclosed in the Study), 3,715 defendants were charged with committing new crimes while they were released. In the fifteen months after G.O. 18.8A, 24,504 defendants were released, and 20,340 remained “crime-free”—and, thus, arithmetically, 4,164 defendants were charged with committing new crimes while they were released. Directly comparing the before and after numbers shows a clear increase from 3,715 defendants who were charged with committing new crimes before to 4,164 after—a 12% increase.
Even if, as the authors point out, more total crimes were committed after more total people were released (bailed out or with no bail set), the County’s assessment isn’t wrong. More people were released and the recidivism rate fell. Prior to G.O. 18.8A’s passage, the “crime-free” rate (as a percentage) was 81.8%. After the implementation of bail reform, it was 83.0%. If we follow the authors to the conclusion they seem to feel is logical, the only way to prevent recidivism is to keep every arrestee locked up until their trial, no matter how minor the crime triggering the arrest.
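The arithmetic here is easy to check for yourself. A quick sketch, using only the figures quoted from the paper, shows both claims are simultaneously true: the absolute number of charged releasees rose about 12%, while the crime-free rate also improved:

```python
# Recomputing the figures quoted from the Cassell/Fowles paper.
before_released, before_crime_free = 20_435, 16_720
after_released, after_crime_free = 24_504, 20_340

before_charged = before_released - before_crime_free   # 3,715
after_charged = after_released - after_crime_free      # 4,164

before_rate = before_crime_free / before_released      # ~81.8% crime-free
after_rate = after_crime_free / after_released         # ~83.0% crime-free

print(f"charged before: {before_charged}, after: {after_charged} "
      f"(+{after_charged / before_charged - 1:.0%})")
print(f"crime-free rate before: {before_rate:.1%}, after: {after_rate:.1%}")
```

In other words, the “12% increase” in charged defendants is entirely a product of releasing ~20% more people; per person released, outcomes got slightly better, not worse.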
But that’s not how the criminal justice system is supposed to work. The authors apparently believe thousands of people who are still — in the eyes of the law — innocent (until proven guilty) should stay behind bars because the more people cut loose on bail (or freed without bail being set) increases the total number of criminal acts perpetrated.
Of course, we should expect nothing less. Especially not from Paul Cassell. Cassell presents himself as a “victim’s rights” hero. And while he has a lot to say about giving crime victims more rights than Americans who haven’t had the misfortune of being on the receiving end of a criminal act, he doesn’t have much to say about the frequent abuse of these laws by police officers who’ve committed violence against arrestees.
Not only that, but he’s the author of perhaps the worst paper ever written on the intersection of civil rights and American law enforcement. The title should give you a pretty good idea what you’re in for, but go ahead and give it a read if you feel like voluntarily angrying up your blood:
Still Handcuffing the Cops? A Review of Fifty Years of Empirical Evidence of Miranda’s Harmful Effects on Law Enforcement
Yep, that’s Cassell arguing that the Supreme Court forcing the government to respect Fifth Amendment rights is somehow a net loss for society and the beginning of a five-decade losing streak for law enforcement crime clearance rates.
So, you can see why an apparently imaginary “coalition” that supports “law order” would look to Cassell to provide back-up for piss poor assertions and even worse logic.
There’s plenty more that’s terrible in this so-called study from this so-called coalition. And I encourage you to give it a read because I’m sure there are things I missed that absolutely should be named and shamed in the comments.
But let’s take a look at one of my favorite things in this terrible waste of bits and bytes:
Concomitant with de-prosecution is a shift toward politicization of prosecutorial priorities at the cost of focusing on tackling rising crime and violent repeat offenders. Both local, state, and federal prosecutors have increasingly devoted a greater share of their finite, and often strained, resources to ideologically preferred or politically expedient cases. This approach has two primary and deleterious impacts – on public safety and on public faith in the impartiality of the justice system.
Under the tranche of recently elected progressive district attorneys, prosecutions of police officers have climbed dramatically and well before the death of George Floyd in May 2020, though they have since substantially accelerated.
Yep, that’s how cops see this: getting prosecuted is a “political” thing, as though being a cop was the same thing as being part of a political party. Cops like to imagine themselves as a group worthy of more rights. Unfortunately, lots of legislators agree with them. But trying to hold cops accountable is not an act of partisanship… or at least it shouldn’t be. It should just be the sort of thing all levels of law enforcement oversight strive for. But one would expect nothing more than this sort of disingenuousness from a couple of dudes who want to blame everyone but cops for the shit state the nation’s in (even if it actually isn’t.)
The EU government has spent a few years trying to break encryption. The results have been, at best, mixed. Of course, the EU government claims it’s not actually interested in breaking encryption. Instead, it hides its intentions behind phrases like “client-side scanning” and “chat control.” But it all just means the same thing: purposefully weakening or breaking encryption to allow the government to monitor communications.
Client-side scanning would necessitate the removal of one end of end-to-end encryption. Monitoring communications for “chat control” would mean the same thing. Fortunately, plenty of EU members disagreed with these proposals, finally forcing the EU Commission to drop its anti-encryption demands… for now.
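To make “removal of one end” concrete, here’s a minimal, purely illustrative sketch (all names are hypothetical, and the “encryption” is a stand-in, not real crypto) of why client-side scanning is incompatible with the end-to-end promise: the scanner reads the plaintext on the device before anything is encrypted, quietly adding a third party to a nominally two-party channel.

```python
import hashlib

# Hypothetical blocklist of content hashes the scanner would check against.
BLOCKLIST = {hashlib.sha256(b"example-flagged-content").hexdigest()}
reports = []  # anything matched here leaves the "end-to-end" channel

def client_side_scan(plaintext: bytes) -> bool:
    """True if the plaintext's hash appears on the blocklist."""
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

def encrypt(plaintext: bytes) -> bytes:
    # Stand-in for real end-to-end encryption; the details don't matter here.
    return bytes(b ^ 0x5A for b in plaintext)

def send_message(plaintext: bytes) -> bytes:
    # The scan runs on-device, on the plaintext, before encryption ever
    # happens: the encrypted channel stays mathematically intact, but the
    # promise that only sender and recipient see the message does not.
    if client_side_scan(plaintext):
        reports.append(plaintext)  # forwarded outside the channel
    return encrypt(plaintext)
```

The encryption itself never has to be “broken” in this scheme, which is exactly why proponents can claim, with a straight face, that they aren’t attacking encryption.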
As the EU government moves on from its failed proposal, it’s undergoing the usual stages of grief. First and foremost is denial — something often expressed in op-eds and formal statements that are short on facts or logic, but long on strawmen and cognitive dissonance.
But there’s still a desire to undermine encryption — one that simply won’t go away just because several EU member nations are against it. And here’s where the cops have decided to insert themselves, even though most EU citizens couldn’t care less about law enforcement’s thoughts on policy issues. I mean, they’re always the same sort of thing: less accountability, more power, fewer rights for citizens, etc.
Unfortunately, the ruling class tends to listen to cops because cops are part of the conjoined triangles (or whatever) that ensures people in power retain their power while being protected from the people being ruled. What works for cops works for the rest of the government, and that’s why this statement carries some weight, even if it’s exactly the sort of thing you’d expect to roll out of a cop’s mouth.
European Police Chiefs are calling for industry and governments to take urgent action to ensure public safety across social media platforms.
Privacy measures currently being rolled out, such as end-to-end encryption, will stop tech companies from seeing any offending that occurs on their platforms. It will also stop law enforcement’s ability to obtain and use this evidence in investigations to prevent and prosecute the most serious crimes such as child sexual abuse, human trafficking, drug smuggling, homicides, economic crime and terrorism offences.
The declaration, published today and supported by Europol and the European Police Chiefs, comes as end-to-end encryption has started to be rolled out across Meta’s messenger platform.
Well, ensuring public safety often takes the form of securing people’s private communications, i.e., the end-to-end encryption this formal statement rails against. I’m sure the EU police chiefs and the people who work for them appreciate the security enabled by encryption, whether it’s protecting their devices from the curiosity of interlopers or shielding their communications from public view.
But what works best for cops can’t be extended to the general public because, unlike cop shops, the public is known to be riddled with criminals. (Yes, I know. But I’m trying my best to explain this from the perspective of law enforcement officials, who would never admit they’re not doing much to keep their own backyards clean, so to speak.)
The letter opens with an admission by the collective of police chiefs that they’re unable to do their jobs unless tech companies do half the work for them.
We, the European Police Chiefs, recognise that law enforcement and the technology industry have a shared duty to keep the public safe, especially children. We have a proud partnership of complementary actions towards that end. That partnership is at risk.
Two key capabilities are crucial to supporting online safety.
First, the ability of technology companies to reactively provide to law enforcement investigations – on the basis of a lawful authority with strong safeguards and oversight – the data of suspected criminals on their service. This is known as ‘lawful access’.
We’ll pause here for a moment because Europol has already given us plenty to work with. First, there’s the invocation of the “children,” which is always a leading indicator of disingenuous arguments. If you say you’re doing it for the kids, you can get all kinds of irrational because who in their right mind would argue against someone who claims to be deeply interested in protecting children from criminals?
Then there’s the phrase “lawful access,” which means nothing more than cops believing they should have access to any potential evidence just because they have a warrant. This supposed hole in law enforcement efficiency is blamed on the advent of encryption, even though criminals have been destroying or hiding evidence for years, yet no law enforcement official has ever sent out a statement demanding the manufacturers of fire pits, paper shredders, or bridges over bodies of water stop making it so easy for criminals to hide evidence from investigators.
Moving on, there’s more of the same stuff for a couple of paragraphs. It’s the police chiefs griping that evidence is now suddenly out of reach and that’s because tech companies won’t create encryption backdoors or just refuse to deploy encryption in the first place. More is said about crimes against children, terrorism, human trafficking, drug smuggling, and (LOL) “economic crime,” the last of which is something no government body is truly serious about because it would require prosecuting people who give them massive amounts of money in exchange for government goods and services. If you’ve heard these arguments once, you’ve heard them a thousand times. We won’t rehash them here.
But we will quote the statement again because it goes back to the “we’ve never had trouble obtaining evidence before this exact point in time” well, even though that’s clearly false.
Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish. They should not now. We cannot let ourselves be blinded to crime. We know from the protections afforded by the darkweb how rapidly and extensively criminals exploit such anonymity.
OK, chief. I don’t remember any mobs (flash or pitchfork-wielding) wandering into neighborhoods to destroy fireplaces, paper shredders, or toilets because those areas might be “beyond the reach of law enforcement” when it comes to ensuring evidence is always accessible to investigators. And they’ve never taken down phone lines or slashed postal vehicles’ tires just because criminals might use those methods to “communicate safely.”
Our societies have always understood criminals will have options, some of which are beyond the reach of law enforcement. They don’t want to see those options destroyed or undermined just because criminals also happen to use the same options non-criminals use.
Then there’s the unneeded swipe at “anonymity,” which suggests Europol’s top cops think online anonymity is problematic in and of itself — even the stuff that exists out in the open away from the depths of the “dark web.”
Finally, the cops of Europe reach the “nerd harder” point of their message — one that claims to be conciliatory but is anything but:
We are committed to supporting the development of critical innovations, such as encryption, as a means of strengthening the cyber security and privacy of citizens. However, we do not accept that there need be a binary choice between cyber security or privacy on the one hand and public safety on the other. Absolutism on either side is not helpful. Our view is that technical solutions do exist; they simply require flexibility from industry as well as from governments.
Whenever government entities pushing new forms of intrusion start talking about “flexibility,” that trait should only apply to those on the receiving end of the imposition. Governments will never back down. It’s always the other side that’s expected to compromise their standards and ethics.
This statement isn’t going to budge the needle for Meta or others offering the same level of security for their users. But it may light a small fire under the asses of enemies of encryption in the European government. And that’s the real danger of this collection of clichés presenting itself as a principled stance on the issue.
Effective Altruism (EA) is typically explained as a philosophy that encourages individuals to do the “most good” with their resources (money, skills). Its “effective giving” aspect was marketed as evidence-based charities serving the global poor. The Effective Altruism philosophy was formally crystallized as a social movement with the launch of the Centre for Effective Altruism (CEA) in February 2012 by Toby Ord, Will MacAskill, Nick Beckstead, and Michelle Hutchinson. Two other organizations, “Giving What We Can” (GWWC) and “80,000 Hours,” were brought under CEA’s umbrella, and the movement became officially known as Effective Altruism.
Effective Altruists (EAs) were praised in the media as “charity nerds” looking to maximize the number of “lives saved” per dollar spent, with initiatives like providing anti-malarial bed nets in sub-Saharan Africa.
If this movement sounds familiar to you, it’s thanks to Sam Bankman-Fried (SBF). With FTX, Bankman-Fried was attempting to fulfill what William MacAskill taught him about Effective Altruism: “Earn to give.” In November 2023, SBF was convicted of seven fraud charges (stealing $10 billion from customers and investors). In March 2024, SBF was sentenced to 25 years in prison. Since SBF was one of the largest Effective Altruism donors, public perception of this movement has declined due to his fraudulent behavior. It turned out that the “Earn to give” concept was susceptible to the “Ends justify the means” mentality.
In 2016, the main funder of Effective Altruism, Open Philanthropy, designated “AI Safety” a priority area, and the leading EA organization 80,000 Hours declared artificial intelligence (AI) existential risk (x-risk) is the world’s most pressing problem. It looked like a major shift in focus and was portrayed as a “mission drift.” It wasn’t.
What looked to outsiders – in the general public, academia, media, and politics – as a “sudden embrace of AI x-risk” was a misconception. The confusion existed because many people were unaware that Effective Altruism has always been devoted to this agenda.
Effective Altruism’s “brand management”
Since its inception, Effective Altruism has been obsessed with the existential risk (x-risk) posed by artificial intelligence. As the Effective Altruism movement’s leaders recognized it could be perceived as “confusing for non-EAs,” they decided to solicit donations and recruit new members via different causes, like poverty and “sending money to Africa.”
When the movement was still small, they planned the bait-and-switch tactics in plain sight (in old forum discussions).
A dissertation by Mollie Gleiberman methodically analyzes the distinction between the “public-facing EA” and the inward-facing “core EA.” Among the study findings: “From the beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk).”
“EA’s key intellectual architects were all directly or peripherally involved in transhumanism, and the global poverty angle was merely a stepping stone to rationalize the progression from a non-controversial goal (saving lives in poor countries) to transhumanism’s far more radical aim,” explains Gleiberman. It was part of their “brand management” strategy to conceal the latter.
The public-facing discourse of “giving to the poor” (in popular media and books) was a mirage designed to get people into the movement and then lead them to the “core EA,” x-risk, which is discussed in inward-facing spaces. The guidance was to promote the publicly-facing cause and keep quiet about the core cause. Influential Effective Altruists explicitly wrote that this was the best way to grow the movement.
In public-facing/grassroots EA, the target recipients of donations, typically understood to be GiveWell’s top recommendations, are causes like AMF – Against Malaria Foundation. “Here, the beneficiaries of EA donations are disadvantaged people in the poorest countries of the world,” says Gleiberman. “In stark contrast to this, the target recipients of donations in core EA are the EAs themselves. Philanthropic donations that support privileged students at elite universities in the US and UK are suddenly no longer one of the worst forms of charity but one of the best. Rather than living frugally (giving up a vacation/a restaurant) so as to have more money to donate to AMF, providing such perks is now understood as essential for the well-being and productivity of the EAs, since they are working to protect the entire future of humanity.”
“We should be kind of quiet about it in public-facing spaces”
Let the evidence speak for itself. The following quotes are from three community forums where Effective Altruists converse with each other: Felicifia (inactive since 2014), LessWrong, and EA Forum.
On June 4, 2012, Will Crouch (it was before he changed his last name to MacAskill) had already pointed out (on the Felicifia forum) that “new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation.”
On November 10, 2012, Will Crouch (MacAskill) wrote on the LessWrong forum that “it seems fairly common for people who start thinking about effective altruism to ultimately think that x-risk mitigation is one of or the most important cause area.” In the same message, he also argued that “it’s still a good thing to save someone’s life in the developing world,” however, “of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.”
In 2011, a leader of the EA movement, an influential GWWC leader/CEA affiliate, who used a “utilitymonster” username on the Felicifia forum, had a discussion with a high-school student about the “High Impact Career” (HIC, later rebranded to 80,000 Hours). The high schooler wrote: “But HIC always seems to talk about things in terms of ‘lives saved,’ I’ve never heard them mentioning other things to donate to.” Utilitymonster replied: “That’s exactly the right thing for HIC to do. Talk about ‘lives saved’ with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk.”
Another influential figure, Eliezer Yudkowsky, wrote on LessWrong in 2013: “I regard the non-x-risk parts of EA as being important only insofar as they raise visibility and eventually get more people involved in, as I would put it, the actual plot.”
As a comment to a Robert Wiblin post in 2015, Eliezer Yudkowsky clarified: “As I’ve said repeatedly, xrisk cannot be the public face of EA, OPP [OpenPhil] can’t be the public face of EA. Only ‘sending money to Africa’ is immediately comprehensible as Good and only an immediately comprehensible Good can make up for the terrible PR profile of maximization or cause neutrality. And putting AI in there is just shooting yourself in the foot.”
Rob Bensinger, the research communications manager at MIRI (and prominent EA movement member), argued in 2016 for a middle approach: “In fairness to the ‘MIRI is bad PR for EA’ perspective, I’ve seen MIRI’s cofounder (Eliezer Yudkowsky) make the argument himself that things like malaria nets should be the public face of EA, not AI risk. Though I’m not sure I agree […]. If we were optimizing for having the right ‘public face’ I think we’d be talking more about things that are in between malaria nets and AI […] like biosecurity and macroeconomic policy reform.”
Scott Alexander (Siskind) is the author of the influential rationalist blog “Slate Star Codex” and “Astral Codex Ten.” In 2015, he acknowledged that he supports the AI-safety/x-risk cause area, but believes Effective Altruists should not mention it in public-facing material: “Existential risk isn’t the most useful public face for effective altruism – everyone including Eliezer Yudkowsky agrees about that.” In the same year, 2015, he also wrote: “Several people have recently argued that the effective altruist movement should distance itself from AI risk and other far-future causes lest it make them seem weird and turn off potential recruits. Even proponents of AI risk charities like myself agree that we should be kind of quiet about it in public-facing spaces.”
In 2014, Peter Wildeford (then Hurford) published a conversation about “EA Marketing” with EA communications specialist Michael Bitton. Peter Wildeford is the co-founder and co-CEO of Rethink Priorities and Chief Advisory Executive at IAPS (Institute for AI Policy and Strategy). The following segment was about why most people will not be real Effective Altruists (EAs):
“Things in the ea community could be a turn-off to some people. While the connection to utilitarianism is ok, things like cryonics, transhumanism, insect suffering, AGI, eugenics, whole brain emulation, suffering subroutines, the cost-effectiveness of having kids, polyamory, intelligence-enhancing drugs, the ethics of terraforming, bioterrorism, nanotechnology, synthetic biology, mindhacking, etc. might not appeal well.
There’s a chance that people might accept the more mainstream global poverty angle, but be turned off by other aspects of EA. Bitton is unsure whether this is meant to be a reason for de-emphasizing these other aspects of the movement. Obviously, we want to attract more people, but also people that are more EA.”
“Longtermism is a bad ‘on-ramp’ to EA,” wrote a community member on the Effective Altruism Forum. “AI safety is new and complicated, making it more likely that people […] find the focus on AI risks to be cult-like (potentially causing them to never get involved with EA in the first place).”
Jan Kulveit, who leads the European Summer Program on Rationality (ESPR), shared on Facebook in 2018: “I became an EA in 2016, and at the time, a lot of the ‘outward-facing’ materials were about global poverty etc., with notes about AI safety or far future at much less prominent places. I wanted to discover what is the actual cutting-edge thought, went to EAGx Oxford and my impression was the core people from the movement mostly thought far future is the most promising area, and xrisk/AI safety interventions are top priority. I was quite happy with that […] However, I was somewhat at unease that there was this discrepancy between a lot of outward-facing content and what the core actually thinks. With some exaggeration, it felt like the communication structure is somewhat resembling a conspiracy or a church, where the outward-facing ideas are easily digestible, like anti-malaria nets, but as you get deeper, you discover very different ideas.”
Prominent EA community member and blogger Ozy Brennan summarized this discrepancy in 2017: “A lot of introductory effective altruism material uses global poverty examples, even articles which were written by people I know perfectly fucking well only donate to MIRI.”
As Effective Altruists engaged more deeply with the movement, they were encouraged to shift to AI x-risk.
“My perception is that many x-risk people have been clear from the start that they view the rest of EA merely as a recruitment tool to get people interested in the concept and then convert them to Xrisk causes.” (Alasdair Pearce, 2015).
“I used to work for an organization in EA, and I am still quite active in the community. 1 – I’ve heard people say things like, ‘Sure, we say that effective altruism is about global poverty, but — wink, nod — that’s just what we do to get people in the door so that we can convert them to helping out with AI/animal suffering/(insert weird cause here).’ This disturbs me.” (Anonymous#23, 2017).
“In my time as a community builder […] I saw the downsides of this. […] Concerns that the EA community is doing a bait-and-switch tactic of ‘come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.’ […] Personally feeling uncomfortable because it seemed to me that my 80,000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else.” (weeatquince [Sam Hilton], 2020).
Austin Chen, the co-founder of Manifold Markets, wrote on the Effective Altruism Forum in 2020: “On one hand, basically all the smart EA people I trust seem to be into longtermism; it seems well-argued and I feel a vague obligation to join in too. On the other, the argument for near-term evidence-based interventions like AMF [Against Malaria Foundation] is what got me […] into EA in the first place.”
In 2019, EA Hub published a guide: “Tips to help your conversation go well.” Among tips like “Highlight the process of EA” and “Use the person’s interest,” there was “Preventing ‘Bait and Switch.’” The post acknowledged that “many leaders of EA organizations are most focused on community building and the long-term future than animal advocacy and global poverty.” Therefore, to avoid the perception of a bait-and-switch, the guide recommended mentioning AI x-risk at some point:
“It is likely easier, and possibly more compelling, to talk about cause areas that are more widely understood and cared about, such as global poverty and animal welfare. However, mentioning only one or two less controversial causes might be misleading, e.g. a person could become interested through evidence-based effective global poverty interventions, and feel misled at an EA event mostly discussing highly speculative research into a cause area they don’t understand or care about. This can feel like a “bait and switch”—they are baited with something they care about and then the conversation is switched to another area. One way of reducing this tension is to ensure you mention a wide range of global issues that EAs are interested in, even if you spend more time on one issue.”
Oliver Habryka is influential in EA as a fund manager for the LTFF, a grantmaker for the Survival and Flourishing Fund, and the leader of the LessWrong/Lightcone Infrastructure team. He claimed that the only reason EA should continue supporting non-longtermist efforts is to preserve the public’s perception of the movement:
“To be clear, my primary reason for why EA shouldn’t entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn’t the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.”
The structure of Effective Altruism rhetoric
The researcher Mollie Gleiberman explains EA’s “strategic ambiguity”: “EA has multiple discourses running simultaneously, using the same terminology to mean different things depending on the target audience. The most important aspect of this double rhetoric, however, is not that it maintains two distinct arenas of understanding, but that it also serves as a credibility bridge between them, across which movement recruits (and, increasingly, the general public) are led in incremental steps from the less controversial position to the far more radical position.”
When Effective Altruists talked in public about “doing good,” “helping others,” “caring about the world,” and pursuing “the most impact,” the public understood this to mean eliminating global poverty and helping the needy and vulnerable. Internally, “doing good” and the “most pressing problems” were understood as working to mainstream core EA ideas like extinction from unaligned AI.
In communication with “core EAs,” “the initial focus on global poverty is explained as merely an example used to illustrate the concept – not the actual cause endorsed by most EAs.”
Jonas Vollmer has been involved with EA since 2012 and has held positions of considerable influence over funding allocation (EA Foundation/CLR Fund, CEA EA Funds). In 2018, when asked about his EA organization “Raising for Effective Giving” (REG), he candidly explained: “REG prioritizes long-term future causes, it’s just much easier to fundraise for poverty charities.”
The entire point was to identify whatever messaging works best to produce the outcomes that movement founders, thought leaders, and funders actually wished to see. It was all about marketing to outsiders.
The “Funnel Model”
According to the Centre for Effective Altruism, “When describing the target audience of our projects, it is useful to have labels for different parts of the community.”
The levels are: audience, followers, participants, contributors, core, and leadership.
In 2018, in a post entitled “The Funnel Model,” CEA elaborated that “Different parts of CEA operate to bring people into different parts of the funnel.”
At first, CEA concentrated outreach on the top of the funnel, through extensive popular media coverage, including MacAskill’s Quartz column and his book “Doing Good Better,” and Peter Singer’s TED talk and book “The Most Good You Can Do.” The idea was to create a broad base of poverty-focused, grassroots Effective Altruists to help maintain momentum and legitimacy, and to act as an initial entry point to the funnel, from which members sympathetic to core aims could be recruited.
The 2017 edition of the movement’s annual survey of participants (conducted by the EA organization Rethink Charity) noted that this is a common trajectory: “New EAs are typically attracted to poverty relief as a top cause initially, but subsequently branch out after exploring other EA cause areas. An extension of this line of thinking credits increased familiarity with EA for making AI more palatable as a cause area. In other words, the top of the EA outreach funnel is most relatable to newcomers (poverty), while cause areas toward the bottom of the funnel (AI) seem more appealing with time and further exposure.”
According to the Centre for Effective Altruism, that’s the ideal route. It wrote in 2018: “Trying to get a few people all the way through the funnel is more important than getting every person to the next stage.”
The magnitude and implications of Effective Altruism, says Gleiberman, “cannot be grasped until people are willing to look at the evidence beyond EA’s glossy front cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.”
Key takeaways
– Public-facing EA vs. core EA
Among public-facing/grassroots EAs (audience, followers, participants):
The main focus is effective giving à la Peter Singer.
The main cause area is global health, targeting the ‘distant poor’ in developing countries.
The donors support organizations doing direct anti-poverty work.
Among core/highly engaged EAs (contributors, core, leadership):
The main focus is x-risk/longtermism à la Nick Bostrom and Eliezer Yudkowsky.
The main cause areas are x-risk, AI-safety, ‘global priorities research,’ and EA movement-building.
The donors support highly engaged EAs in building career capital, boosting their productivity, and/or starting new EA organizations, as well as research and policy-making/agenda-setting.
With AI doomers intensifying their attacks on the open-source community, it becomes clear that this group’s “doing good” is other groups’ nightmare.
– Effective Altruism was a Trojan horse
It’s now evident that “sending money to Africa,” as Eliezer Yudkowsky acknowledged, was never the “actual plot.” Or, as Will MacAskill wrote in 2012, “alleviating global poverty is dwarfed by existential risk mitigation.” The Effective Altruism founders planned – from day one – to mislead donors and new members in order to build the movement’s brand and community.
Its core leaders prioritized the x-risk agenda and treated global poverty alleviation as merely an initial step toward converting new recruits to longtermism/x-risk – a pipeline that also happened to help make those leaders rich.
This needs to be investigated further.
Gleiberman observes that “The movement clearly prioritizes ‘longtermism’/AI-safety/x-risk, but still wishes to benefit from the credibility that global poverty-focused EA brings.” We now know it was a PR strategy all along. So, no. They do not deserve this kind of credibility.
Dr. Nirit Weiss-Blatt (@DrTechlash) is a communication researcher and the author of the book “The TECHLASH and Tech Crisis Communication” and the “AI Panic” newsletter.