The European Union has spent a few years trying to break encryption. The results have been, at best, mixed. Of course, the EU claims it’s not actually interested in breaking encryption. Instead, it hides its intentions behind phrases like “client-side scanning” and “chat control.” But these all mean the same thing: purposefully weakening or breaking encryption to allow the government to monitor communications.
Client-side scanning would necessitate the removal of one end of end-to-end encryption. Monitoring communications for “chat control” would mean the same thing. Fortunately, plenty of EU member states disagreed with these proposals, finally forcing the EU Commission to drop its anti-encryption demands… for now.
As the EU government moves on from its failed proposal, it’s undergoing the usual stages of grief. First and foremost is denial — something often expressed in op-eds and formal statements that are short on facts or logic, but long on strawmen and cognitive dissonance.
But there’s still a desire to undermine encryption — one that simply won’t go away just because several EU member nations are against it. And here’s where the cops have decided to insert themselves, even though most EU citizens couldn’t care less about law enforcement’s thoughts on policy issues. I mean, they’re always the same sort of thing: less accountability, more power, fewer rights for citizens, etc.
Unfortunately, the ruling class tends to listen to cops because cops are part of the conjoined triangles (or whatever) that ensure people in power retain their power while being protected from the people being ruled. What works for cops works for the rest of the government, and that’s why this statement carries some weight, even if it’s exactly the sort of thing you’d expect to roll out of a cop’s mouth.
European Police Chiefs are calling for industry and governments to take urgent action to ensure public safety across social media platforms.
Privacy measures currently being rolled out, such as end-to-end encryption, will stop tech companies from seeing any offending that occurs on their platforms. It will also stop law enforcement’s ability to obtain and use this evidence in investigations to prevent and prosecute the most serious crimes such as child sexual abuse, human trafficking, drug smuggling, homicides, economic crime and terrorism offences.
The declaration, published today and supported by Europol and the European Police Chiefs, comes as end-to-end encryption has started to be rolled out across Meta’s messenger platform.
Well, ensuring public safety often takes the form of securing people’s private communications, i.e., the end-to-end encryption this formal statement rails against. I’m sure the EU police chiefs and the people who work for them appreciate the security enabled by encryption, whether it’s protecting their devices from the curiosity of interlopers or shielding their communications from public view.
But what works best for cops can’t be extended to the general public because, unlike cop shops, the public is known to be riddled with criminals. (Yes, I know. But I’m trying my best to explain this from the perspective of law enforcement officials, who would never admit they’re not doing much to keep their own backyards clean, so to speak.)
The letter opens with an admission by the collective of police chiefs that they’re unable to do their jobs unless tech companies do half the work for them.
We, the European Police Chiefs, recognise that law enforcement and the technology industry have a shared duty to keep the public safe, especially children. We have a proud partnership of complementary actions towards that end. That partnership is at risk.
Two key capabilities are crucial to supporting online safety.
First, the ability of technology companies to reactively provide to law enforcement investigations – on the basis of a lawful authority with strong safeguards and oversight – the data of suspected criminals on their service. This is known as ‘lawful access’.
We’ll pause here for a moment because Europol has already given us plenty to work with. First, there’s the invocation of the “children,” which is always a leading indicator of disingenuous arguments. If you say you’re doing it for the kids, you can get all kinds of irrational because who in their right mind would argue against someone who claims to be deeply interested in protecting children from criminals?
Then there’s the phrase “lawful access,” which means nothing more than cops believing they should have access to any potential evidence just because they have a warrant. This supposed hole in law enforcement efficiency is blamed on the advent of encryption, even though criminals have been destroying or hiding evidence for years, yet no law enforcement official has ever sent out a statement demanding that the manufacturers of fire pits, paper shredders, or bridges over bodies of water stop making it so easy for criminals to hide evidence from investigators.
Moving on, there’s more of the same stuff for a couple of paragraphs. It’s the police chiefs griping that evidence is now suddenly out of reach and that’s because tech companies won’t create encryption backdoors or just refuse to deploy encryption in the first place. More is said about crimes against children, terrorism, human trafficking, drug smuggling, and (LOL) “economic crime,” the last of which is something no government body is truly serious about because it would require prosecuting people who give them massive amounts of money in exchange for government goods and services. If you’ve heard these arguments once, you’ve heard them a thousand times. We won’t rehash them here.
But we will quote the statement again because it goes back to the “we’ve never had trouble obtaining evidence before this exact point in time” well, even though that’s clearly false.
Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish. They should not now. We cannot let ourselves be blinded to crime. We know from the protections afforded by the darkweb how rapidly and extensively criminals exploit such anonymity.
OK, chief. I don’t remember any mobs (flash or pitchfork-wielding) wandering into neighborhoods to destroy fireplaces, paper shredders, or toilets because those areas might be “beyond the reach of law enforcement” when it comes to ensuring evidence is always accessible to investigators. And they’ve never taken down phone lines or slashed postal vehicles’ tires just because criminals might use those methods to “communicate safely.”
Our societies have always understood criminals will have options, some of which are beyond the reach of law enforcement. They don’t want to see those options destroyed or undermined just because criminals also happen to use the same options non-criminals use.
Then there’s the unneeded swipe at “anonymity,” which suggests Europol’s top cops think online anonymity is problematic in and of itself — even the stuff that exists out in the open away from the depths of the “dark web.”
Finally, the cops of Europe reach the “nerd harder” point of their message — one that claims to be conciliatory but is anything but:
We are committed to supporting the development of critical innovations, such as encryption, as a means of strengthening the cyber security and privacy of citizens. However, we do not accept that there need be a binary choice between cyber security or privacy on the one hand and public safety on the other. Absolutism on either side is not helpful. Our view is that technical solutions do exist; they simply require flexibility from industry as well as from governments.
Whenever government entities pushing new forms of intrusion start talking about “flexibility,” that flexibility is only ever expected from those on the receiving end of the imposition. Governments will never back down. It’s always the other side that’s expected to compromise its standards and ethics.
This statement isn’t going to move the needle for Meta or others offering the same level of security for their users. But it may light a small fire under the asses of enemies of encryption in the European government. And that’s the real danger of this collection of clichés presenting itself as a principled stance on the issue.
Effective Altruism (EA) is typically explained as a philosophy that encourages individuals to do the “most good” with their resources (money, skills). Its “effective giving” arm was marketed as funding evidence-based charities serving the global poor. The Effective Altruism philosophy was formally crystallized as a social movement with the launch of the Centre for Effective Altruism (CEA) in February 2012 by Toby Ord, Will MacAskill, Nick Beckstead, and Michelle Hutchinson. Two other organizations, “Giving What We Can” (GWWC) and “80,000 Hours,” were brought under CEA’s umbrella, and the movement became officially known as Effective Altruism.
Effective Altruists (EAs) were praised in the media as “charity nerds” looking to maximize the number of “lives saved” per dollar spent, with initiatives like providing anti-malarial bed nets in sub-Saharan Africa.
If this movement sounds familiar to you, it’s thanks to Sam Bankman-Fried (SBF). With FTX, Bankman-Fried was attempting to fulfill what William MacAskill taught him about Effective Altruism: “Earn to give.” In November 2023, SBF was convicted of seven fraud charges (stealing $10 billion from customers and investors). In March 2024, SBF was sentenced to 25 years in prison. Since SBF was one of the largest Effective Altruism donors, public perception of this movement has declined due to his fraudulent behavior. It turned out that the “Earn to give” concept was susceptible to the “Ends justify the means” mentality.
In 2016, the main funder of Effective Altruism, Open Philanthropy, designated “AI Safety” a priority area, and the leading EA organization 80,000 Hours declared that artificial intelligence (AI) existential risk (x-risk) was the world’s most pressing problem. It looked like a major shift in focus and was portrayed as a “mission drift.” It wasn’t.
What looked to outsiders – in the general public, academia, media, and politics – as a “sudden embrace of AI x-risk” was a misconception. The confusion existed because many people were unaware that Effective Altruism has always been devoted to this agenda.
Effective Altruism’s “brand management”
Since its inception, Effective Altruism has been obsessed with the existential risk (x-risk) posed by artificial intelligence. As the Effective Altruism movement’s leaders recognized this could be perceived as “confusing for non-EAs,” they decided to attract donations and recruit new members through different causes, like poverty relief and “sending money to Africa.”
When the movement was still small, they planned this bait-and-switch tactic in plain sight (in old forum discussions).
A dissertation by Mollie Gleiberman methodically analyzes the distinction between the “public-facing EA” and the inward-facing “core EA.” Among the study findings: “From the beginning, EA discourse operated on two levels, one for the general public and new recruits (focused on global poverty) and one for the core EA community (focused on the transhumanist agenda articulated by Nick Bostrom, Eliezer Yudkowsky, and others, centered on AI-safety/x-risk).”
“EA’s key intellectual architects were all directly or peripherally involved in transhumanism, and the global poverty angle was merely a stepping stone to rationalize the progression from a non-controversial goal (saving lives in poor countries) to transhumanism’s far more radical aim,” explains Gleiberman. It was part of their “brand management” strategy to conceal the latter.
The public-facing discourse of “giving to the poor” (in popular media and books) was a mirage designed to get people into the movement and then lead them to the “core EA,” x-risk, which is discussed in inward-facing spaces. The guidance was to promote the public-facing cause and keep quiet about the core cause. Influential Effective Altruists explicitly wrote that this was the best way to grow the movement.
In public-facing/grassroots EA, the target recipients of donations, typically understood to be GiveWell’s top recommendations, are causes like AMF – Against Malaria Foundation. “Here, the beneficiaries of EA donations are disadvantaged people in the poorest countries of the world,” says Gleiberman. “In stark contrast to this, the target recipients of donations in core EA are the EAs themselves. Philanthropic donations that support privileged students at elite universities in the US and UK are suddenly no longer one of the worst forms of charity but one of the best. Rather than living frugally (giving up a vacation/a restaurant) so as to have more money to donate to AMF, providing such perks is now understood as essential for the well-being and productivity of the EAs, since they are working to protect the entire future of humanity.”
“We should be kind of quiet about it in public-facing spaces”
Let the evidence speak for itself. The following quotes are from three community forums where Effective Altruists converse with each other: Felicifia (inactive since 2014), LessWrong, and EA Forum.
On June 4, 2012, Will Crouch (it was before he changed his last name to MacAskill) had already pointed out (on the Felicifia forum) that “new effective altruists tend to start off concerned about global poverty or animal suffering and then hear, take seriously, and often are convinced by the arguments for existential risk mitigation.”
On November 10, 2012, Will Crouch (MacAskill) wrote on the LessWrong forum that “it seems fairly common for people who start thinking about effective altruism to ultimately think that x-risk mitigation is one of or the most important cause area.” In the same message, he also argued that “it’s still a good thing to save someone’s life in the developing world,” however, “of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.”
In 2011, a leader of the EA movement, an influential GWWC leader/CEA affiliate, who used a “utilitymonster” username on the Felicifia forum, had a discussion with a high-school student about the “High Impact Career” (HIC, later rebranded to 80,000 Hours). The high schooler wrote: “But HIC always seems to talk about things in terms of ‘lives saved,’ I’ve never heard them mentioning other things to donate to.” Utilitymonster replied: “That’s exactly the right thing for HIC to do. Talk about ‘lives saved’ with their public face, let hardcore members hear about x-risk, and then, in the future, if some excellent x-risk opportunity arises, direct resources to x-risk.”
Another influential figure, Eliezer Yudkowsky, wrote on LessWrong in 2013: “I regard the non-x-risk parts of EA as being important only insofar as they raise visibility and eventually get more people involved in, as I would put it, the actual plot.”
As a comment to a Robert Wiblin post in 2015, Eliezer Yudkowsky clarified: “As I’ve said repeatedly, xrisk cannot be the public face of EA, OPP [OpenPhil] can’t be the public face of EA. Only ‘sending money to Africa’ is immediately comprehensible as Good and only an immediately comprehensible Good can make up for the terrible PR profile of maximization or cause neutrality. And putting AI in there is just shooting yourself in the foot.”
Rob Bensinger, the research communications manager at MIRI (and prominent EA movement member), argued in 2016 for a middle approach: “In fairness to the ‘MIRI is bad PR for EA’ perspective, I’ve seen MIRI’s cofounder (Eliezer Yudkowsky) make the argument himself that things like malaria nets should be the public face of EA, not AI risk. Though I’m not sure I agree […]. If we were optimizing for having the right ‘public face’ I think we’d be talking more about things that are in between malaria nets and AI […] like biosecurity and macroeconomic policy reform.”
Scott Alexander (Siskind) is the author of the influential rationalist blog “Slate Star Codex” and “Astral Codex Ten.” In 2015, he acknowledged that he supports the AI-safety/x-risk cause area, but believes Effective Altruists should not mention it in public-facing material: “Existential risk isn’t the most useful public face for effective altruism – everyone including Eliezer Yudkowsky agrees about that.” In the same year, 2015, he also wrote: “Several people have recently argued that the effective altruist movement should distance itself from AI risk and other far-future causes lest it make them seem weird and turn off potential recruits. Even proponents of AI risk charities like myself agree that we should be kind of quiet about it in public-facing spaces.”
In 2014, Peter Wildeford (then Hurford) published a conversation about “EA Marketing” with EA communications specialist Michael Bitton. Peter Wildeford is the co-founder and co-CEO of Rethink Priorities and Chief Advisory Executive at IAPS (Institute for AI Policy and Strategy). The following segment was about why most people will not be real Effective Altruists (EAs):
“Things in the ea community could be a turn-off to some people. While the connection to utilitarianism is ok, things like cryonics, transhumanism, insect suffering, AGI, eugenics, whole brain emulation, suffering subroutines, the cost-effectiveness of having kids, polyamory, intelligence-enhancing drugs, the ethics of terraforming, bioterrorism, nanotechnology, synthetic biology, mindhacking, etc. might not appeal well.
There’s a chance that people might accept the more mainstream global poverty angle, but be turned off by other aspects of EA. Bitton is unsure whether this is meant to be a reason for de-emphasizing these other aspects of the movement. Obviously, we want to attract more people, but also people that are more EA.”
“Longtermism is a bad ‘on-ramp’ to EA,” wrote a community member on the Effective Altruism Forum. “AI safety is new and complicated, making it more likely that people […] find the focus on AI risks to be cult-like (potentially causing them to never get involved with EA in the first place).”
Jan Kulveit, who leads the European Summer Program on Rationality (ESPR), shared on Facebook in 2018: “I became an EA in 2016, and it the time, while a lot of the ‘outward-facing’ materials were about global poverty etc., with notes about AI safety or far future at much less prominent places. I wanted to discover what is the actual cutting-edge thought, went to EAGx Oxford and my impression was the core people from the movement mostly thought far future is the most promising area, and xrisk/AI safety interventions are top priority. I was quite happy with that […] However, I was somewhat at unease that there was this discrepancy between a lot of outward-facing content and what the core actually thinks. With some exaggeration, it felt like the communication structure is somewhat resembling a conspiracy or a church, where the outward-facing ideas are easily digestible, like anti-malaria nets, but as you get deeper, you discover very different ideas.”
Prominent EA community member and blogger Ozy Brennan summarized this discrepancy in 2017: “A lot of introductory effective altruism material uses global poverty examples, even articles which were written by people I know perfectly fucking well only donate to MIRI.”
As Effective Altruists engaged more deeply with the movement, they were encouraged to shift to AI x-risk.
“My perception is that many x-risk people have been clear from the start that they view the rest of EA merely as a recruitment tool to get people interested in the concept and then convert them to Xrisk causes.” (Alasdair Pearce, 2015).
“I used to work for an organization in EA, and I am still quite active in the community. 1 – I’ve heard people say things like, ‘Sure, we say that effective altruism is about global poverty, but — wink, nod — that’s just what we do to get people in the door so that we can convert them to helping out with AI/animal suffering/(insert weird cause here).’ This disturbs me.” (Anonymous#23, 2017).
“In my time as a community builder […] I saw the downsides of this. […] Concerns that the EA community is doing a bait-and-switch tactic of ‘come to us for resources on how to do good. Actually, the answer is this thing and we knew all along and were just pretending to be open to your thing.’ […] Personally feeling uncomfortable because it seemed to me that my 80,000 Hours career coach had a hidden agenda to push me to work on AI rather than anything else.” (weeatquince [Sam Hilton], 2020).
Austin Chen, the co-founder of Manifold Markets, wrote on the Effective Altruism Forum in 2020: “On one hand, basically all the smart EA people I trust seem to be into longtermism; it seems well-argued and I feel a vague obligation to join in too. On the other, the argument for near-term evidence-based interventions like AMF [Against Malaria Foundation] is what got me […] into EA in the first place.”
In 2019, EA Hub published a guide: “Tips to help your conversation go well.” Among the tips like “Highlight the process of EA” and “Use the person’s interest,” there was “Preventing ‘Bait and Switch.’” The post acknowledged that “many leaders of EA organizations are most focused on community building and the long-term future than animal advocacy and global poverty.” Therefore, to avoid the perception of a bait-and-switch, it is recommended to mention AI x-risk at some point:
“It is likely easier, and possibly more compelling, to talk about cause areas that are more widely understood and cared about, such as global poverty and animal welfare. However, mentioning only one or two less controversial causes might be misleading, e.g. a person could become interested through evidence-based effective global poverty interventions, and feel misled at an EA event mostly discussing highly speculative research into a cause area they don’t understand or care about. This can feel like a “bait and switch”—they are baited with something they care about and then the conversation is switched to another area. One way of reducing this tension is to ensure you mention a wide range of global issues that EAs are interested in, even if you spend more time on one issue.”
Oliver Habryka is influential in EA as a fund manager for the LTFF, a grantmaker for the Survival and Flourishing Fund, and the leader of the LessWrong/Lightcone Infrastructure team. He claimed that the only reason EA should continue supporting non-longtermist efforts is to preserve the public’s perception of the movement:
“To be clear, my primary reason for why EA shouldn’t entirely focus on longtermism is because that would to some degree violate some implicit promises that the EA community has made to the external world. If that wasn’t the case, I think it would indeed make sense to deprioritize basically all the non-longtermist things.”
The structure of Effective Altruism rhetoric
The researcher Mollie Gleiberman explains EA’s “strategic ambiguity”: “EA has multiple discourses running simultaneously, using the same terminology to mean different things depending on the target audience. The most important aspect of this double rhetoric, however, is not that it maintains two distinct arenas of understanding, but that it also serves as a credibility bridge between them, across which movement recruits (and, increasingly, the general public) are led in incremental steps from the less controversial position to the far more radical position.”
When Effective Altruists talked in public about “doing good,” “helping others,” “caring about the world,” and pursuing “the most impact,” the public understanding was that it meant eliminating global poverty and helping the needy and vulnerable. Inward, “doing good” and the “most pressing problems” were understood as working to mainstream core EA ideas like extinction from unaligned AI.
In communication with “core EAs,” “the initial focus on global poverty is explained as merely an example used to illustrate the concept – not the actual cause endorsed by most EAs.”
Jonas Vollmer has been involved with EA since 2012 and held positions of considerable influence in terms of allocating funding (EA Foundation/CLR Fund, CEA EA Funds). In 2018, he candidly explained when asked about his EA organization “Raising for Effective Giving” (REG): “REG prioritizes long-term future causes, it’s just much easier to fundraise for poverty charities.”
The entire point was to identify whatever messaging works best to produce the outcomes that movement founders, thought leaders, and funders actually wished to see. It was all about marketing to outsiders.
The “Funnel Mode”
According to the Centre for Effective Altruism, “When describing the target audience of our projects, it is useful to have labels for different parts of the community.”
The levels are: Audience, followers, participants, contributors, core, and leadership.
In 2018, in a post entitled The Funnel Mode, CEA elaborated that “Different parts of CEA operate to bring people into different parts of the funnel.”
At first, CEA concentrated outreach on the top of the funnel, through extensive popular media coverage, including MacAskill’s Quartz column and book, ‘Doing Good Better,’ Singer’s TED talk, and Singer’s ‘The Most Good You Can Do.’ The idea was to create a broad base of poverty-focused, grassroots Effective Altruists to help maintain momentum and legitimacy, and act as an initial entry point to the funnel, from which members sympathetic to core aims could be recruited.
The 2017 edition of the movement’s annual survey of participants (conducted by the EA organization Rethink Charity) noted that this is a common trajectory: “New EAs are typically attracted to poverty relief as a top cause initially, but subsequently branch out after exploring other EA cause areas. An extension of this line of thinking credits increased familiarity with EA for making AI more palatable as a cause area. In other words, the top of the EA outreach funnel is most relatable to newcomers (poverty), while cause areas toward the bottom of the funnel (AI) seem more appealing with time and further exposure.”
According to the Centre for Effective Altruism, that’s the ideal route. It wrote in 2018: “Trying to get a few people all the way through the funnel is more important than getting every person to the next stage.”
The magnitude and implications of Effective Altruism, says Gleiberman, “cannot be grasped until people are willing to look at the evidence beyond EA’s glossy front cover, and see what activities and aims the EA movement actually prioritizes, how funding is actually distributed, whose agenda is actually pursued, and whose interests are actually served.”
Key takeaways
– Public-facing EA vs. core EA

In public-facing/grassroots EA (audience, followers, participants):
The main focus is effective giving à la Peter Singer.
The main cause area is global health, targeting the ‘distant poor’ in developing countries.
The donors support organizations doing direct anti-poverty work.
In core/highly engaged EA (contributors, core, leadership):
The main focus is x-risk/longtermism à la Nick Bostrom and Eliezer Yudkowsky.
The main cause areas are x-risk, AI-safety, ‘global priorities research,’ and EA movement-building.
The donors support highly-engaged EAs to build career capital, boost their productivity, and/or start new EA organizations; research; policy-making/agenda setting.
With AI doomers intensifying their attacks on the open-source community, it becomes clear that this group’s “doing good” is other groups’ nightmare.
– Effective Altruism was a Trojan horse
It’s now evident that “sending money to Africa,” as Eliezer Yudkowsky acknowledged, was never the “actual plot.” Or, as Will MacAskill wrote in 2012, “alleviating global poverty is dwarfed by existential risk mitigation.” The Effective Altruism founders planned – from day one – to mislead donors and new members in order to build the movement’s brand and community.
Its core leaders prioritized the x-risk agenda, and considered global poverty alleviation only as an initial step toward converting new recruits to longtermism/x-risk, which also happened to be how they, themselves, convinced more people to help them become rich.
This needs to be investigated further.
Gleiberman observes that “The movement clearly prioritizes ‘longtermism’/AI-safety/x-risk, but still wishes to benefit from the credibility that global poverty-focused EA brings.” We now know it was a PR strategy all along. So, no. They do not deserve this kind of credibility.
Dr. Nirit Weiss-Blatt, Ph.D. (@DrTechlash), is a communication researcher and author of “The TECHLASH and Tech Crisis Communication” book and “AI Panic” newsletter.
Let’s start off this post by noting that I know that some people hate anything and everything having to do with generative AI and insist that there are no acceptable uses of it. If that describes you, just skip this article. It’s not for you. Ditto for those who insist (incorrectly) that AI is nothing but a “plagiarism machine” or that training of AI systems is nothing but mass copyright infringement. I’ve discussed why all of that is wrong elsewhere.
Separately, I will agree that most uses of generative AI are absolute shit, and many are problematic. Almost every case I’ve heard of journalistic outfits using AI is an example of the dumbest fucking ways to use the technology. That’s because addle-brained finance and tech bros think that AI is a tool to replace journalists. And every time you do that, it’s going to flop, often in embarrassing ways.
However, I have been using some AI tools over the last few months and have found them to be quite useful, namely, in helping me write better. I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles.
It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.
As a bit of background, let me explain how we work on articles at Techdirt. We try to make sure that no article goes out into the world until it’s been reviewed by someone other than myself. Most of the reviews are for grammar/typos, but they also include other important editorial checks along the lines of “does everything I say actually make sense?” and “what things might people get mad about?”
A while back, I started using Lex.page. Some of what I’m going to describe below is available to free accounts, and some requires the paid “Pro” account. I don’t know the current limits on free accounts, as I’m paying for a Pro account, and what’s included where may have changed.
Lex is an AI tool built with writers in mind. It looks kind of like a nice Google Docs. While it does have the power to do some AI-generated writing for you, almost all of its tools are designed to assist actual writers, rather than do away with their work. You can ask it to write the next paragraph for you, but I’ve never used that tool. Indeed, for the first few months I barely used any of the AI tools at all. I just like the environment as a standard writing tool.
The one feature I did use occasionally was a tool to suggest headlines for articles. If I thought my own headline ideas could be stronger, I would have it generate 10 to 15 suggestions. The tool rarely came up with one that was good enough to use directly, but it would sometimes give me an idea that I could take and adjust, which was better than my initial idea.
However, I started using the AI more often a couple of months ago. There’s a tool called “Ask Lex” where you can chat with the AI (on a Pro account, you can choose from a list of AI models to use, and I’ve found that Claude Opus seems to work the best). I initially couldn’t think of anything to ask the AI, so I asked people in Lex’s Discord how they used it. One user sent back a “scorecard” that he had created, which he asked Lex to use to review everything he wrote.
I changed around the scorecard for my own purposes (and I keep fiddling with it, so it will likely change more soon), but the current version of the scorecard I use is as follows:
This is an article scorecard:
Does this article:
#1 have a clear opening that grabs the reader score from 0 to 3
#2 clearly explain what is happening from 0 to 3
#3 clearly address the complexities from 0 to 3
#4 lay out the strongest possible argument 0 to 3
#5 have the potential to be virally shared 0 to 3
#6 is there enough humor included in the article 0 to 3
Given these details, could you score this article and provide suggestions on how to improve ratings of 0 or 1?
I created a macro on my computer, so with a few keyboard taps, I can pop that whole thing up in the Ask Lex box and have it respond.
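For anyone who wants to replicate the macro idea without a dedicated macro tool, the prompt is just a fixed block of text that can be assembled programmatically. Here’s a hypothetical sketch (none of these names come from Lex; the criteria simply mirror the scorecard quoted above) that builds the full prompt as a string, ready to paste into any chat box:

```python
# Hypothetical sketch: assemble the scorecard prompt that a keyboard
# macro would paste into the "Ask Lex" box. This does not use any Lex
# API; it only builds the text of the prompt shown above.

CRITERIA = [
    "have a clear opening that grabs the reader",
    "clearly explain what is happening",
    "clearly address the complexities",
    "lay out the strongest possible argument",
    "have the potential to be virally shared",
    "include enough humor",
]

def build_scorecard_prompt(criteria=CRITERIA, low=0, high=3):
    """Return the complete scorecard prompt as one string."""
    lines = ["This is an article scorecard:", "Does this article:"]
    for i, item in enumerate(criteria, start=1):
        lines.append(f"#{i} {item}, score from {low} to {high}")
    lines.append(
        f"Given these details, could you score this article and provide "
        f"suggestions on how to improve ratings of {low} or {low + 1}?"
    )
    return "\n".join(lines)

print(build_scorecard_prompt())
```

Keeping the criteria in a list makes the prompt easy to keep fiddling with: reorder, add, or drop a criterion and the numbering updates itself.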
I’ll note that I don’t really care that much about the last two items on the list, but I have them in there for two reasons. First, as a kind of Van Halen brown M&M check, to make sure the AI isn’t just blowing smoke at me, but knows when to give me low ratings. Second, somewhat astoundingly, there are times (not always, but more frequently than I would have thought) when it gives really good suggestions to insert a funny line somewhere.
I’m going to demonstrate some of how it works, using the article I wrote last week about the legal disclaimer on the parody mashup of the Beach Boys singing Jay-Z’s 99 Problems. Here’s what it looked like when I ran my first draft against the scorecard:
The responses here are fairly generic, but I can dig deeper. While it said my opening was good, I wondered if it could be better, so I asked for suggestions on a stronger opening. Its suggestions were good enough that I actually rewrote much of my opening. My original opening jumped right into talking about “There I Ruined It,” and Lex suggested some framing I liked better. Of course, it also suggested a terrible headline, which I ignored. It’s rare that I take any suggestion verbatim, but this time the suggested opening was good enough that I used a pretty close version (again, this is rare, but its suggestions do often make me think of better ways to rewrite an opening).
Now, I said above that I don’t much care about the humor, but since this story involved a funny video, I did ask if it had any suggestions on ways to make the article funnier. And… these were not good. Not good at all. So I ignored them all. Still, it sometimes comes up with suggestions that at least get me to add an amusing line or two to a piece. Even though they weren’t good for this article, I figured I should share them here so you get a sense of how the tool doesn’t always work well, but at least gets me thinking.
Somewhat amusingly, when I ran this very article through the same process I’m discussing here, it suggested adding “more personality” to the piece. I asked it if it had suggestions on where, and its top suggestion was to “lean into the absurdity of some of the AI suggestions” in this part, but then concluded with an awful joke.
So, yeah, it’s suggesting I joke about how shit its jokes are. Great work, AI buddy.
I also will sometimes ask it for better headlines (as mentioned above). Lex has a built-in headline generator tool, but I’ve found that doing it as part of the “Ask Lex” conversation makes it much stronger. On this article we’re discussing, it didn’t generate any good suggestions, so I ignored them. However, I will admit that it came up with the title of the follow-up article: Universal Music’s Copyright Claim: 99 Problems And Fair Use Ain’t One. That was all Lex. My original was something much more boring.
Also, just this weekend, I added a brand new macro, which I like so far: I ask it to generate other headline ideas based on some criteria, then ask it to compare those to the headline I came up with myself. I’ve only been using this one for a day or two, and didn’t use it on the fair use article last week, but here’s what it said about this very article you’re reading now:
Then my next step is to input another macro I created as a kind of gut check. I ask it to help me critique the article, highlighting which points are the weakest and can be made stronger, which points are strongest and could be emphasized more, and which points readers might get upset about and which I should improve. Finally, I ask it if anything is missing from the article.
Again, I don’t always agree with its suggestions (including some of the ones here), but it often makes me think carefully about the arguments I’m making and how well they stand up. I have strengthened many of my arguments based on responses from Lex that simply got me to think more carefully about what I’d written.
Occasionally I’ll ask it for other suggestions, such as a better metaphor for something. When I wrote about Allison Stanger’s bonkers congressional testimony a couple weeks ago, I was trying to think of a good example to show how silly it was that she thought Decentralized Autonomous Organizations (DAOs) were the same thing as decentralized social media. I asked Lex for suggestions on what would highlight how absurd that mistake is, and it gave me a long list of suggestions, including the one I eventually used: “saying ‘social security benefits’ when you mean ‘social media influencers’.”
Finally, after I go through all of that, I do use it to also do some basic editing help. Recently, Lex introduced a nice feature called “checks” which will “check” your writing and suggest edits on a variety of factors. Personally, the only ones I’ve found useful so far are the “Grammar” check and the “Readability” check.
I’ve tried all the rest, and don’t currently find them that useful for my style of writing. The grammar check is good at catching typos and extra commas, and the readability check is pretty good at getting me to chop up some of the run-on sentences that my human editors get frustrated with.
I do want to play more with the “Audience” one, but my attempts to explain to it who the Techdirt audience is haven’t quite worked yet. The team at Lex tells me they’re working to improve it.
There are a few more things, but that’s basically it. For me, it’s a brainstorming tool and a kind of “gut check” that helps me review my work and make it as strong as it can be before I hand it off to my human editors. I feel like I’m saving them time and effort as well by giving them a more complete version of each story I submit (and hopefully leaving them less frustrated about having to break up my run-on sentences).
The important part is that I’m not trying to replace anyone. I’m certainly not relying on it to actually write very much. And I know I’m going to reject many of the things it suggests. It’s basically just another set of eyeballs willing to look over my work and give me feedback, quickly, and without getting sick of my writing quirks.
It’s not revolutionary. It’s not changing the world. But, for me, personally, it’s been pretty powerful, just in helping me to be a better writer.
And yes, this article was reviewed with the same tools, which obviously prompted me to include one of its suggestions in that screenshot above. I’ll leave the other suggestions that it made, and I took, up to your imagination.
By now we’ve well established that this particular series of media mergers — which began with AT&T’s doomed acquisition of Time Warner and ended with Time Warner’s subsequent spinoff and fusion with Discovery — was one of the dumbest, most pointless “business” exercises ever conceived by man.
The pointless saga burned through hundreds of billions in debt, saw more than 50,000 people lose their jobs, killed off numerous popular brands (like Mad Magazine and HBO), created oceans of animosity among creatives, and resulted in a Max streaming service that’s arguably dumber and of notably lower quality than when the entire expensive gambit began.
“David Zaslav, CEO of troubled Warner Bros. Discovery, got a 27% increase in total compensation for 2023.
According to the company’s proxy statement, Zaslav was paid $49.7 million, including $3 million in salary, $23.1 million in stock awards and $22 million in non-equity incentive plans compensation. He received $39.3 million in 2022.”
That’s still lower than previous years’ compensation: Zaslav made a whopping $246 million in 2021, thanks to a hefty $203 million in stock options tacked onto his pay. It’s highly representative of the modern U.S. media sector, where the least competent brunchlords fail upward into positions of power, struggle repeatedly with basic competence, then get rewarded for it.
Shortly thereafter, Discovery acquired the remaining assets, creating yet another company, and things just kept getting worse. The resulting giant has often been too cheap to pay residuals, resulting in a lot of popular content getting pulled from its streaming services. More recently, executives took heat for backing away from the HBO brand, about the only consistently popular part of the company’s assets.
Most of these decisions may gain short-term tax breaks, stock boosts, or “I’m a savvy dealmaker” participation trophies on executive resumes, but they’ve generally been terrible for debt loads, customer satisfaction, employee happiness, and the longer-term health of the brands.
Kind of amusingly, trade magazines like Broadcasting and Cable don’t think it’s worth noting any of the chaos, product quality issues, or layoffs in their announcement of the news. Neither does The Hollywood Reporter. And Deadline found time to mention that Zaslav is the “poster child of high executive compensation,” but didn’t think any of Zaslav’s failings were relevant to its story.
It seems to me that the best way to expose the link tax for what it is (a money grab) is to educate legislators that a news site (in fact, any site) can use a robots.txt file to deny access to any, some, or all crawlers on the web.
Then, when a news site is asked “Why don’t you use a robots.txt file to keep Google from crawling your site?”, the answer will be telling. It’ll either be a) “We didn’t know about that” (or some variation, such as it’s too hard, or it doesn’t work, etc.); or b) “But we want social media sites to link to us!”
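For reference, the commenter’s point is trivially easy to act on: keeping Google’s crawler out (or all compliant crawlers) takes two lines of robots.txt. Note that robots.txt only governs crawling by bots that honor it, which Google’s do:

```
# Block only Google's crawler:
User-agent: Googlebot
Disallow: /

# Or block every compliant crawler:
User-agent: *
Disallow: /
```

Any publisher claiming it has no way to keep Google away either doesn’t know this exists or, as the comment suggests, wants the traffic and the payout at the same time.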
Given that second answer, it’s apparent that someone did some excruciatingly bad parenting, skipping the phase where they should’ve taught their child that one must pay for the things one wants. In no reality I’ve ever heard of does one get paid for the things one wants… except, apparently, in Murdoch’s Bizarro World.
Under normal economic theory, an exchange is one in which something of value goes from each party to the other, not one in which both things of value go to a single party. Never mind the business of fucking up the internet; this is even worse than “New Math”. Given half a chance, this “New Economics” will tear down and/or reverse everything we’ve built as a civilization over the past several millennia.
Anybody who says otherwise is accepting bribes to legislate for link taxes, period.
I would say the TikTok ban is not only unconstitutional on First Amendment grounds, but also a Bill of Attainder: it punishes a [corporate] person through Congress without a trial.
They’re not interested in learning more and many of them already know better. They want to be educated in how to win more elections and get more campaign donations. Elected officials who care to know what they’re legislating about would already have educated themselves or sought out expert advice.
My kid comes home from school talking about the world ending. Climate change and war just support her depressive outlook. On top of all this, she knows that getting a well-paying job will be difficult at best.
I’m not going to lie to her and tell her everything will be sunshine and rainbows. Her classmates are just as sobering.
For several years now, we’ve had a running series of posts discussing how, when it comes to digital goods, you often don’t own what you’ve bought. This ugliness shows up with all kinds of content, including purchased movies, books, and shows on digital platforms. But it has reared its head acutely as of late in the video game industry. The way this goes is that a publisher releases a game in whole, people buy it, and at some later date the publisher decides to shut down backend servers, rendering the game partially or totally unplayable for those who bought it. This has the effect of deleting pieces of culture: a real problem for those interested in preserving this artform, and a real problem for the entire bargain that is copyright, under which all that culture is eventually supposed to end up in the public domain.
But all of that is just on the topic of not owning what you’ve bought. With more games allowing for creative expression within them, spearheaded in part by titles like LittleBigPlanet, it’s also the case that you don’t own what you’ve created. Well, with the full shutdown of the LittleBigPlanet servers, all of the user-created content in the game is being whisked away along with the ability to purchase the game itself.
Sony has indefinitely decommissioned the PlayStation 4 servers for puzzle platformer LittleBigPlanet 3, the company announced in an update to one of its support pages. The permanent shutdown comes just months after the servers were temporarily taken offline due to ongoing issues. Fans now fear potentially hundreds of thousands of player creations not saved locally will be lost for good.
“Due to ongoing technical issues which resulted in the LittleBigPlanet 3 servers for PlayStation 4 being taken offline temporarily in January 2024, the decision has been made to keep the servers offline indefinitely,” Sony wrote in the update, first spotted by Delisted Games. “All online services including access to other players’ creations for LittleBigPlanet 3 are no longer available.”
Again, to be clear, the game will still work offline. And if users who created content saved that content locally, they’ll still have it. But many, many gamers saved their creations in the online game servers and used that online component to share what they created with other players. Sony spit out social media content to let the public know the servers were simply never coming back online. Absent from that communication was any plan, method, or capability for those who bought, played, and created content for the game to access any of that content. It’s just, poof, gone.
“Nearly 16 years worth of user generated content, millions of levels, some with millions of plays and hearts,” wrote one long-time player, Weeni-Tortellini, on Reddit in January. “Absolutely iconic levels locked away forever with no way to experience them again. To me, the servers shutting down is a hefty chunk bitten out of LittleBigPlanet’s history. I personally have many levels I made as a kid. Digital relics of what made me as creative as i am today, and The only access to these levels i have is thru the servers. I would be devastated if I could never experience them again.”
Then devastated ye shall be, it seems. I get that technical difficulties can arise. But come on, now. No backups? No way to restore the servers temporarily? Or would that require too much time, energy, and effort for Sony to bother? We don’t know, because the company hasn’t said. Instead, all this content goes away by fiat, the customers who forked over money and put time into creating within the game be damned.
If companies like Sony are going to be so pernicious with their own centralized servers in this manner, the least they could do would be to instead move to some decentralized and/or user-driven hosting solution. You know, so that a decade’s worth of culture doesn’t simply go away on the whim of one company.
Well, this ought to prompt another round of police-protecting legislation in Florida. Governor Ron DeSantis recently signed two bills into law — one that creates a 25-foot “no go” zone around police officers and one that strips police oversight boards of their independence. And that’s on top of the immediate effort made by the legislature in reaction to a recent court ruling that said the state’s victims’ rights law couldn’t be used to withhold names of officers who engaged in excessive force in “response” to alleged, mostly made-up “crimes” against them (contempt of cop, etc.).
This recent decision [PDF] from a Florida appeals court says the state’s two-party consent law for recordings doesn’t extend to public officials. And that means the five bogus wiretapping charges brought against Michael Waite for daring to record his conversations with cops are going to disappear. (h/t WFLA)
As we’re all well aware, wiretapping laws have been abused by cops for years in states with two-party consent laws. Multiple people have been arrested for filming their interactions with police officers and hit with bogus wiretapping charges because the officers did not “consent” to be recorded. Most of the resulting lawsuits have not delivered the results cops want. Instead, a majority of them have established precedent that says the First Amendment protects recordings of public officials.
That’s what has happened here. Rather than dismiss the charges voluntarily, the state chose to fight this in court. And now there’s precedent preventing officers from pulling this sort of bullshit in the future.
The backstory is this: Michael Waite is no fan of local law enforcement. According to court records, he had been involved in a long property boundary dispute with the sheriff’s office and other city employees. Waite called 911 and accused deputies of trespassing. He recorded the call and forwarded it to the sheriff’s office. Rather than do nothing, the sheriff’s office obtained an arrest warrant. This led to an altercation when officers served the warrant, resulting in battery charges that aren’t going to be dismissed.
The important thing is the precedent. The appeals court says there’s no expectation of privacy in carrying out public duties while utilizing public equipment, i.e., department-issued cell phones and landlines.
Here, Waite recorded a telephone conversation with Sergeant Blair. He subsequently emailed the audio recording to the CCSO to report what he believed to be police misconduct and requested an internal investigation. It was later discovered that Waite had similarly recorded four other conversations with CCSO deputies. Under these circumstances, it cannot be said that any of the deputies exhibited a reasonable expectation of privacy that society is willing to recognize.
Importantly, this is based on the record before us as there is no dispute that all conversations concerned matters of public business, occurred while the deputies were on duty, and involved phones utilized for work purposes. As such, Waite did not violate section 934.03(1)(a) when he recorded the conversations with the deputies, all of whom were acting in their official capacities at the time of the recordings, just as if he had the conversations face-to-face.
This all seems extremely obvious and yet it took a second court’s review of the case to make it clear enough that Florida law enforcement officers will understand it. And that probably means some legislator has already fired up Word and is crafting a law that will exempt state law enforcement from… well, the First Amendment, I guess. The ruling here cites plenty of local precedent about the right to record, but Florida’s always imaginative lawmakers are rarely deterred by things like years of case law or the US Constitution itself.