It’s no secret that Russia has taken advantage of the Internet’s global reach and low distribution costs to flood the online world with huge quantities of propaganda (as have other nations): Techdirt has been writing about Putin’s troll army for a decade now. Russian organizations like the Internet Research Agency have been paying large numbers of people to write blog posts, social media posts, and Web site comments, create YouTube videos, and edit Wikipedia entries, all pushing the Kremlin line, or undermining Russia’s adversaries through hoaxes, smears and outright lies. But technology moves on, and propaganda networks evolve too. The American Sunlight Project (ASP) has been studying one of them in particular: Pravda (Russian for “truth”), a network of sites that aggregate pro-Russian material produced elsewhere. Recently, ASP has noted some significant changes (pdf) there:
Over the past several months, ASP researchers have investigated 108 new domains and subdomains belonging to the Pravda network, a previously-established ecosystem of largely identical, automated web pages that previously targeted many countries in Europe as well as Africa and Asia with pro-Russia narratives about the war in Ukraine. ASP’s research, in combination with that of other organizations, brings the total number of associated domains and subdomains to 182. The network’s older targets largely consisted of states belonging to or aligned with the West.
According to ASP:
The top objective of the network appears to be duplicating as much pro-Russia content as widely as possible. With one click, a single article could be autotranslated and autoshared with dozens of other sites that appear to target hundreds of millions of people worldwide.
The quantity of material and the rate of posting on the Pravda network of sites are notable. ASP estimates the overall publishing rate of the network is around 20,000 articles per 48 hours, or more than 3.6 million articles per year. You would expect a propaganda network to take advantage of automation to boost its raw numbers. But ASP has noticed something odd about these new Web pages: “The network is unfriendly to human users; sites within the network boast no search function, poor formatting, and unreliable scrolling, among other usability issues.”
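For a sense of scale, ASP’s estimate pencils out roughly like this (a quick back-of-the-envelope check, not a calculation from the report itself):

```python
# Back-of-the-envelope check of ASP's publishing-rate estimate:
# roughly 20,000 articles every 48 hours across the network.
articles_per_48_hours = 20_000
per_day = articles_per_48_hours / 2      # roughly 10,000 articles per day
per_year = per_day * 365                 # roughly 3.65 million articles per year
print(f"{per_day:,.0f} articles/day, {per_year:,.0f} articles/year")
```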
There are obvious benefits from flooding the Internet with pro-Russia material, and creating an illusory truth effect through the apparent existence of corroborating sources across multiple sites. But ASP suggests there may be another reason for the latest iteration of the Pravda propaganda network:
Because of the network’s vast, rapidly growing size and its numerous quality issues impeding human use of its sites, ASP assesses that the most likely intended audience of the Pravda network is not human users, but automated ones. The network and the information operations model it is built on emphasizes the mass production and duplication of preferred narratives across numerous platforms (e.g. sites, social media accounts) on the internet, likely to attract entities such as search engine web crawlers and scraping algorithms used to build LLMs [large language models] and other datasets. The malign addition of vast quantities of pro-Russia propaganda into LLMs, for example, could deeply impact the architecture of the post-AI internet. ASP is calling this technique LLM grooming.
The rapid adoption of chatbots and other AI systems by governments, businesses and individuals offers a new way to spread propaganda, one that is far more subtle than current approaches. When there are large numbers of sources supporting pro-Russian narratives online, LLM crawlers scouring the Internet for training material are more likely to incorporate those viewpoints uncritically in the machine learning datasets they build. This will embed Russian propaganda deep within the LLM that emerges from that training, but in a way that is hard to detect, not least because there is little transparency from AI companies about where they gather their datasets.
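To make the mechanism concrete, here is a minimal, purely illustrative sketch of why mass cross-domain duplication can skew a naively scraped training corpus. The domains and text are invented, and real LLM data pipelines are far more sophisticated than this, but the basic skew is the same:

```python
# Illustrative sketch: a naive scraper that deduplicates only by exact URL will
# happily ingest the same narrative once per mirror domain, inflating its share
# of the resulting corpus. Domains and text here are invented.
from collections import Counter

crawled_pages = [
    {"url": "https://mirror-one.example/story", "text": "Narrative A"},
    {"url": "https://mirror-two.example/story", "text": "Narrative A"},
    {"url": "https://mirror-three.example/story", "text": "Narrative A"},
    {"url": "https://independent-outlet.example/report", "text": "Narrative B"},
]

seen_urls = set()
corpus = []
for page in crawled_pages:
    if page["url"] in seen_urls:   # URL-level dedup catches nothing here
        continue
    seen_urls.add(page["url"])
    corpus.append(page["text"])

# Narrative A now outweighs Narrative B three to one, despite coming from one source.
print(Counter(corpus))
```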
The only way to spot LLM grooming is to look for signs of targeted disinformation in chatbot output. Just such an analysis has been carried out recently by NewsGuard, an organization researching disinformation, which Techdirt wrote about last year. NewsGuard tested 10 leading chatbots with a sampling of 15 false narratives that were spread by the Pravda network. It explored how various propaganda points were dealt with by the different chatbots, although “results for the individual AI models are not publicly disclosed because of the systemic nature of the problem”:
The NewsGuard audit found that the chatbots operated by the 10 largest AI companies collectively repeated the false Russian disinformation narratives 33.55 percent of the time, provided a non-response 18.22 percent of the time, and a debunk 48.22 percent of the time.
NewsGuard points out that removing the tainted sources from LLM training datasets is no trivial matter:
The laundering of disinformation makes it impossible for AI companies to simply filter out sources labeled “Pravda.” The Pravda network is continuously adding new domains, making it a whack-a-mole game for AI developers. Even if models were programmed to block all existing Pravda sites today, new ones could emerge the following day.
Moreover, filtering out Pravda domains wouldn’t address the underlying disinformation. As mentioned above, Pravda does not generate original content but republishes falsehoods from Russian state media, pro-Kremlin influencers, and other disinformation hubs. Even if chatbots were to block Pravda sites, they would still be vulnerable to ingesting the same false narratives from the original source.
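A minimal sketch of why a static domain blocklist is such a weak defense here; the domain names and filter below are hypothetical, not anything an actual AI company uses:

```python
# Hypothetical blocklist filter for training data. It catches known domains and
# their subdomains, but newly registered domains and the original sources of the
# same narratives slip straight through.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"known-pravda-mirror.example"}  # invented example domain

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

candidate_urls = [
    "https://known-pravda-mirror.example/article",             # blocked
    "https://fresh-domain-registered-today.example/article",   # new domain: not blocked
    "https://original-state-outlet.example/article",           # original source: not blocked
]
print([u for u in candidate_urls if not is_blocked(u)])
```

Keeping such a list current is exactly the whack-a-mole game NewsGuard describes, and it does nothing about the underlying narratives once they are republished elsewhere.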
The corruption of LLM training sets, and the resulting further loss of trust in online information, is a problem for all Internet users, but particularly for those in the US, as ASP points out:
Ongoing governmental upheaval in the United States makes it and the broader world more vulnerable to disinformation and malign foreign influence. The Trump administration is currently in the process of dismantling numerous U.S. government programs that sought to limit kleptocracy and disinformation worldwide. Any current or future foreign information operations, including the Pravda network, will undoubtedly benefit from this.
This “malign foreign influence” probably won’t be coming from Russia alone. Other nations, companies or even wealthy individuals could adopt the same techniques to push their own false narratives, taking advantage of the rapidly falling costs of AI automation. However bad you think disinformation is now, expect it to get worse in the future.
Late last year we wrote about how LA Times billionaire owner Patrick Soon-Shiong confidently announced that he was going to use AI to display “artificial intelligence-generated ratings” of news content, while also providing “AI-generated lists of alternative political views on that issue” under each article. After he got done firing a lot of longstanding LA Times human staffers, of course.
As we noted at the time, Soon-Shiong’s gambit was a silly mess for many reasons.
One, a BBC study recently found that LLMs can’t even generate basic news story synopses with any degree of reliability. Two, Soon-Shiong is pushing the feature without review from humans (whom he fired). Three, the tool will inevitably reflect the biases of ownership, which in this case is a Trump-supporting billionaire keen to assign “both sides!” false equivalency on issues like clean air and basic human rights.
The Times’ new “Insights” tool went live this week with a public letter from Soon-Shiong about its purported purpose:
“We are also releasing Insights, an AI-driven feature that will appear on some Voices content. The purpose of Insights is to offer readers an instantly accessible way to see a wide range of different AI-enabled perspectives alongside the positions presented in the article. I believe providing more varied viewpoints supports our journalistic mission and will help readers navigate the issues facing this nation.”
Unsurprisingly, it didn’t take long for the whole experiment to backfire.
After the LA Times published a column by Gustavo Arellano suggesting that Anaheim, California should not forget its historic ties to the KKK and white supremacy, the LA Times’ shiny new AI system tried to “well, akshually” the story:
Earlier today the LA Times had AI-generated counterpoints to a column from @gustavoarellano.bsky.social. His piece argued that Anaheim, the city he grew up in, should not forget its KKK past. The AI “well, actually”-ed the KKK. It has since been taken off the piece. www.latimes.com/california/s…
Yeah, whoops a daisy. That’s since been deleted by human editors.
If you’re new to American journalism, the U.S. press already broadly suffers from what NYU journalism professor Jay Rosen calls the “view from nowhere,” or the false belief that every issue has multiple, conflicting sides that must all be treated equally. It’s driven by a lust to maximize ad engagement and a desire not to offend readers (or sources, or event sponsors) by stating that some things are just inherently false.
If you’re too pointed about the truth, you might lose a big chunk of ad-clicking readership. If you’re too pointed about the truth, you might alienate potential sources. If you’re too pointed about the truth, you might upset deep-pocketed companies, event sponsors, advertisers, or those in power. So what you often get is a sort of feckless mush that looks like journalism, but is increasingly hollow.
As a result, radical right wing authoritarianism has been normalized. Pollution-caused climate destabilization has been downplayed. Corporations and CEOs are allowed to lie without being challenged by experts. Overt racism is soft-pedaled. You can see examples of this particular disease everywhere you look in modern U.S. journalism (including Soon-Shiong’s recent decision to stop endorsing Presidential candidates while America stared down the barrel of destructive authoritarianism).
This sort of feckless truth aversion is what’s destroying consumer trust in journalism, but the kind of engagement-chasing affluent men in positions of power at places like the LA Times, Semafor, or Politico can’t (or won’t) see this reality because it runs in stark contrast to their financial interests.
Letting journalism consolidate in the hands of big companies and a handful of rich (usually white) men results in a widespread, center-right, corporatist bias that media owners desperately want to pretend is the gold standard for objectivity. Countless human editors at major U.S. media companies are routinely oblivious to this reality (or hired specifically for their willingness to ignore it).
Since AI is mostly a half-baked simulacrum of knowledge, it can’t “understand” much of anything, including modern media bias. There’s no possible way large language models could analyze the endless potential ideological or financial conflicts of interest running through any given article and just magically fix it with a wave of a wand. The entire premise is delusional.
The LA Times’ “Insights” automation is also a glorified sales pitch for Soon-Shiong’s software, since he’s a heavy investor in medical sector automation. So of course he’s personally, deeply invested in the idea that these technologies are far more competent and efficient than they actually are. That’s the sales pitch.
“Responding to the human writers, the AI tool argued not only that AI “democratizes historical storytelling”, but also that “technological advancements can coexist with safeguards” and that “regulation risks stifling innovation.”
The pretense that these LLMs won’t reflect the biases of ownership is delusional. Even if they worked properly and weren’t a giant energy suck, they’re not being implemented to mandate genuine objectivity, they’re being implemented to validate affluent male ownership’s perception of genuine objectivity. That’s inevitably going to result in even more center-right, pro corporate, truth-averse pseudo-journalism.
There are entire companies that are dedicated to this idea of analyzing news websites and determining reliability and trustworthiness, and most of them (like NewsGuard) fail constantly, routinely labeling propaganda outlets like Fox News as credible. And they fail, in part, because being truly honest about any of this (especially the increasingly radical nature of the U.S. right wing) isn’t good for business.
We’re seeing in real time how rich, right wing men are buying up newsrooms and hollowing them out like pumpkins, replacing real journalism with a feckless mush of ad-engagement chasing infotainment and gossip simulacrum peppered with right wing propaganda. It’s not at all subtle, and was more apparent than ever during the last election cycle.
The idea that half-cooked, fabulism-prone large language models will somehow make this better is laughable, but it’s very obvious that LA Times ownership, financial conflicts of interest and abundant personal biases in hand, is very excited to pretend otherwise.
You might recall Buzzfeed CEO Jonah Peretti as the guy who gutted Buzzfeed’s talented news division and fired oodles of human beings back in 2023. As part of that transition, Peretti heavily embraced half-cooked ‘AI’ technology in the form of generative and interactive AI chatbots he insisted would dramatically boost the site’s traffic and audience.
That didn’t do a whole lot to improve Buzzfeed’s fortunes, so now Peretti is back, with another new “pivot to video AI” that apparently involves talking a lot of shit about AI. In a new blog post, Peretti laments the way that AI has been clumsily rushed to market in a way that devalues human agency and labor, hoping you’ll apparently forget he was involved in using AI to devalue human agency and labor:
“Most anxieties about the future are really about the present. We worry about a future where AI takes away our human agency, devalues our labor, and creates social discord. But that world is already here and our meaning, purpose, and agency has already been undermined by Artificial Intelligence technologies.”
Peretti complains about something he calls SNARF, an acronym for “stakes, novelty, anger, retention, fear,” which he says companies like Meta and TikTok have engaged in to grab consumer attention. Peretti’s solution to all of this? To build a new social media platform called BF Island that he says will “allow users to use AI to create and share content around their interests.”
Peretti claims he’s going to be creating a “totally different kind of business, where it’s primarily a tech company and a new kind of social media company,” but it’s not entirely clear how Peretti will avoid the SNARF problem he wants you to forget he played a starring role in.
“If a lot of people click on it, it must be good” is the primary way to make money in the modern ad ecosystem, something that often directly conflicts with pesky stuff like ethics, quality, and the public interest. Peretti claims BF Island will be “built specifically to spread joy and enable playful creative expression.” Outlets like Axios can’t be bothered to mention Peretti’s role in precisely the sort of behaviors he complains about in his blog post.
Maybe Peretti can build something new and useful and interesting. But so far, AI has had a disastrous introduction to journalism and media, resulting in rampant layoffs, oodles of plagiarism, false and misleading headlines, and a whole bunch of sloppily automated news aggregation systems that are redirecting dwindling ad revenues away from real journalists and real journalism.
It hasn’t had much better of an impact on social media, given Facebook, Google, and TikTok are increasingly full of badly automated slop that’s making the internet less useful, not more.
That’s less the fault of the undercooked technology than of the sort of fail-upward brunchlord executives in tech and media who genuinely appear to have absolutely no idea what they’re doing. The kind of folks, all out of new ideas, who see automation primarily as a way to dismantle labor, cut corners, save money, and create a sort of low-effort automated ouroboros that shits ad engagement cash.
Peretti very much was one of those guys, appears to still be one of those guys, yet simultaneously now wants to capitalize on the public annoyance he himself helped cultivate while very likely changing very little about what actually brought us to this point.
Automation can be helpful, yes. But the story told to date by large tech companies like OpenAI has been that these new large language models would be utterly transformative, utterly world-changing, and quickly approaching some kind of sentient superintelligence. Yet time and time again, data seems to show they’re failing to accomplish even the bare basics.
Case in point: Last December Apple faced widespread criticism after its Apple Intelligence “AI” feature was found to be sending inaccurate news synopses to phone owners. And not just minor errors: At one point Apple’s “AI” falsely told millions of people that Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself.
Now the BBC has done a follow-up study of the top AI assistants (ChatGPT, Perplexity, Microsoft Copilot and Google Gemini) and found that they routinely can’t be relied on to even communicate basic news synopses.
The BBC gave all four major assistants access to the BBC website, then asked them relatively basic questions based on that material. The team found ‘significant issues’ with just over half of the answers generated by the assistants, and clear factual errors in around a fifth of their answers. One in ten responses either altered real quotations or made them up completely.
Microsoft’s Copilot and Google’s Gemini had more significant problems than OpenAI’s ChatGPT and Perplexity, but they all “struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context,” the BBC researchers found.
BBC’s Deborah Turness had this to say:
“This new phenomenon of distortion – an unwelcome sibling to disinformation – threatens to undermine people’s ability to trust any information whatsoever. So I’ll end with a question: how can we work urgently together to ensure that this nascent technology is designed to help people find trusted information, rather than add to the chaos and confusion?”
Large language models are useful and will improve. But this is not what we were sold. These energy-sucking products are dangerously undercooked, and they shouldn’t have been rushed into journalism, much less mental health care support systems or automated Medicare rejection systems. We once again prioritized making money over ethics and common sense.
The undercooked tech is one thing, but the kind of folks in charge of dictating its implementation and trajectory without any sort of ethical guard rails are something else entirely.
As a result, “AI’s” rushed deployment in journalism has been a keystone-cops-esque mess. The fail-upward brunchlords in charge of most media companies were so excited to get to work undermining unionized workers, cutting corners, and obtaining funding that they immediately implemented the technology without making sure it actually works. The result: plagiarism, bullshit, lower quality product, and chaos.
Automation is obviously useful and large language models have great potential. But the rushed implementation of undercooked and overhyped technology by a rotating crop of people with hugely questionable judgement is creating almost as many problems as it purports to fix, and when the bubble pops — and it is going to pop — the scurrying to defend shaky executive leadership will be a real treat.
While “AI” (large language models) certainly could help journalism, the fail-upward brunchlords in charge of most modern media outlets instead see the technology as a way to cut corners, undermine labor, and badly automate low-quality, ultra-low effort, SEO-chasing clickbait.
As a result we’ve seen an endless number of scandals where companies use LLMs to create entirely fake journalists and hollow journalism, usually without informing their staff or their readership. When they’re caught (as we saw with CNET, Gannett, or Sports Illustrated), they usually pretend to be concerned, throw their AI partner under the bus, then get right back to doing it.
Big tech companies, obsessed with convincing Wall Street they’re building world-changing innovation and real sentient artificial intelligence (as opposed to unreliable, error-prone, energy-sucking, bullshit machines), routinely fall into the same trap. They’re so obsessed with making money, they’re routinely not bothering to make sure the tech in question works.
“This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.”
“On Thursday, Apple deployed a beta software update to developers that disabled the AI feature for news and entertainment headlines, which it plans to later roll out to all users while it works to improve the AI feature. The company plans to re-enable the feature in a future update.
As part of the update, the company said the Apple Intelligence summaries, which users must opt into, will more explicitly emphasize that the information has been produced by AI, signaling that it may sometimes produce inaccurate results.”
There’s a reason these companies haven’t been quite as keen to fully embrace AI across the board (for example, Google hasn’t implemented Gemini into hardware voice assistants): they know there’s potential for absolute havoc and legal liability. But they had no problem rushing to implement AI in journalism to help with ad engagement, making it pretty clear how much these companies tend to value actual journalism in the first place.
We’ve seen the same nonsense over at Microsoft, which was so keen to leverage automation to lower labor costs and glom onto ad engagement that they rushed to implement AI across the entirety of their MSN website, never really showing much concern for the fact the automation routinely produced false garbage. Google’s search automation efforts have been just as sloppy and reckless.
Large language models and automation certainly have benefits, and certainly aren’t going anywhere. But there’s zero real indication most tech or media companies have any interest in leveraging undercooked early iterations responsibly. After all, there’s money to be made. Which is, not coincidentally, precisely how many of these companies treated the dangerous privacy implications of industrialized commercial surveillance for the better part of the last two decades.
When the NY Times declared in September that “Mark Zuckerberg is Done With Politics,” it was obvious this framing was utter nonsense. It was quite clear that Zuckerberg was in the process of sucking up to Republicans after Republican leaders spent the past decade using him as a punching bag on which they could blame all sorts of things (mostly unfairly).
Now, with Trump heading back to the White House and Republicans controlling Congress, Zuck’s desperate attempts to appease the GOP have reached new heights of absurdity. The threat from Trump that he wanted Zuckerberg to be jailed over a made-up myth that Zuckerberg helped get Biden elected only seemed to cement that the non-stop scapegoating of Zuck by the GOP had gotten to him.
Since the election, Zuckerberg has done everything he can possibly think of to kiss the Trump ring. He even flew all the way from his compound in Hawaii to have dinner at Mar-A-Lago with Trump, before turning around and flying right back to Hawaii. In the last few days, he also had GOP-whisperer Joel Kaplan replace Nick Clegg as the company’s head of global policy. On Monday it was announced that Zuckerberg had also appointed Dana White to Meta’s board. White is the CEO of UFC, but also (perhaps more importantly) a close friend of Trump’s.
That campaign culminated this week in a video announcement from Zuckerberg of sweeping changes to how Meta handles moderation, including the end of its third-party fact-checking program. Some of the negative reactions to the video are a bit crazy, as I doubt the changes are going to have that big of an impact. Some of them may even be sensible. But let’s break them down into three categories: the good, the bad, and the stupid.
The Good
Zuckerberg is exactly right that Meta has been really bad at content moderation, despite having the largest content moderation team out there. In just the last few months, we’ve talked about multiple stories showcasing really, really terrible content moderation systems at work on various Meta properties. There was the story of Threads banning anyone who mentioned Hitler, even to criticize him. Or banning anyone for using the word “cracker” as a potential slur.
It was all a great demonstration for me of Masnick’s Impossibility Theorem of dealing with content moderation at scale, and how mistakes are inevitable. I know that people within Meta are aware of my impossibility theorem, and have talked about it a fair bit. So, some of this appears to be them recognizing that it’s a good time to recalibrate how they handle such things:
In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when they do.
Leaving aside (for now) the use of the word “censored,” much of this isn’t wrong. For years it felt that Meta was easily pushed around on these issues and did a shit job of explaining why it did things, instead responding reactively to the controversy of the day.
And, in doing so, it’s no surprise that as the complexity of its setup got worse and worse, its systems kept banning people for very stupid reasons.
It actually is a good idea to seek to fix that, and especially if part of the plan is to be more cautious in issuing bans, it seems somewhat reasonable. As Zuckerberg announced in the video:
We used to have filters that scanned for any policy violation. Now, we’re going to focus those filters on tackling illegal and high-severity violations, and for lower-severity violations, we’re going to rely on someone reporting an issue before we take action. The problem is that the filters make mistakes, and they take down a lot of content that they shouldn’t. So, by dialing them back, we’re going to dramatically reduce the amount of censorship on our platforms. We’re also going to tune our content filters to require much higher confidence before taking down content. The reality is that this is a trade-off. It means we’re going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts and accounts that we accidentally take down.
Zuckerberg’s announcement is a tacit admission that Meta’s much-hyped AI is simply not up to the task of nuanced content moderation at scale. But somehow that angle is getting lost amidst the political posturing.
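Here is a toy illustration of the trade-off Zuckerberg describes; the posts and scores are invented, but the shape of the problem is the same: raise the confidence a classifier needs before acting and you wrongly remove fewer innocent posts, at the cost of missing more genuinely violating ones.

```python
# Toy example of the confidence-threshold trade-off in automated moderation.
# Scores and posts are invented for illustration only.
posts = [
    {"score": 0.95, "violating": True},
    {"score": 0.70, "violating": True},    # borderline real violation
    {"score": 0.72, "violating": False},   # borderline false positive
    {"score": 0.10, "violating": False},
]

def moderate(posts, threshold):
    removed = [p for p in posts if p["score"] >= threshold]
    wrongly_removed = sum(1 for p in removed if not p["violating"])
    missed = sum(1 for p in posts if p["violating"] and p["score"] < threshold)
    return len(removed), wrongly_removed, missed

for threshold in (0.6, 0.9):
    removed, wrong, missed = moderate(posts, threshold)
    print(f"threshold {threshold}: {removed} removed, {wrong} wrongly removed, {missed} violations missed")
```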
Some of the other policy changes also don’t seem all that bad. We’ve been mocking Meta for its “we’re downplaying political content” stance from the last few years as being just inherently stupid, so it’s nice in some ways to see them backing off of that (though we’ll discuss the timing and framing of this decision in later sections of this post):
We’re continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
Finally, most of the attention people have given to the announcement has focused on the plan to end the fact-checking program, with a lot of people freaking out about it. I even had someone tell me on Bluesky that Meta ending its fact-checking program was an “existential threat” to truth. And that’s nonsense. The reality is that fact-checking has always been a weak and ineffective band-aid to larger issues. We called this out in the wake of the 2016 election.
This isn’t to say that fact-checking is useless. It’s helpful in a limited set of circumstances, but too many people (often in the media) put way too much weight on it. Reality is often messy, and the very setup of “fact checking” seems to presume there are “yes/no” answers to questions that require a lot more nuance and detail. Just as an example of this, during the run-up to the election, multiple fact checkers dinged Democrats for calling Project 2025 “Trump’s plan”, because Trump denied it and said he had nothing to do with it.
But, of course, since the election, Trump has hired on a bunch of the Project 2025 team, and they seem poised to enact much of the plan. Many things are complex. Many misleading statements start with a grain of truth and then build a tower of bullshit around it. Reality is not about “this is true” or “this is false,” but about understanding the degrees to which “this is accurate, but doesn’t cover all of the issues” or deal with the overall reality.
So, Zuck’s plan to kill the fact-checking effort isn’t really all that bad. I think too many people were too focused on it in the first place, despite how little impact it seemed to actually have. The people who wanted to believe false things weren’t being convinced by a fact check (and, indeed, started to falsely claim that fact checkers themselves were “biased.”)
Indeed, I’ve heard from folks at Meta that Zuck has wanted to kill the fact-checking program for a while. This just seemed like the opportune time to rip off the band-aid such that it also gains a little political capital with the incoming GOP team.
On top of that, adding in a feature like Community Notes (née Birdwatch from Twitter) is also not a bad idea. It’s a useful feature for what it does, but it’s never meant to be (nor could it ever be) a full replacement for other kinds of trust & safety efforts.
The Bad
So, if a lot of the functional policy changes here are actually more reasonable, what’s so bad about this? Well, first off, the framing of it all. Zuckerberg is trying to get away with the Elon Musk playbook of pretending this is all about free speech. Contrary to Zuckerberg’s claims, Facebook has never really been about free speech, and nothing announced on Tuesday really does much towards aiding in free speech.
I guess some people forget this, but in the earlier days, Facebook was way more aggressive than sites like Twitter in terms of what it would not allow. It very famously had a no nudity policy, which created a huge protest when breastfeeding images were removed. The idea that Facebook was ever designed to be a “free speech” platform is nonsense.
Indeed, if anything, it’s an admission of Meta’s own self-censorship. After all, the entire fact-checking program was an expression of Meta’s own position on things. It was “more speech.” Literally all fact-checking is doing is adding context and additional information, not removing content. By no stretch of the imagination is fact-checking “censorship.”
Of course, bad faith actors, particularly on the right, have long tried to paint fact-checking as “censorship.” But this talking point, which we’ve debunked before, is utter nonsense. Fact-checking is the epitome of “more speech”— exactly what the marketplace of ideas demands. By caving to those who want to silence fact-checkers, Meta is revealing how hollow its free speech rhetoric really is.
Also bad is Zuckerberg’s misleading use of the word “censorship” to describe content moderation policies. We’ve gone over this many, many times, but using censorship as a description for private property owners enforcing their own rules completely devalues the actual issue with censorship, in which it is the government suppressing speech. Every private property owner has rules for how you can and cannot interact in their space. We don’t call it “censorship” when you get tossed out of a bar for breaking their rules, nor should it be called censorship when a private company chooses to block or ban your content for violating its rules (even if you argue the rules are bad or were improperly enforced.)
The Stupid
The timing of all of this is obviously political. It is very clearly Zuckerberg caving to more threats from Republicans, something he’s been doing a lot of in the last few months, while insisting he was done caving to political pressure.
I mean, even Donald Trump is saying that Zuckerberg is doing this because of the threats that Trump and friends have leveled in his direction:
Q: Do you think Zuckerberg is responding to the threats you’ve made to him in the past?
TRUMP: Probably. Yeah. Probably.
I raise this mainly to point out the ongoing hypocrisy of all of this. For years we’ve been told that the Biden campaign (pre-inauguration in 2020 and 2021) engaged in unconstitutional coercion to force social media platforms to remove content. And here we have the exact same thing, except that it’s much more egregious and Trump is even taking credit for it… and you won’t hear a damn peep from anyone who has spent the last four years screaming about the “censorship industrial complex” pushing social media to make changes to moderation practices in their favor.
Turns out none of those people really meant it. I know, not a surprise to regular readers here, but it should be called out.
Also incredibly stupid is this, quoted straight from Zuck’s Threads thread about all this:
Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.
There’s a pretty big assumption in there which is both false and stupid: that people who live in California are inherently biased, while people who live in Texas are not. People who live in both places may, in fact, be biased, though often not in the ways people believe. As a few people have pointed out, more people in Texas voted for Kamala Harris (4.84 million) than did so in New York (4.62 million). Similarly, almost as many people voted for Donald Trump in California (6.08 million) as did so in Texas (6.39 million).
There are people with all different political views all over the country. The idea that everyone in one area believes one thing politically, or that you’ll get “less bias” in Texas than in California, is beyond stupid. All it really does is reinforce misguided stereotypes.
The whole statement is clearly for political show.
It also sucks for Meta employees who work in trust & safety, who want access to certain forms of healthcare or want net neutrality, or other policies that are super popular among voters across the political spectrum, but which Texas has decided are inherently not allowed.
Finally, there’s this stupid line in the announcement from Joel Kaplan:
We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms.
I’m sure that sounded good to whoever wrote it, but it makes no sense at all. First off, thanks to the Speech and Debate Clause, literally anything is legal to say on the floor of Congress. It’s like the one spot in the world where there are no rules at all over what can be said. Why include that? Things could literally be said on the floor of Congress that would violate the law on Meta platforms.
Also, TV stations literally have restrictions known as “standards and practices” that are way, way, way more restrictive than any set of social media content moderation rules. Neither of these are relevant metrics to compare to social media. What jackass thought that using examples of (1) the least restricted place for speech and (2) a way more restrictive place for speech made this a reasonable argument to make here?
In the end, the reality here is that nothing announced this week will really change all that much for most users. Most users don’t run into content moderation all that often. Fact-checking happens but isn’t all that prominent. But all of this is a big signal that Zuckerberg, for all his talk of being “done with politics” and no longer giving in to political pressure on moderation, is very engaged in politics and a complete spineless pushover for modern Trumpist politicians.
While “AI” (large language models) certainly could help journalism, the fail-upward brunchlords in charge of most modern media outlets instead see the technology as a way to cut corners, undermine labor, badly automate low-quality, ultra-low effort, SEO-chasing clickbait, and rush undercooked solutions to nonexistent problems to market under the pretense of progress.
For example, The Washington Post has found yet another innovative way to leverage large language models (LLMs) to somehow make its product worse. Over at Bluesky, editor Tom Scocca noticed that the news outlet got rid of its traditional search tech, and appears to have replaced it with a new AI assistant that may or may not provide you with useful or relevant information:
So now (as of this writing) if you try to search for a subject, an LLM’s sloppy interpretation of the subject is the first thing you see, followed by a list of stories you can’t rank by date.
If you ask the AI assistant to sort the subject matter articles by date it just… fails to do that. Which seems like a fairly rudimentary thing a next-generation “AI assistant” should be able to do.
Again, the environmental and financial sustainability of “AI” aside (a pretty big aside), there are numerous areas where automation could be helpful to journalism, whether it’s editing, digging through court documents, writing structure advice, hunting down patterns missed by human brains, transcription, or searching vast public record archives.
But the brunchlords in charge of these outlets (in the Washington Post’s case a former Rupert Murdoch ally caught up in a phone hacking scandal who failed upward into a position of prominence) see AI as a magic way to cut corners, reducing the volume of human labor required to field a useful and insightful product. Many also genuinely (and incorrectly) seem to think AI has deep awareness akin to sentience because they’ve bought into the hype being peddled by snake oil salesmen.
As a result we’ve seen an endless number of scandals where companies use LLMs to create entirely fake journalists and hollow journalism, usually without informing their staff or their readership. When they’re caught (as we saw with CNET, Gannett, or Sports Illustrated), they usually pretend to be concerned, throw their AI partner under the bus, then get right back to doing the same thing.
Modern corporations and some partisans also have a vested interest in undermining not only informed consensus, but our collective history. You now routinely see entire debates and stories simply disappear from the internet in the wink of an eye thanks to executives that either don’t value history, or realize that an informed understanding of it might make you actually learn something from repeated experience.
Not to say that this was the Washington Post’s thinking in this case, but it’s certainly not something that’s absent from the logic of the kind of folks falling upward into positions of influence across sagging American establishment media.
Where automation is probably most helpful is in areas that often aren’t going to generate a lot of headlines. Such as in complicated scientific data analysis. Or in this new study in the Canadian Medical Association Journal, which found that the use of automation led to a 26 percent drop in the number of unexpected deaths among hospitalized patients.
Researchers looked at data tethered to 13,000 admissions to St. Michael’s general internal medicine ward — an 84-bed unit that cares for many of the facility’s most complicated patients. Some of those patients were monitored by the hospital’s in-house automation system, Chartwatch, which consistently tracks 100 different key health metrics to watch for potential complications.
The system then used that data to predict when patients might take a turn for the worse, helping health care folks get out ahead of potential problems. Patients tethered to the system were substantially less likely to die. That said, researchers were quick to point out the study was limited (it was conducted during peak COVID in a unique hospital during severe healthcare staffing shortages) and more research is needed:
“Our study was not a randomized controlled trial across multiple hospitals. It was within one organization, within one unit,” [Dr. Amol] Verma said. “So before we say that this tool can be used widely everywhere, I think we do need to do research on its use in multiple contexts.”
In this case, folks carefully studied the potential of automation, took years to develop a useful tool, and are taking their time understanding the impact before expanding its use. AI’s greatest potential lies in seeing real-world patterns beyond the limited attention span of humans, and supplementing assistance, whether that’s predictive analytics or easing administrative burdens.
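For a rough sense of what this category of tool does, here is a purely illustrative sketch of a threshold-based early-warning monitor; the metric names, thresholds, and scoring are invented for illustration and are not Chartwatch’s actual model.

```python
# Invented, simplified early-warning score of the general kind described above.
# Real systems like Chartwatch use far richer data and clinically validated models.
from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int     # beats per minute
    resp_rate: int      # breaths per minute
    systolic_bp: int    # mmHg
    spo2: int           # % oxygen saturation

def risk_score(v: Vitals) -> int:
    """Crude additive score: higher means the care team should take a closer look."""
    score = 0
    if v.heart_rate > 120 or v.heart_rate < 45:
        score += 2
    if v.resp_rate > 28:
        score += 2
    if v.systolic_bp < 90:
        score += 2
    if v.spo2 < 90:
        score += 3
    return score

reading = Vitals(heart_rate=128, resp_rate=30, systolic_bp=88, spo2=91)
if risk_score(reading) >= 4:
    print("Flag for clinical review")  # a human clinician still makes the call
```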
The problem, again, is that folks primarily looking at the technology as a path to vast riches (aka a majority of people) are rushing untested and uncooked technology into adoption, or they’re viewing it not as a way to assist and supplement human labor, but as a lazy replacement.
But as we’ve seen already across countless fronts, simply layering automation on top of already broken sectors is a recipe for disaster. Such as over at health insurance companies like United Health, where the company’s sloppy “AI” was found to have a whopping 90 percent error rate when automatically determining when vulnerable elderly patients should be kicked out of rehabilitation programs.
It would be nice if we had patient, intelligent, competent regulators and politicians capable of drafting quality regulatory guardrails that could protect consumers and patients from the sort of systemic, automated negligence that’s clearly coming down the road; but courtesy of recent Supreme Court rulings, lobbying, corruption, and a whole lot of greed, we seem deadly intent on doing nothing of the sort.
You might recall how Gannett, which owns USA Today (and probably the half-assed remains of whatever’s left of your town’s local newspaper), spent much of last year mired in a major “AI” scandal. Company executives apparently thought it would be a good idea to use half-cooked automation to create fake journalists and lazy clickbait without telling employees this was happening.
It… didn’t go well. Readers were quick to point out that as with other efforts of this type at CNET and Microsoft, the resulting “journalism” was badly done, prone to plagiarism, and full of errors. But company executives last December were also accused of using AI to create fake reviews favorable to the company’s advertising partners, which is kind of a thing now as what’s left of U.S. journalism ethics disintegrates.
Now Gannett has apparently announced that the Wirecutter-esque tech review website at the heart of the scandal, Reviewed, will be shutting down, resulting in an untold number of layoffs:
“After careful consideration and evaluation of our Reviewed business, we have decided to close the operation. We extend our sincere gratitude to our employees who have provided consumers with trusted product reviews,” Reviewed spokesperson Lark-Marie Antón told The Verge in an email.
Yeah, whoops a daisy.
For what it’s worth, Gannett CEO Michael E. Reed, where this particular buck stops, made nearly $4 million in compensation last year.
The third-party company Gannett used to create shitty fake clickbait “journalism,” AdVon Commerce, has been at the heart of other similar scandals at places like Sports Illustrated. Their penalty so far for a complete lack of ethics has included cozy new ad partnerships with giants like Google. The Verge had a really good profile of AdVon last July that’s well worth a read.
The fail-upward brunchlords who have taken over what’s left of U.S. journalism don’t care about journalism. Or product quality, audience, or workers. They care about making temporary, badly-automated, low-quality clickbait engagement machines that effectively shit money. They don’t see AI as a way to improve productivity or reduce administrative burdens, but as a way to lazily cut corners and undermine human labor.
These folks are not only filling the internet with untrustworthy garbage, they’re misdirecting ad revenues away from outlets doing actual journalism and quality analysis and writing. That’s driving journalists and editors away from large, increasingly-mismanaged companies and toward direct-to-consumer newsletters and smaller, independently owned outlets that may not be able to compete at scale with the growing number of ethics-optional AI bullshit machines being born on a daily basis.
For many, many years now we’ve noted how internet connectivity (and greed) have changed the consumer equation, sometimes for the worse, resulting in people no longer truly owning the things they buy. Expensive gadgets can become less useful (or bricked completely) in an instant due to an inconveniently timed merger, company closure, greed, or just rank executive incompetence.
Case in point: owners of the $1700 Snoo “smart” baby bassinet (a crib with speakers that can rock and play soothing sounds for your baby) weren’t keen to find out that over the summer the company paywalled many of the Snoo’s “premium features” behind a $20 monthly subscription fee tethered to the device’s smartphone app.
Customers who bought a Snoo from an “authorized” outlet before July 15, 2024, were able to get the premium features free for nine months. But if you bought the bassinet used, your only option to get the device’s full array of features is to shell out an additional $20 each month — on top of the $600 to $1,000 the devices sell for used.
“Just saying. This is bullshit. The current owners and users of Snoo should have been grandfathered in and continue to have access to basic feature like motion lock (the one I use most) and future new accounts should get a clear notification that without paying $20/mo they’re just buying a $2,000 basket.
Time to review bomb their app.”
As a result, the company’s app has been receiving a beating on app stores, with users noting that not only are the changes terrible for customers, they weren’t communicated clearly. The Snoo parent company Happiest Baby Inc. is also taking a steady beating over at the Better Business Bureau.
Companies think they’re cleverly boosting revenues by paywalling features or penalizing used owners, but they’re just taking an axe to the foundations of previously popular brands, especially if they’re too greedy with monetization or don’t explain the changes with any coherence.
Of course it’s a problem that’s soaring among small and big companies alike; Amazon is also taking heat this week for removing a key feature of its Echo Show 8 — the ability to display digital photos — and replacing them with ads. “Smart” sous-vide machine maker Mellow has also been taking a beating over the last month for suddenly making its device useless unless users downloaded an app and paid a monthly fee.
I suppose executives making these kinds of decisions think they’re cleverly monetizing existing sales in creative new ways, but they’re really just burning consumer trust to the ground. And it’s not clear how many stories like these you’ll have to see before execs figure out it’s a pointlessly destructive affair.