We’ve noted repeatedly how early attempts to integrate “AI” into journalism have proven to be a comical mess, resulting in no shortage of shoddy product, dangerous falsehoods, and plagiarism. It’s thanks in large part to the incompetent executives at many large media companies, who see AI primarily as a way to cut corners, assault unionized labor, and automate lazy and mindless ad engagement clickbait.
The folks rushing to implement half-cooked AI at places like Red Ventures (CNET) and G/O Media (Gizmodo) aren’t competent managers to begin with. Now they’re integrating “AI” with zero interest in whether it actually works or if it undermines product quality. They’re also often doing it without telling staffers what’s happening, revealing a widespread disdain for their own employees.
After CNET repeatedly published automated dreck, Wikipedia has taken the step of no longer ranking the formerly widely respected news site as a “generally reliable” news source. As Futurism notes, the website’s crap automated content crafted by fake automated journalists increasingly doesn’t pass muster:
“Let’s take a step back and consider what we’ve witnessed here,” a Wikipedia editor who goes by the name “bloodofox” chimed in. “CNET generated a bunch of content with AI, listed some of it as written by people (!), claimed it was all edited and vetted by people, and then, after getting caught, issued some ‘corrections’ followed by attacks on the journalists that reported on it,” they added, alluding to the time that CNET’s then-Editor-in-Chief Connie Guglielmo — who now serves as Red Ventures’ “Senior Vice President of AI Edit Strategy” — disparagingly referred to journalists who covered CNET’s AI debacle as “some writers… I won’t call them reporters.”
Of course, CNET was already having credibility problems long before AI came on the scene. The website, like many “tech news” websites, increasingly acts more as an extension of gadget marketing departments than an adult news venture. CNET’s editorial standards have long been murky, as exemplified by that whole CES Dish Network award scandal roughly a decade ago.
Things got worse once CNET was purchased by Red Ventures, which has been happy to soften the outlet’s coverage to please advertisers, and, like most modern media companies, sees journalism not as a truth-telling exercise, but as a purely extractive path toward chasing engagement at impossible scale.
That sentiment is everywhere you currently look, as a rotating crop of trust fund failsons drive what’s left of U.S. journalism into the soil. These folks see journalism as an irrelevant venture, and they’re keen to turn it into a sort of automated journalism simulacrum; stuff that looks somewhat like useful reporting, but is predominantly an unholy fusion of facts-optional marketing and engagement bait.
It’s great to see the folks at Wikipedia take note and act accordingly.
Earlier this year, we wrote about outspoken financier Bill Ackman’s threat to sue Business Insider over articles reporting that Ackman’s wife, Neri Oxman, had plagiarized parts of her dissertation years ago. The timeline and context of what happened here are important, because Ackman continues to ignore them.
Ackman got upset about activity by students at his alma mater, Harvard, in response to the Hamas attacks on Israel on October 7th. He then helped orchestrate a campaign to oust Harvard’s new President, Claudine Gay, over what he viewed as her insufficient response to antisemitism on campus. While those initial efforts went nowhere, the situation gained more attention when some nonsense peddlers of the grifter class found examples of what they called plagiarism, but which many academics felt were inadvertent errors: weak paraphrasing or failure to properly cite sources.
For example, one of the people Gay was accused of plagiarizing came to her defense, noting that while it may have been technically improper, it was over minor bits and not the heart of what she was writing:
The plagiarism in question here did not take an idea of any significance from my work. It didn’t steal my thunder. It didn’t stop me from publishing. And the bit she used from us was not in any way a major component of what made her research important or valuable.
So how serious a violation of academic integrity was this?
From my perspective, what she did was trivial—wholly inconsequential. That’s the reason I’ve so actively tried to defend her.
This effort continued for some time, with Ackman again being a leading voice, perhaps recognizing that what he failed to accomplish by complaining about her handling of antisemitism, he could eventually accomplish through piling on and promoting the claims of plagiarism. And it worked. Soon after, Gay lost her job as President of Harvard.
Around that time, Business Insider published its first piece about Neri Oxman, Ackman’s wife, noting that her dissertation at MIT was also found to contain some plagiarized passages. The article was pretty explicit that it was not accusing Oxman of inherently unethical behavior, but rather noting the similarities between what she had done and what Gay had done:
Like Oxman, Gay was found to have lifted passages from other academics’ work without using quotation marks while citing the authors.
Gay’s plagiarism was seen by some academics, including many of those she plagiarized, as relatively inconsequential.
George Reid Andrews, a history professor at the University of Pittsburgh and one of the people Gay plagiarized, told the New York Post that what Gay did “happens fairly often in academic writing and for me does not rise to the level of plagiarism.”
That is, the entire point of the article was to highlight the parallel situations between Gay and Oxman. It was to emphasize that inconsequential copying or inadvertent failure to properly cite something minor in an academic paper happens all the time.
The point was not that Oxman was terrible. The point was to highlight Ackman’s double standard. Indeed, Business Insider wrote an entire article comparing the accusations against both Gay and Oxman while highlighting Ackman’s noticeably different approach to each.
“Part of what makes her human is that she makes mistakes, owns them, and apologizes when appropriate,” he wrote in a post on X following Business Insider’s report on Oxman’s plagiarism.
That’s a starkly different approach from the one he took toward Gay after she stepped down as president earlier this week. At the time, Ackman said she should be fired from Harvard’s faculty entirely because of what he called “serious plagiarism issues.”
“Students are forced to withdraw for much less,” he posted on X. “Rewarding her with a highly paid faculty position sets a very bad precedent for academic integrity at Harvard.”
However, the instances of Oxman’s and Gay’s plagiarism have more similarities than differences, according to experts and an internal analysis.
At no point that I’ve seen in this ongoing ordeal has Ackman acknowledged that. Rather, he has gone on rant after rant after rant, combined with threats to sue people for their free speech (while pretending to be a free speech absolutist), pretending that the point of the Business Insider articles was to smear Oxman to punish Ackman for his support of Israel.
A few weeks ago, Ackman promised to sue and has hired Libby Locke of the firm Clare Locke to issue a massive (and massively ridiculous) threat letter to Axel Springer/Business Insider, demanding corrections and retractions of various articles. It’s a Gish gallop of a threat letter. Responding to every single bit of nonsense in it is beyond the scope of my time, and even so, this article is going to be ridiculously long.
Just as an aside, no one who hires Clare Locke is a “free speech absolutist.” Clare Locke (and especially partner Libby Locke) is immensely proud of its ability to threaten media outlets into killing stories (and it’s not as effective as its media portrayal would have you believe). That’s the opposite of free speech absolutism. They are speech suppressors. Their website kinda brags about this.
Sending a 77-page “demand letter” is ridiculous and suggests that you don’t have a clear ask or a clear explanation. Ackman, over on ExTwitter, admits that the letter was written on purpose to be turned into a complaint:
It will not go unnoticed that the demand letter reads remarkably similarly to the pleadings of a lawsuit. If needed, we can convert the demand letter into a complaint and file a lawsuit, which I hope is unnecessary
The letter is long, repetitive, and silly. It does not engage with the actual purpose of the Business Insider articles: comparing Gay’s inadvertent failures to cite with Oxman’s similar mistakes, in a way that highlights how Ackman’s freakout over Gay suggests a huge double standard. Instead, it opens by arguing that Business Insider and the reporters and editors who worked on these articles are antisemitic and targeted Ackman because of his pro-Israel views.
Ackman’s criticism, particularly of Claudine Gay, the former president of his alma mater, Harvard, did not sit well with Katherine Long (an Investigative Reporter at Business Insider), John Cook (Business Insider’s Executive Editor), and Henry Blodget (Business Insider’s Founder and Chairman), who have publicly expressed anti-Zionist and purportedly antisemitic views.
It then goes on at great length (and great repetitiveness) to claim that it’s not plagiarism if it wasn’t done on purpose. Really.
As confirmed by Business Insider and the common definition of plagiarism, plagiarism requires an intent to steal or defraud. Unintentional citation mistakes and honest errors are not considered plagiarism as the word is commonly understood
Now, there are two major problems with this. First of all, as noted here (but not in anything from Ackman), if that’s the case, then it appears Gay did not plagiarize either. And, again, that was the whole point of the Business Insider articles.
But, secondly, yes, you can absolutely plagiarize without intent to do so. The letter plays a very sloppy game of “use the definition we want at different times throughout our argument.” Note that even in the quote above, Locke’s letter says “as the word is commonly understood.” But… that’s not true. As commonly understood, inadvertent plagiarism… is still plagiarism. It might not be as serious. But it’s still plagiarism.
And the most incredible bit is that the letter admits that itself. Much later in the letter, it argues that Oxman couldn’t have done anything terrible because of MIT’s guidelines on plagiarism at the time. The letter, early on, states the following:
As MIT itself plainly explains in advising students of its academic standards, plagiarism “does not include honest error.” MIT also recognizes that “unintentional” plagiarism is not considered academic misconduct. In other words, honest mistakes happen, but those simple errors do not count as academic misconduct.
But, again, the whole point was that Gay appeared to have committed similar unintentional acts of plagiarism, yet Ackman demanded her head over them.
Either way, later on in the complaint letter, they show snippets from MIT’s guidebook which… read quite differently in context. They do not at all seem to be suggesting that unintentional plagiarism is not plagiarism. Rather, they seem to be stating that unintentional plagiarism is still very much plagiarism, and that’s why one should be very careful to not even engage in unintentional plagiarism. Here’s page 12 of the letter, in which it seems pretty clear that MIT is saying “don’t plagiarize, even if it’s unintentional,” but where Oxman/Ackman/Locke seem to be pretending it’s saying “meh, as long as you didn’t mean it, you’re fine.”
Notice, clearly in there, that MIT is not saying that “accidental” and “unintentional” plagiarism is fine. Both of the clips above are trying to help students understand why accidental plagiarism is still wrong, and why they need to learn to do academic writing properly, by citing sources and writing things up themselves.
Page 13 of the letter provides even more examples of this, where they seem to think it is absolving Oxman and revealing Business Insider’s ill intent, when it really just seems to show that Oxman/Ackman/Locke don’t understand what they’re looking at.
Those are all clearly explanations for how to avoid that kind of “botched paraphrasing” which it appears both Gay and Oxman may have engaged in.
Notably, this demand letter leaves out the line right after those two screenshotted selections above, which proves that Libby Locke is omitting important context. Here, see it for yourself:
“In any event, even if the plagiarism is unintentional, the consequences can still be very painful.”
And then it explains why it might be painful:
Plagiarism in the academic world can lead to everything from failure for the course to expulsion from the college or university.
Plagiarism in the professional world can lead, at the very least, to profound embarrassment and loss of reputation and, often, to loss of employment. Famous cases of plagiarism include the historian Stephen Ambrose (accusations about six of his books have been made, most famously about The Wild Blue) and historian Doris Kearns Goodwin (who ended up asking the publisher to destroy all unsold copies of The Fitzgeralds and the Kennedys). Such plagiarism may be accidental, but its consequences are the same as for intentional plagiarism.
The threat letter leaves out all of this context and seems to pretend that MIT is suggesting that such unintentional plagiarism is fine, when the very document they’re quoting from says the exact opposite.
And what’s funny is that throughout the 77-page letter, Locke keeps insisting that omissions by Business Insider that distort the meaning of things are clearly defamatory and/or evidence of actual malice. Yet Locke engages in identical behavior.
The next page of the letter actually drives this point home (though again, the letter’s author does not appear to recognize this) by including a screenshot of the MIT Academic Integrity handbook that explains how to avoid “inadvertent plagiarism.”
All of that undermines Oxman’s argument, but the letter seems to think it boosts it. That’s because it confuses what counts as “research misconduct” with what counts as plagiarism. Looking at the MIT documents in context suggests that they are talking about two different things: what counts as plagiarism (which could include accidental or inadvertent copying and missed citations) and what counts as misconduct for which sanctions make sense, which requires intent.
But none of that really matters for the point that the Business Insider piece was trying to make: comparing Gay’s conduct (which Ackman insisted was a horrible, fireable offense) to Oxman’s (which Ackman continues to insist was no big deal).
On the very next page of the letter, it (falsely) suggests all this proves that Oxman’s “inadvertent” failure to properly cite somehow was not problematic. Even as the very documents they screenshot say the literal opposite. It also claims that “quoting one part of an article without quoting another part which might tend to qualify or contradict the quoted part is evidence of actual malice” even though that’s the same thing this letter does in this very section.
Business Insider’s purpose in excluding references to these portions of MIT’s Academic Integrity Handbook and academic misconduct policies in its articles on Dr. Oxman is clear: Including them would have debunked the notion that Dr. Oxman had committed intentional plagiarism and academic fraud, and Business Insider wanted to create the false impression that Dr. Oxman committed intellectual theft.
Business Insider’s wholesale omission of MIT’s policies and procedures contradicting its preconceived narrative was deliberate, and it is further evidence of Business Insider’s actual malice towards Dr. Oxman and Ackman. Indeed, the law holds that “quot[ing] one part of an article without quoting another part which might tend to qualify or contradict the part quoted” is evidence of actual malice. Goldwater, 414 F.2d at 336; see also Eramo, 209 F. Supp. 3d at 872 (“[D]isregard[ing] contradictory evidence” is supportive of actual malice.); Murray, 613 F. Supp. at 1285 (“It would be unjust and nonsensical to allow the defendant to rely on the report for certain purposes and to ignore it for others.”).
Once again, it’s unlikely that anyone with half a brain reading the BI pieces would think they were accusing Oxman of anything particularly nefarious. They were simply comparing what she had done to what Gay had done and noting the similarities.
There’s so much more that’s silly about this threat letter that there’s no way to go through it all, so I’m going to skip some of it and give highlights of other parts.
There’s an entire section whining about the use of the word “marred” in one of BI’s headlines, claiming that because it was only inadvertent, it couldn’t have been “marred.” I shit you not:
Given that the only instances of alleged plagiarism Business Insider identified in this article were only four paragraphs with eight missing quotation marks and one instance in which Dr. Oxman failed to cite an author she cited extensively elsewhere in her 330-page dissertation, it is wildly inaccurate to characterize her dissertation as “marred” (i.e., ruined or spoiled) “by plagiarism.”
Except they’re using a… weird definition of marred. The word most commonly means “damaged or spoiled to a certain extent; made less perfect.” As such, even small defects (such as those described) sure would seem to count as marring. My articles are often marred by typos, but that doesn’t mean that every word is a typo. And, either way, the use of the word “marred” is, in no world, anywhere close to meeting the standards for defamation.
Then we get to the whole “citing Wikipedia” nonsense. Ackman had argued on ExTwitter, back when this first came about, that at the time of Oxman’s dissertation Wikipedia was still new and there weren’t general agreements on citing it. But that’s… nonsense, on multiple levels. First off, it wasn’t that new: Wikipedia was widely known and widely used at that point. Second, even if there wasn’t agreement on how to cite Wikipedia, that did not change the simple fact that copying directly from it without citation or quotation was still very much widely considered plagiarism. The lack of understanding of how to cite Wikipedia is a separate issue from the question of copying without attribution.
I had thought that once a lawyer got their hands on this fight, this argument would die a sudden death, but apparently the law firm of Clare Locke has no problem pushing totally specious arguments, because that makes it in here too:
Business Insider, however, intentionally omitted that MIT’s Academic Integrity Handbook at the time Dr. Oxman wrote her dissertation in 2009 and 2010 did not address—much less require—citation to Wikipedia, which itself is a collaborative resource with no single author to whom ideas could be attributed, and which at the time of her dissertation was of relatively nascent origin. In fact, Wikipedia was so inchoate that MIT had not yet developed or published any guidance on how researchers should use Wikipedia. Only later—several years after Dr. Oxman’s dissertation was published—did MIT revise its Academic Integrity Handbook to include a prohibition on citing Wikipedia for academic work. In 2009 and 2010, when Dr. Oxman wrote her dissertation, no such prohibition existed.
Note the shift here between citing and copying without attribution. Those are two separate things that this letter seeks to conflate. Even if MIT hadn’t published policies on how to cite Wikipedia, it has zero impact on whether or not copying directly from Wikipedia might be considered plagiarism. It still was. And it’s ridiculous to suggest that people didn’t think that to be the case in 2010.
There’s a whole section complaining that BI could not possibly call out Oxman for plagiarism unless it did an “inquiry or investigation into Dr. Oxman’s mental state to support such a finding.” To which I will just say… did Bill Ackman conduct such an “inquiry or investigation into Dr. Gay’s mental state” to support the many statements he made about her alleged plagiarism?
Or do we just admit that the billionaire gets to live by different standards than he seeks to impose on others?
After BI published its initial article, Oxman posted some tweets admitting that she had failed to properly put quote marks in certain sections.
Any reasonable read of this is that Oxman is admitting to not quoting things she should have quoted, which… is plagiarism, even by the definitions that were quoted earlier in the threat letter. Thus, BI published a new article saying that she admitted to plagiarism. The threat letter is apoplectic in insisting that she didn’t admit to plagiarism, and only to omitting quotation marks, which is fucking crazy.
Shortly after the first article was published at 2:28 PM on January 4, Dr. Oxman acknowledged in a post on X that, in “four paragraphs” of her 330-page dissertation, she did not “place the subject language in quotation marks, which would be the proper approach for crediting work,” and in one sentence she paraphrased an author but inadvertently did not cite him. She apologized for these errors. She did not, however, admit to plagiarism, intentional or otherwise. Three hours and 30 minutes later, Business Insider published a follow up article falsely claiming in its inflammatory headline that “Neri Oxman admits to plagiarizing in her doctoral dissertation after BI report.”
Business Insider knew that when it published this article that its statement was false— Dr. Oxman had not admitted to plagiarism. Business Insider read and included a link to Dr. Oxman’s post in the article, but it purposefully mischaracterized Dr. Oxman’s post in the headline creating the false impression that Dr. Oxman had admitted to intellectual theft.
I’m still amazed at the chutzpah here. I’ve read Oxman’s tweet multiple times, and it’s pretty clear that she is admitting to plagiarism, though saying it was inadvertent. But, again, (1) inadvertent plagiarism is still considered plagiarism (including by MIT) and (2) it’s the same sort of thing that Dr. Gay was accused of, which was the whole point of BI’s efforts.
There’s another whole section on all of the Jeffrey Epstein stuff, which I won’t get into (Oxman had a very, very distant connection to Epstein via the MIT Media Lab, where she worked, and to which Epstein infamously donated money, though apparently unrelated to her work). But the letter (which, I’ll note, claims to be on behalf of Oxman and not Ackman) whines quite a bit about BI stating that Ackman had sought to “pressure” then-Media Lab director Joi Ito not to name Oxman in response to a media inquiry. It also whines about BI claiming that the Boston Globe had “uncovered” emails between Ackman and Ito, when (according to this letter) Ackman had sent them willingly to the Boston Globe.
But, the emails he forwarded sure do look like “pressuring” Ito. I guess it depends on your definition of “pressure” but the entire point of the email was asking Joi not to name Oxman and giving a bunch of reasons why he shouldn’t. That sure sounds like it meets one of the common definitions of pressure: “the act of trying to persuade or force someone to do something.” The threat letter, instead, seems to think “pressure” must involve threats of some kind, which… is not what the word means. And, remember, the threat letter itself talks about the use of “common definitions” (quoted above).
The letter says that Business Insider “falsely” claimed that Oxman and Ackman (who again, the letter does not purport to represent) “did not dispute the facts” in the BI articles, and then points out that this is false, because… of Ackman’s silly rant about citing Wikipedia:
In just one example, at 9:57 PM on January 5, just a few hours after Business Insider published its article falsely accusing Dr. Oxman of plagiarizing from Wikipedia and other sources, Ackman posted on X disputing that using Wikipedia for definitions is plagiarism. He asked rhetorically, “How can one defend oneself against an accusation of plagiarizing Wikipedia … Isn’t the whole point of Wikipedia that it is a dynamic source of info that changes minute by minute based on edits and contributions from around the globe? Has anyone (other than my wife) ever been accused of plagiarism based on using Wikipedia for a definition?” Among other challenges to Business Insider’s reporting, Ackman directly disputed the notion that Dr. Oxman’s inclusion of definitions from Wikipedia in her dissertation was plagiarism.
But… that’s not disputing the facts. That’s disputing the interpretation of the facts (it’s also silly).
Much of that section is just a hilarious list of Bill Ackman not refuting any of the facts to the actual reporters or editors of the piece, but reaching out to various super rich executives somewhat associated with Business Insider, who assured him they were looking into things. That is not the same thing as “disputing the facts” to the actual journalists. That’s whining to the rich in hopes they’ll smack down the poor reporters who dared to make you look silly.
There are five (five!) pages that are just screenshots of Ackman’s (again, not officially represented in the letter) WhatsApp messages to Axel Springer boss Mathias Dopfner “disputing” the stories, but basically none of what he disputes involves provably false statements of fact. Pretty much all of it appears to be differences of opinion on how things were portrayed in the BI stories. That’s not defamation. And it’s not even disputing the underlying facts — which is all BI claimed.
Hilariously, the only response from Dopfner to Ackman is a short email, which does not agree to anything that Ackman claimed. It just says “Thanks for your e-mails. Very helpful input to clarify things during the investigation” and then notes that, because Ackman had announced plans to sue BI, Dopfner’s general counsel had (correctly) told him not to communicate with Ackman anymore.
Then we get to “actual malice.” On its website, Clare Locke declares itself “the leading defamation law firm in the United States.” I guarantee you that Libby Locke knows what “actual malice” means in the context of a defamation lawsuit. And it is not “they didn’t like the plaintiff” or “they were biased against the plaintiff.” Yet Libby Locke seems not to care what the legal definition of actual malice is, judging by the letter’s laughably wrong section on actual malice.
Business Insider never had any interest in journalistic integrity or the truth when reporting on Dr. Oxman. From the outset, its reporting was tainted by its progressive political bias and the desire of its anti-Zionist reporters and editors to smear a prominent, Jewish advocate and his family for speaking up against former Harvard President Gay. The Business Insider employees primarily responsible for this attack have a history of unethical conduct and have publicly expressed their anti Zionist and/or purported antisemitic views.
Beyond being fucking ridiculous, it’s also got nothing to do with actual malice. Actual malice means that the statement was made “with knowledge that it was false or with reckless disregard of whether it was false.” Also, “reckless disregard” doesn’t mean that you were just sloppy or lazy. It means that the speaker had serious doubts about the truth of the statements but published them anyway. The Supreme Court has been quite clear that it doesn’t mean biased reporting. And it doesn’t even mean mere negligence in reporting.
For there to be actual malice, BI’s reporters would have to fundamentally know (or have very strong beliefs) that what they were publishing was false, and then publish it anyway. But, they’ve (rightly) stood by their reporting. And Ackman, repeatedly, is only complaining about their interpretation of the facts, not the underlying facts themselves.
The letter then goes on to trash the reputation of Henry Blodget, BI’s founder, who had talked to Ackman early on when Ackman was first freaking out about the stories (hilariously, Blodget suggested Ackman could write for BI at one point, and in return he gets trashed). Blodget is, of course, easy to trash. He somewhat infamously settled with the SEC for publicly pumping up dot-com era stocks, while privately trashing those stocks. Some of us still remember all that.
The letter also tries (pathetically) to trash the reputations of the reporters and editors who worked on the BI stories, including digging up editor John Cook’s self-admitted story about how, as a teenager in the 1980s, he was suspended from high school for publishing an obnoxious underground newspaper. (I too published an underground newspaper in high school, and it was also obnoxious, but I didn’t get suspended, in part because I wrote the back page of the first issue as an entire article about how the First Amendment works, citing numerous Supreme Court cases on why the school couldn’t take action against those of us who wrote the paper. Which was, perhaps, a preview of what my life was to become.)
But what does that have to do with actual malice? Fuck all! It’s just Ackman burning bridges for show — and potentially as a threat to try to convince others not to report on his wife, or he’ll trash your reputation too (come at me, Bill).
The letter then moves on to misleadingly claim that Business Insider was trying to get Oxman fired. Again, this misunderstands what was pretty obviously the point of the articles: comparing Ackman’s response to the accusations against Gay with his response to those against his wife. The letter makes a big deal of Insider’s reporter, Katherine Long, asking in her initial email to Ackman if he expects Oxman to lose her job. (Long, at the time, mistakenly believed that Oxman was still at MIT, when she had actually left a few years earlier.)
In context, it’s obvious why Long asked this question. Since Ackman had pushed so strongly for Gay to lose her job at Harvard, it’s a kind of obvious question for a reporter to ask about Ackman’s wife (who they thought was still at MIT) given the whole point of the exercise was to showcase Ackman’s selective outrage and differential treatment of Gay compared to his wife.
But the letter treats this as an attempt to make Oxman lose her job and seems outraged. Which is fucking hilarious given Ackman’s tirades trying to get Gay fired from her job.
Business Insider’s Coverage Of Dr. Oxman Was Motivated by Its Desire To Get Dr. Oxman Fired by MIT.
Almost no one could possibly think this is what Long was trying to do. It seems blatantly obvious that she was simply seeing if Ackman felt his wife should face the same treatment that he helped engineer for Gay.
There’s also some just incredible hubris in the letter, in that it reveals Ackman petulantly demanding, in text messages to Blodget, that the articles be taken down while the promised investigation into the reporting occurred (which would have been an extraordinary step, bringing Streisand Effect levels of extra attention to the claims). Ackman seems to think that BI’s refusal to accede to his demands, even as Blodget promised he was “working” on the issue, is somehow more proof of malice. The more sensible, and likely accurate, explanation is that BI investigated, found that the story still held, and saw no reason to take it down.
There are also about eight whole pages of the letter going on (at ridiculous length) about what an amazing, brilliant, and famous person Neri Oxman is. That’s hilarious, because when all this started and people pointed out to Ackman that defamation claims by public figures face a high bar (that high bar being the real actual malice, not the pretend one in this letter), Ackman tried to argue she wasn’t a public figure.
So, according to Bill Ackman, she’s not a celebrity academic or a public figure, but the threat letter on her behalf has eight pages lauding all of her accomplishments, awards, public exhibitions including at top museums around the world, the description in the NY Times of how she’s “a Modern-day da Vinci” and more. So, I guess they’re not even going to try to argue that she’s not a public figure.
There's also a ridiculous number of words describing the alleged "harm" all of this has had, failing to recognize that if Ackman hadn't made such a big deal of all of this, the story likely would have died out after a day or two as people got a good laugh at Ackman's hypocrisy and moved on. Instead, his continuing to talk about it, and now this letter, have only guaranteed that many more people are aware of all of this. If there's any harm (and that seems unlikely), much of it should be pinned on Ackman's inability to let this go.
On the final page of this opus, we get the “demands.”
Axel Springer and Business Insider must mitigate the damage they have caused by correcting their libelous reporting, issuing statements setting the record straight, making a sincere and meaningful public apology to Dr. Oxman and Ackman, and creating a fund to compensate other victims of Business Insider’s libelous reporting and to discourage their inappropriate conduct in the future. (Dr. Oxman is seeking no compensation for herself to make available additional resources for other victims.) Failure to take these steps will expose Axel Springer and Business Insider to substantial legal liability and will be further evidence of actual malice directed toward my client.
This is nonsense. I'm quite sure BI's general counsel is not worried about this. Nothing in the letter indicates anything close to what would be required for defamation. The only real question — and the likely real intent of the letter — is whether or not all the rich folks that Ackman called up and texted during this whole mess, including Dopfner, Henry Kravis, and Axel Springer board member Martin Varsavsky, decide to just go along with this to hush up the mouthy rich guy so they won't have to deal with more of this nonsense.
At this point, it’s pretty clear that Oxman (and Ackman) have no actual defamation case here. They have a lot of noise and bluster. And sometimes that’s enough to get a publication to back down (which Clare Locke seems to want you to believe they can produce in every case). But it would be a fucking shame and an embarrassment if Axel Springer/BI caved here, and would put all of its future reporting in question by showing that they could be bullied by specious, vexatious legal threats.
In Ackman’s tweet revealing this letter, he claims that he hasn’t sued first because “people we highly respect” had told him that Axel Springer was “perhaps the strongest long-term supporter of the state of Israel of any media organization, and also an important advocate against antisemitism.” What that has to do with anything in the letter, I do not know.
In the end, this is just more censorial bullshit. It’s hilarious that Ackman presents himself as a “free speech absolutist” when he’s doing this shit to seek to pressure (as it’s commonly defined!) BI into removing these stories. It misses the entirety of the point of these articles and pretends they’re about attacking Oxman, when it’s obvious to anyone outside of Ackman’s immediate sphere that the intent was to highlight the very, very different treatment Ackman gives to the accusations against Gay and Oxman.
Indeed, this very letter demonstrates that point to a much greater level. All this letter does is call that much more attention to Ackman’s disgusting double standard. When it’s someone he doesn’t like for other reasons, he’s willing to play up the plagiarism claims and push for them to lose their job. When it’s his wife, he tries to burn down an entire media outlet.
All this letter shows is that Bill Ackman is a censorial hypocrite.
Now the broader problem with Google News quality control seems to have gotten worse with the rise of "generative AI" (half-baked language learning models). AI-crafted clickbait, garbage, and plagiarized articles now dominate the Google News feed, reducing the already shaky service's utility even further:
“Google News is boosting sites that rip-off other outlets by using AI to rapidly churn out content, 404 Media has found. Google told 404 Media that although it tries to address spam on Google News, the company ultimately does not focus on whether a news article was written by an AI or a human, opening the way for more AI-generated content making its way onto Google News.”
As we've seen in the broader field of content moderation, moderating these massive systems at scale is no easy feat. That's compounded by the fact that companies like Google (which feebly justified more layoffs last week despite sitting on mountains of cash) would much rather spend time and resources on things that make them more money than on ensuring that existing programs and systems actually work as advertised.
But the impact of Google’s cheap laziness is multi-fold. One, sloppy moderation of Google News only helps contribute to an increasingly lopsided signal to noise ratio as a dwindling number of under-funded actual journalists try to out-compete automated bullshit and well-funded propaganda mills across a broken infotainment and engagement economy. It’s already not a fair fight, and when a company like Google fails to invest in functional quality control, it actively makes the problem worse.
For example, many of these automated clickbait plagiarism mills are getting the attention and funding that should be going to real journalism operating on shoestring budgets, as the gents at 404 Media (whose quality work ironically isn't even making it into the Google News feed) explore in detail. For its part, Google reps had this to say:
“Our focus when ranking content is on the quality of the content, rather than how it was produced. Automatically-generated content produced primarily for ranking purposes is considered spam, and we take action as appropriate under our policies.”
Except they’re clearly not doing a good job at any part of that. And they’re not doing a good job at that because the financial incentives of the engagement economy are broadly perverse; aligned toward cranking out as much bullshit as possible to maximize impressions and end user engagement at scale, and against spending the money and time to ensure quality control at that same scale.
It’s not entirely unlike problems we saw when AT&T would actively support (or turn a blind eye to) scammers and crammers on its telecom networks. AT&T made money from the volume of traffic regardless of whether the traffic was harmful, muting any financial incentive to do anything about it.
This isn’t exclusively an AI problem (LLMs could be used to improve quality control). And it certainly isn’t exclusively a Google problem. But it sure would be nice if Google took a more responsible lead on the issue before what’s left of U.S. journalism drowns in a sea of automated garbage and engagement bait.
There’s plenty of hypocrisy and bad faith to go around in the ridiculous Claudine Gay plagiarism scandal. While Gay’s accusers are right that she technically violated Harvard’s plagiarism rules by copying phrases either without quotation marks or required attribution, they don’t actually care about plagiarism, only “scalping” Gay. What’s more, their own plagiarism accusations have already started biting them back. And while Gay’s defenders are right that her offenses were comically trivial, because she copied mere banalities, Harvard students are punished severely for doing exactly the same thing. In fact, some of Gay’s defenders probably did the punishing.
A pox on both their houses. Plagiarism is fine, plagiarism rules are stupid, and the plagiarism police should mind their own business.
Everyone “knows” plagiarism is bad, but no one can provide a coherent explanation why. Some people say plagiarism defrauds the reader. Give me a break. Readers don’t care, or if they do, it’s only because they’ve been browbeaten into believing plagiarism is wrong. Others say plagiarism is like stealing. But no one owns ideas, and no one should own the words we use to express them, either.
I’ll be blunt. The plagiarism police are just intellectual landlords, demanding rent in the form of attribution. And plagiarism rules are just a sneaky way for authors to claim de facto ownership of ideas, while cloaking themselves in false virtue. When the plagiarism police cry, “J’accuse!,” we should respond with a raspberry.
Don’t get me wrong, I’m not opposed to attribution. In fact, attribution is great, so long as it’s voluntary, rather than mandatory. Authors should absolutely attribute expressions and ideas, when they think it will help readers, or even just to honor an author they admire. But authors shouldn’t be required to attribute, unless they think it’s deserved. Let us cite out of love, rather than obligation.
Some people worry that eliminating plagiarism rules will harm disadvantaged authors, who often don’t get the credit they deserve. I doubt it. For one thing, plagiarism rules have existed for at least 2000 years. If they were going to protect disadvantaged authors, they would have done it by now. For another, plagiarism rules actually create a “Matthew Effect,” in which the most prominent authors get all the credit, and the disadvantaged authors get ignored. Why not adopt attribution norms that encourage citation of deserving disadvantaged authors instead of undeserving privileged ones?
Think about it. We want to believe plagiarism rules protect original expressions and ideas. But AI shows us that most of what we produce is generic banalities. Why treat them like spun gold, rather than the chaff they really are?
We've now spent weeks debating how to interpret and apply plagiarism rules. If anything comes out of this idiotic "scandal," I hope it's that, when it comes to plagiarism norms, the juice definitely isn't worth the squeeze. We should just admit they're a waste of time and abandon them. We should stop punishing authors for "stealing" clichés. And we should especially stop punishing students "for their own good." Plagiarism is also a way of learning, so we should encourage it whenever it helps students learn more effectively and efficiently.
By the way, every word of this op-ed is plagiarized. Or maybe it isn’t. I’m not telling, because it doesn’t matter.
The rushed integration of half-baked “AI” (aka not at all sentient language learning models) into journalism has been a gargantuan mess. Execs at companies like Red Ventures (CNET) and G/O Media (Gizmodo) have made it very clear they see LLMs primarily as a way to attack labor and cut corners, resulting in soulless and low quality product, oodles of plagiarism, and no shortage of employee ill will.
Enter Sports Illustrated, which appears to have been caught (like Gannett) creating fake journalists with fake bios in the hopes nobody would notice. Futurism first noticed the practice, which involves using fake people with AI-generated headshots to write what smells like automated copy. And, as we’ve seen repeatedly, the resulting product isn’t very good, either:
“The AI authors’ writing often sounds like it was written by an alien; one Ortiz article, for instance, warns that volleyball ‘can be a little tricky to get into, especially without an actual ball to practice with.'”
Initially, when Futurism asked Sports Illustrated owner The Arena Group about the practice, the company just deleted all the AI-generated content and refused to comment, which speaks for itself. Only once the story gained media traction did the company shift course and (just like Gannett did) try to blame the whole thing on one of its third-party vendors:
“Today, an article was published alleging that Sports Illustrated published AI-generated articles. According to our initial investigation, this is not accurate. The articles in question were product reviews and were licensed content from an external, third-party company, AdVon Commerce. A number of AdVon’s e-commerce articles ran on certain Arena websites. We continually monitor our partners and were in the midst of a review when these allegations were raised. AdVon has assured us that all of the articles in question were written and edited by humans.“
The Arena Group's "investigation" doesn't sound like much of one. They're trying to claim they had no idea that the third-party company they'd hired (the same company that helped Gannett do the exact same thing, and which has a history of doing exactly this) was producing AI-generated content by AI-generated people. Poorly. Employees that actually work at Sports Illustrated tell Futurism the claim is laughable.
So many of the executives at major media giants genuinely view AI as a way to create an automated ad-engagement machine that effectively shits money for pennies on the dollar. Just a giant, automated ouroboros that throws billions in ad engagement dollars their way without concerns about any of the pesky stuff like product quality, audience interest, public welfare, or folks eager to be paid a living wage.
There’s no interest in journalism or even editorial ethics here; Sports Illustrated not only created fake people with fake headshots and fake bylines, they constantly rotated new fake reporters in and out without any transparency with readers or staff. There’s a lack of ethics and competency that’s a problem before language learning models even enter the frame.
There certainly are innovative, helpful uses for AI in journalism. The problem is the least competent people imaginable have failed upward into positions of power at most media orgs. And if they have their way, what’s left of the already-shaky art of journalism is going to become a homogenized, soulless, engagement machine that will make our current lopsided signal to noise ratio look downright adorable.
While recent evolutions in “AI” have netted some profoundly interesting advancements in creativity and productivity, its early implementation in journalism has been a sloppy mess thanks to some decidedly human-based problems: namely greed and laziness.
If you remember, the cheapskates over at Red Ventures implemented AI over at CNET without telling anybody. The result: articles rife with accuracy problems and plagiarism. Of the 77 articles published, more than half had significant errors. It ultimately cost more to have human editors come in and fix the mistakes than the effort actually saved. After backlash, Red Ventures paused the effort.
Until last week, when another Red Ventures website, the financial news outlet Bankrate, started, once again, publishing AI-generated articles. And, once again, the articles were filled with all kinds of basic errors, like misstating the median income or median home prices of the markets it was writing about. And, once again, humans failed to adequately fact check any of it before publication:
“With so many eyes on the company’s use of AI, you would expect that these first few new AI articles — at the very least — would be thoroughly scrutinized internally before publication. Instead, a basic examination reveals that the company’s AI is still making rudimentary mistakes, and that its human staff, nevermind the executives pushing the use of AI, are still not catching them before they end up in front of unsuspecting readers.”
When contacted, Bankrate deleted the article but defended the AI, blaming the errors on an outdated dataset (which still would have been an AI and editing error):
“Overall, it feels like one more installment in a familiar pattern: publishers push their newsrooms to post hastily AI-generated articles with no serious fact-checking, in a bid to attract readers from Google without making sure they’re being provided with accurate information. Called out for easily-avoidable mistakes, the company mumbles an excuse, waits for the outrage to die down, and then tries again.”
Like so many problems with modern tech, the problem is often the humans, not the technology.
You've probably noticed that U.S. journalism was already a hot mess. We can clearly monetize everything from Nazis to foot fetishes, yet somehow can't figure out the kind of innovative funding models needed to keep the media industry's lights on or pay journalists a living wage. Not that we've tried very hard.
It’s because VCs and slash-and-burn hedge fund bros are trying to make a quick buck on a not particularly profitable public service. Conversations about publicly funding journalism are basically a nonstarter in the U.S., where even NPR and its tiny government contributions are mindlessly demonized by rabid partisans who increasingly view truth, reality, academia, expertise, and journalism as a mortal enemy.
Greedy idiots, not journalists, are running most U.S. newsrooms. And their first impulse is not to use AI to create better journalism, but to cheaply mass produce clickbait and gibberish, and to wield it as a blunt weapon against already comically underpaid labor. The end result is going to create more media distrust and an even worse signal to noise ratio in a country already drowning in bullshit and propaganda.
It’s a shame because the underlying chatbot and AI technology could very well be a useful tool to create real journalism and tools to help consume journalism. But real journalism isn’t what most media owners are interested in, and this dynamic likely isn’t fixed until the underlying greed and hubris is addressed. Which, if you’ve looked around, doesn’t appear to be a top priority anytime soon.
It wasn’t particularly surprising if you’ve watched the outlet’s coverage over the last decade become increasingly inundated with affiliate blogspam and often toothless, corporate friendly stenography of company press releases. And who could forget that time former CNET owner CBS blocked the company from doling out a CES award to Dish Network as part of a petty legal dispute over cable box ad skipping.
A major reason for CNET's more recent problems is its owner, private equity firm Red Ventures, which acquired CNET from CBS in 2020. Recently leaked internal communications and employee accounts from inside CNET indicate that Red Ventures was so excited by AI's ability to generate content at scale cheaply, it didn't really care if the resulting content was rife with inaccuracies:
“They were well aware of the fact that the AI plagiarized and hallucinated,” a person who attended the meeting recalls. (Artificial intelligence tools have a tendency to insert false information into responses, which are sometimes called “hallucinations.”) “One of the things they were focused on when they developed the program was reducing plagiarism. I suppose that didn’t work out so well.”
Amusingly, the whole point of doing this (lowering costs) never materialized, because editing the resulting AI content was more time consuming than editing human work:
The AI system was always faster than human writers at generating stories, the company found, but editing its work took much longer than editing a real staffer’s copy. The tool also had a tendency to write sentences that sounded plausible but were incorrect, and it was known to plagiarize language from the sources it was trained on.
But AI aside, insiders say the environment created by Red Ventures is one in which affiliate blogspam style coverage takes precedence, and the company is all too happy to obliterate editorial firewalls and soften coverage if it makes advertisers happy:
Multiple former employees told The Verge of instances where CNET staff felt pressured to change stories and reviews due to Red Ventures’ business dealings with advertisers. The forceful pivot toward Red Ventures’ affiliate marketing-driven business model — which generates revenue when readers click links to sign up for credit cards or buy products — began clearly influencing editorial strategy, with former employees saying that revenue objectives have begun creeping into editorial conversations.
Reporters, including on-camera video hosts, have been asked to create sponsored content, making staff uncomfortable with the increasingly blurry lines between editorial and sales. One person told The Verge that they were made aware of Red Ventures' business relationship with a company whose product they were covering and that they felt pressured to change a review to be more favorable.
U.S. journalism is, if you hadn't noticed, already in crisis. There's a decided lack of creative new financing ideas. There are also endless layoffs, and homogenized, feckless content that's increasingly afraid of challenging sources, advertisers, or event sponsors. Twice a year the entire United States tech press turns its front pages into glorified blogspam affiliates for Amazon, and nobody, in any position of editorial authority, ever seems to think that's in any way gross, unethical, or problematic.
AI will likely help human beings in a multitude of ways we can't even begin to understand. But it's also going to supercharge existing problems (like propaganda) in similarly complicated and unforeseen ways, whether that's making it easier for corporations to run sleazy astroturf lobbying campaigns, or letting them inexpensively slather the Internet with feckless clickbait and blogspam at unprecedented scale.
There’s a fascinating article by Rebecca Jennings on Vox which explores the vexed question of plagiarism. Its starting point is a post on TikTok, entitled “How to EASILY Produce Video Ideas for TikTok.” It gives the following advice:
Find somebody else’s TikTok that inspires you and then literally copy it. You don’t need to copy it completely, but you can get pretty close.
If it’s not “literally” copying it, then it’s more a matter of following a trend than plagiarism, which involves taking someone else’s work and passing it off as your own. Following a trend is universal, not just online, but in the analogue world too, for example in business. As soon as a new product or new category comes along that is highly successful, other companies pile in with their own variants, which may be quite close to the original. If they offer something more than the original – extra features, a new twist – they might even be more successful. However unfair that might seem to the person or company that came up with the idea in the first place, it’s really only survival of the fittest, where fit means popular.
More interesting than the TikTok advice is the example of Brendan I. Koerner, contributing editor at Wired and author of several books, also mentioned in the Vox article. It concerns a long and interesting story he wrote for The Atlantic last year. Jennings explains:
Someone published a podcast based exclusively on a story [Brendan I. Koerner]’d spent nine years reporting for The Atlantic, with zero credit or acknowledgment of the source material. “Situations like this have become all too common amid the podcast boom,” he wrote in a now-viral Twitter thread last month.
I've not listened to the podcast (life is too short), so I can't comment on what exactly "based exclusively on" means in this context. If it means taking the information in Koerner's article and repackaging it, well, you can't copyright facts. Using multiple verbatim extracts is a more complex situation, and it might require a court case to decide whether that's allowed under current copyright law.
I think there are more interesting questions here than what exactly is plagiarism, which arises from copyright’s obsessions with ownership. Things like: did Koerner get paid a fair price by The Atlantic for all his work? If he did, then the issue of re-use matters less. It’s true that others may be freeriding off his work, but in doing so, it’s unlikely they will improve on his original article. In a way, those pale imitations serve to validate the superior original.
If Koerner wasn’t paid a fair price, for whatever reason, that’s more of an issue. In general, journalists aren’t paid enough for the work they do (although, as a journalist, I may be biased). The key question is then: how can journalists – and indeed all artists – earn more from their work? The current structures based around copyright really don’t work well, as previous posts on Walled Culture have explored. One alternative is the “true fans” model, whereby the people who have enjoyed your past work become patrons who sponsor future work, because they want more of it.
For someone like Koerner, with a proven track record of good writing, and presumably many thousands of fans, this might be an option. It would certainly help to boost his circle of supporters if everyone that draws on his work gives attribution. That’s something that most people are willing to add, as his Twitter thread indicates, because it’s clearly the right thing to do. Better acknowledgement by those who use his work would always be welcome.
On the issue of drawing support from fans, it’s interesting to note that the Vox article mentioned at the start of this post has the following banner at the top of the page:
Financial support from our readers helps keep our unique explanatory journalism free. Make a gift today in support of our work.
This is becoming an increasingly popular approach. For what it’s worth, I now support a number of titles and individual journalists in precisely this way, because I enjoy their work and wish to see it flourish. The more other people do the same, the less the issue of plagiarism will matter. Once creators are earning a fair wage through wider financial support, they won’t need to worry about “losing” revenue to those who free ride on their work, and can simply view it as free marketing instead, at least if it includes proper attribution to the original. The main thing is that their fans will understand and value the difference between the original and lower quality derivatives.
Last month we launched our Plagiarism Collection of NFTs, plagiarized from law professor/conceptual artist Brian L. Frye's paper (and NFTs) called Deodand. The content isn't just about plagiarism; it instructs people to experiment with plagiarism, so it seemed perfectly set up to be plagiarized. And, since straight plagiarism doesn't add much value, we decided to take his text and make it a lot nicer by creating wonderful, colorful, animated GIFs, turning them into NFTs, and auctioning them off on OpenSea.
Of course, we were realizing, you haven’t really seen these NFTs up close — so today I’m posting all of them for you to see. If you’d like to own the NFT associated with any of them, just click through and bid in the auction:
I think my personal favorite is Plagiarism Piece 2, though others have been growing on me. I was unsure about Plagiarism Piece 7 and Piece 8, but the more I look at each of them, the more both have been growing on me…
A few weeks ago, we wrote about our latest experiment with NFTs (which is part of the research we’re doing into NFTs for a deep dive paper I’m working on). There’s a very long explanation to explain the NFTs in question and why we’re plagiarizing Prof. Brian Frye (but making them much, much cooler). But, after we posted that, we discovered one little problem. The platform that we were using, OpenSea (the most popular and user friendly NFT marketplace)… didn’t work. At least not for us. We’ve spent 3 weeks asking OpenSea to fix things and last night they finally figured out the problem, so that you can now (finally) actually bid in the open auction for our plagiarized set of NFTs about plagiarism.
There are tons of reasons to back them — some good, some less good — but at the very least, it will help support Techdirt, it will show that culture works by building on those who came before, not by locking up content, and it will let you experiment with NFTs if you haven’t already. Also, it’ll let you show how maybe people shouldn’t freak out over plagiarism all the time — and when else do you have a chance to do that?
The entire collection can be seen here, and they do look amazing, if I do say so myself.