The AI Journalism ‘Revolution’ Continues To Go Poorly As Gannett Accused Of Making Up Fake Humans To Obscure Lazy AI Use

from the I'm-sorry-I-can't-do-that-Dave dept

While recent evolutions in “AI” have netted some profoundly interesting advancements in creativity and productivity, its early implementation in journalism has been a comically sloppy mess thanks to some decidedly human problems: namely greed, incompetence, and laziness.

If you remember, the cheapskates over at Red Ventures implemented AI over at CNET without telling anybody. The result: articles rife with accuracy problems and plagiarism. Of the 77 articles published, more than half contained significant errors. It ultimately cost the company more to have human editors come in and fix the mistakes than it had actually saved. After backlash, Red Ventures paused the effort.

Gannett, the giant media company that owns USA Today (and very likely whatever’s left of your local newspaper), was also forced to pause its use of AI earlier this year because the resulting product was laughably bad and full of obvious errors, even when used for the kind of basic writing LLMs are supposed to excel at, like box score journalism.

Fast forward to this week, and Gannett is once again under fire for allegedly making up writer bylines as cover for a different low-quality AI experiment. This time the problems bubbled up at Reviewed, a USA Today-owned product review website, where staffers noticed that badly written reviews of products nobody on staff had ever seen were popping up under the bylines of people who didn’t exist:

“Not only were Reviewed staffers unfamiliar with the bylines on the stories — names like “Breanna Miller” and “Avery Williamson” — they were unable to find evidence of writers by those names on LinkedIn or any professional websites.”

All of the articles in question are sterile and not particularly engaging, and all share notable similarities; the site’s scuba mask reviews, for example, read much like its water bottle reviews.

While “AI” can definitely improve journalism efficiency on everything from transcription to editing, the kind of fail-upward types at the top of the media industry food chain generally see the technology as a way to cut corners and assault already woefully mistreated and underpaid human labor, especially of the unionizing variety.

Unionized writers at Reviewed say that Gannett was trying to obfuscate its efforts to undermine unionized human staff after its embarrassing face plant earlier this year:

Carrillo, a shop steward for the union, said the mysterious reviews — which appeared just weeks after staff staged a one-day walkout to demand management negotiate on a new contract — harm the reputations of actual employees.

“It’s gobbledygook compared to the stuff that we put out on a daily basis,” he said. “None of these robots tested any of these products.”

Amusingly, when approached for comment by the Washington Post, a Gannett spokesperson first tried to deny that the articles were AI-generated, then implied that if they were, it was all the fault of a third-party marketing firm:

“In a statement to The Post, a spokesperson said the articles — many of which have now been deleted — were created through a deal with a marketing firm to generate paid search-engine traffic. While Gannett concedes the original articles “did not meet our affiliate standards,” officials deny they were written by AI.

“We expect all our vendors to comply with our ethical standards and have been assured by the marketing agency the content was NOT AI generated,” the spokesperson said in an email.”

The marketing firm in question redirected questions back to Gannett. Washington Post reporters couldn’t find evidence that any of the writers exist. The site’s human writers say it’s obvious AI was used, noting that the marketing firm openly advertises that it engages in “polishing AI generative text.”

Again, the problem here generally isn’t the technology itself. AI will ultimately improve and become increasingly useful in myriad ways. The problem is the kind of humans implementing it, and the way they’re implementing it without involving or even telling existing staffers.

The affluent hedge fund brunchlord types who dominate key positions across U.S. media “leadership” clearly see AI not as a path toward a better product or a more efficient workforce, but as a shortcut to building an automated ad engagement machine that effectively shits money. And, as an added bonus, a way to undermine staffers peskily demanding health insurance and a living wage.

Large U.S. media companies are filled to the brim with managers who are terrible at their jobs to begin with, making their failures on AI unsurprising. When it comes to the folks shaping the contours of modern journalism, ethics, product quality, accurately informing the public, staff happiness, and genuine human interest rarely even enter the frame.

Companies: gannett



11 Comments
Anonymous Coward says:

Pretty sure I’m seeing a speed run on the enshittification trail here. Only Gannett has kicked in the afterburners in an obvious attempt to get there ahead of everyone else.

But equally to the point, so-called “brunchlords” are really just Poster Boys for The Peter Principle. For those not familiar with said principle, it describes in detail how a person manages to be promoted upwards to his/her level of incompetence. Any further upward movement is merely gilding the lily.

Anonymous Coward says:

Another article on AI I ran across today shows that Google is not immune to AI corruption. For instance, if you search for African nations starting with K, the “Featured Snippet” is…

While there are 54 recognized countries in Africa, none of them begin with the letter “K”. The closest is Kenya, which starts with a “K” sound, but is actually spelled with a “K” sound. It’s always interesting to learn new trivia facts like this.

Attributed to ycombinator at this exact moment, but previously attributed to Emergent Mind (reported by Boing Boing).

Emergent Mind is a project about AI hallucinations, appropriately enough.

The joke here is that I gave Google feedback this afternoon (as, perhaps, did many other people) that “this is stupid.” What did they do? Took down the top answer, but didn’t check the queue for whether the next answer in line was any better or any different.

Apparently “due diligence at scale” (to paraphrase a TD saying) is also tough.
