How Close Can AI Get To Writing A Techdirt Post?

from the man-vs.-machine dept

I’ve talked on Techdirt about just a few of my AI-related experiments over the past few years, including how I use it to help me edit pieces, which I still write myself. I still have no intention of letting AI write for me, but as the underlying technology has continued to level up, every so often I’ll run a test to see if it could write a better Techdirt post than I can. I don’t think it’s there (and I’m still not convinced it will ever get there), but I figured I can share the process with you, and let you be the judge.

I wanted to pick a fairly straightforward article, rather than a more complex one, just to see how well it works. In this case, I figured I’d try it with the story I published last week about Judge Boasberg ruling against the Trump administration and calling out how the DOJ barely participated in the case, and effectively told him to “pound sand” (a quote directly from the judge).

I know that just telling it to write a Techdirt article by itself will lead to pretty bland “meh” content. So before I even get to the prompt, there are some steps I need to include. First, over time I continue to adjust the underlying “system prompt” I use for editing my pieces. I won’t post the entire system prompt here as it’s not that interesting, but I do use it to make clear that its job is to help me be a better writer: not to be a sycophant, not to change things just for the sake of change, and to suggest the things that will most help the reader.

I also have a few notes in it about avoiding recommending certain “AI-style” cliches like “it’s not this, it’s that.” Also, a specific one for me: “don’t suggest changing ‘fucked up’ to ‘messed up.’” It does that a lot for my writing.

But that’s not all. I also feed in Techdirt samples — a collection of ten of my favorite articles — so it gets a sense of what a “Techdirt article” looks like. On top of that, I give it a “Masnick Style Guide” that I created by feeding a bunch of Techdirt articles into three different LLMs, asking each to produce a style guide, and then having NotebookLM combine them all into one giant “Masnick Style Guide.”

Then, I feed it any links, including earlier stories on Techdirt, that are relevant, before finally writing out a prompt that can be pretty long. In this test case, I fed it the PDF file of the decision. I also gave it Techdirt’s previous stories about Judge Boasberg.
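For the curious, the context-stacking workflow described above can be sketched roughly like this. To be clear, this is a hypothetical illustration of the general idea, not Masnick’s actual tooling; the function name, section headers, and sample strings are all made up for the example:

```python
# Rough sketch of the context-assembly workflow described in the post:
# system prompt, then sample posts, then the style guide, then any
# related links, and finally the task prompt itself.

def build_context(system_prompt, sample_posts, style_guide, links, task_prompt):
    """Stack the editing context in the order the post describes."""
    parts = [system_prompt]
    parts.append("## Sample Techdirt posts\n" + "\n\n---\n\n".join(sample_posts))
    parts.append("## Masnick Style Guide\n" + style_guide)
    if links:
        parts.append("## Related coverage\n" + "\n".join(links))
    parts.append("## Task\n" + task_prompt)
    return "\n\n".join(parts)

context = build_context(
    system_prompt="Help me be a better writer; don't be a sycophant.",
    sample_posts=["Sample post one...", "Sample post two..."],
    style_guide="Avoid 'it's not this, it's that' constructions.",
    links=["https://www.techdirt.com/..."],
    task_prompt="Draft a Techdirt-style post about the attached ruling.",
)
```

The ordering matters in practice: the persistent instructions and style material come first, and the one-off task prompt comes last, closest to where the model starts generating.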

Finally, I gave it a starting prompt with a fair bit of explanation of what angle I was hoping to see in a Techdirt post on this topic. So here’s my full prompt:

Can you write a Techdirt style first draft of a post (see the attached Techdirt post samples, as well as the even more important Masnick style guide, which you should follow) about the attached ruling in the JGG v. Trump case by Judge James Boasberg. I have also attached a page of previous articles about Judge Boasberg which you should consider, especially as some reference this same case.

You may also want to highlight that Judge Boasberg just was vindicated after the DOJ filed a vexatious complaint against him because of earlier rulings in this case, but that complaint has been easily dismissed. The crux of this article, though, should be on the very strong language Boasberg uses, including the astoundingly candid statement that he felt the government “apparently was not interested in participating in this process” and how he read their response as telling “the Court to pound sand.” There are other parts of the ruling that are useful as well. It should also call out the specific remedy here, and how weak it feels, given the situation explained in the ruling and the DOJ’s actions. Yes, they lost the case, but this still feels like a slap on the wrist with no real consequences for the DOJ.

As always with Techdirt posts, instead of doing in-line quotes, we usually do blockquotes, often of one or even multiple paragraphs from the original sources, so please adhere to that style. Take all of this info and draft the best Techdirt post you can regarding the topic.

And… you can see the results embedded below. I’ve also embedded my original article (though you can just go read it on the site directly). I’m amused that it added a “from the such and such dept” line as well as tags, both of which are core parts of Techdirt posts, but not things I asked it for.

I’m extremely biased, as this is a question of who writes better, me or the machine, and I still think I’m better than the machine. But, I also have to admit it’s not bad. It’s decent. If another writer had submitted it, there are definitely changes and edits I’d make, especially to the crappy clichéd ending. But it’s way better than what most people think of as traditional ChatGPT output, in part because of all the context and detailed instructions I fed it.

Just for fun, I then also asked my editing tool to do its own edit on its own writing (usually not a good idea to let it grade its own homework), but here’s what it had to say:

solid draft. the “pound sand” framing works well and the frustration comes through. few things:

the tension setup: you flag “frustratingly toothless” in graf 3 then say “on one level this is a win” much later. that’s the right editorial instinct—there IS genuine tension here—but you’re burying the lede on your own argument. consider setting up that complexity earlier so readers know they’re getting nuance, not whiplash.

missing a key detail from the ruling: Boasberg notes that “even Plaintiffs’ counsel could not represent how many of the 137 are still in Venezuela and how many wish to proceed with habeas.” that’s devastating. the chaos is so complete that the lawyers literally don’t know where everyone is. worth including.

the numbered remedies block: you cut it off mid-list. the 5th point actually matters—there’s a March 13 deadline for the government to file a Status Report explaining how/when they’ll transport people back. that’s your next milestone. either include it or acknowledge you’re excerpting.

the Venezuela carve-out: you mention it but undersell it. the ruling specifically says it’s “mindful” of the “situation in Venezuela remains in flux” given “the United States’s recent law-enforcement action there.” that’s pretty ominous language from a federal judge about whatever the hell the administration is doing in Venezuela right now. might be worth a parenthetical noting this deserves its own article.

the closer: “I wouldn’t hold my breath” is fine but feels like boilerplate. you’ve got stronger material to end on—maybe the March 13 deadline, or something that points forward rather than just shrugging.

one small thing: “a complaint that was summarily dismissed” in graf 2—might want “recently dismissed” or similar since it was just last week. “summarily” suggests procedure when really the story is how embarrassing the dismissal was.

overall structure is good. the piece does what Techdirt does well: walks through the actual document, quotes liberally, and doesn’t pretend the “win” is bigger than it is.

You’ll note that it also calls out its own weak ending, just like I did. It is kind of wild to see the AI call out its own weaknesses, which makes you wonder: if it can spot them, why didn’t it catch them earlier and fix them? And that, in turn, makes you wonder if it’s just coming up with a critique for the sake of giving one (though, that ending really is weak).

The other thing I’ll note is that, again, this was still a fair bit of work. It was less than writing the whole thing myself, but even writing out the prompt itself took time, and this piece would still need a fair bit of editing for publication, which would probably erase any time savings.

Overall, though, you can see how the technology is certainly getting better. I still don’t think it can write as well as I do, but there are some pretty good bits in there.

Once again, this tech remains quite useful as a tool to assist people with their work. But it’s not really good at replacing your work. Indeed, if I asked the AI to write articles for Techdirt, I’d probably spend just as much time rewriting/fixing it as I would just writing the original in the first place. It still provides me very good feedback (on this article that you’re reading now, for example, the AI editor warned me that my original ending was pretty weak, and suggested I add a paragraph talking more about the conclusions which, uh, is what I’m now doing here).

I honestly think the biggest struggle with AI over the next year or so is going to be between the people who insist it can totally replace humans, leading to shoddy and problematic work, and the smaller group of people who use it as a tool to assist them in doing their own work better. The problems come in when people overestimate its ability to do the former, while underestimating its ability to do the latter.



Comments on “How Close Can AI Get To Writing A Techdirt Post?”

62 Comments
This comment has been deemed insightful by the community.
MightyMetricBatman says:

AI critiquing itself makes a lot more sense when you understand it is not aware, but a statistical model that puts words in front of itself statistically.

When it is trained on bad writing, it will do the bad writing because it was statistically common in similar situations.

And because it has critiques of the bad writing in the training, it also does that as well.

Once you take away the human element, it makes a lot more sense what it does. And also that it cannot create anything truly new or unique.

Anonymous Coward says:

Re:

AI critiquing itself makes a lot more sense when you understand it is not aware

In other words, that calling it “intelligent” is puffery. Large language models are just the latest in a long string of technologies to be hyped up as such. (Also see: neural networks, Markov models, “expert” systems, automated video game opponents, Bayesian networks, genetic algorithms… people eventually called bullshit on applying the term “intelligence” to that stuff, and I expect it’ll happen again.)

Sok Puppette says:

Which “ChatGPT”? ChatGPT, even GPT 5.2, is a collection of at least 3 or 4 models. Not all of them are available on the free tier, or even the lowest paid tier. And if you take the “auto” option, it may or may not actually send your prompt to the “thinking” model.

why did it not catch those earlier and fix them

Because that’s not how it works. The core model is a word by word text predictor. It cannot go back and change text it’s already output. If you used the “thinking” version, and especially if you paid extra for “pro”, it might do something similar to that by running over essentially the whole article in its “thoughts” before giving it to you. But I don’t think the “chain of thought” is usually that verbose or detailed.

You’ll also find that if you sit and go back and forth with it for very long, randomly discussing the article, and especially if you go off on any tangents, it’ll start to get stupider. You have to think about what’s in its context.

If you’d used the canvas or something and asked it to critique its work that way, it could have done that. It might even have been able to do that without the canvas, since the context does remember what the model just said. But you would have had to ask.

That’s the sort of thing scaffolding does. People who use this stuff seriously don’t just take the output of the chat UI; they use a huge variety of wild and wonderful frameworks.

This comment has been flagged by the community.

Scout says:

Lame

How Close Can AI Get To Writing A Techdirt Post? That’s the wrong question to ask, especially in the year of our lord 2026.

The fascist project known as “AI” will inevitably go the way of asbestos and CFCs, with society rightly choosing to reject it. But to get there, we need a million little acts of “Hey, I respect you less because you use AI to edit your damn blog posts”. Sheesh.

Anonymous Coward says:

It's fairly unsurprising...

…that a machine built to do plagiarism is good at plagiarism.

Those of us who have done serious work in AI/NLP/etc. were building models that did the same thing decades ago — although much more slowly and in much more limited contexts. What’s available now isn’t really any better: it’s just running on acres of CPUs, using a lot of power and water, and doing its part to accelerate global warming. So while you’re correct that it isn’t particularly bad from a stylometric viewpoint, it’s also not very impressive: it’s exactly what we’d expect a stochastic parrot to do.

See: On the Dangers of Stochastic Parrots | Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency

Drew Wilson (user link) says:

Crud. I was thinking of running a similar experiment to see how well it can handle a Freezenet post. TechDirt beat me to it. Ah well. Should’ve pulled the trigger sooner, but had a lot on my plate since I don’t have staff helping me out with news writing like TechDirt does.

One of the things I had long suspected was that it takes just as much time and effort to come up with a prompt that might make something somewhat passable. Interesting that this was one of the conclusions here.

n00bdragon (profile) says:

It is kind of wild to see the AI call out its own weaknesses

It’s not wild at all. You asked it to critique a block of text (that it previously generated the block of text is irrelevant; it doesn’t know that and it wouldn’t care if it did). All it’s doing is predicting the sorts of words that appear in critiques of things. It has no ability to make value judgements, it just apes what judgement looks like.

Go ahead. Feed it Shakespeare (or whatever writing you find most unimpeachable) and it will happily dunk on it all day long.

Actually, better suggestion. Go ahead and feed it the worst TechDirt article ever written, the one you hate the most. I bet the criticism will look more or less the same.

Anonymous Coward says:

Re:

I think this is the most common mistake I see people making about these technologies and it INFURIATES me.

Having to point it out to people always brings to mind that classic line from the film produced during the last AI bubble of the 1980s, Short Circuit.

“It’s a Machine. It doesn’t get pissed off. It doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes. IT JUST RUNS PROGRAMS.”

Every time someone ascribes any kind of intentionality, reflection, or understanding to an entirely predictable, simplistic process of pattern matching, it’s a fundamental category error. It bears a superficial resemblance to something else, but that is not what it’s doing. And worse, it is often designed in such a way as to deliberately obfuscate that fact and trip the operator up into making that mistake.

For example, in how you can often ask these systems to explain the process by which they reached a conclusion, and what they will actually do is generate, on the fly, an often entirely plausible approximation of a hypothetical process by which it could be imagined to have come to that conclusion, based on the most common such explanations in its dataset…

Anonymous Coward says:

The other thing I’ll note is, again, this actually was still a fair bit of work. It was less than writing the whole thing myself, but even just writing out the prompt itself took time, and this piece would still need a fair bit of editing anyway for publication which would probably take away any time benefit.

Here’s a question for you Mike:

Do you find your editing style changes between a first draft written by AI and one written by a human?

I ask, because one of the main challenges is that AIs are designed to produce believable output, which means a lot of the “tells” we’d normally catch in human-generated prose are absent in AI-generated prose, while AIs can slip some stuff (content/structure) in that a human would never do, and because of that, are easier for a human editor to miss.

This has been the biggest hurdle for me with AI-generated content; I sometimes find myself mentally exhausted after an editing session, because I have to check every word and phrase, not just read through it and leave it up to my editorial reflexes trained on human writing to catch when something’s going off the rails.

Anonymous Coward says:

You get some use out of AI; fine and great, have at it. I still think it’s a scam like NFTs and crypto. Some limited utility doesn’t mean it isn’t. Hell Trump University was a scam, and yet, some classes happened.

But besides that. I’m struggling to understand why the time and effort apparently required for ‘quality’ output couldn’t be plugged into simply editing for yourself instead.
I mean if you still have to put work in, then why not build/practice your own skill rather than constantly making sure a bot’s shoes are tied properly? Do you not enjoy editing, or writing?

Also one last little thing.

we usually do blockquotes, often of one or even multiple paragraphs from the original sources, so please adhere to that style.

“Please?” Is it just a tool, or isn’t it.

Anonymous Coward says:

Re:

I mean if you still have to put work in, then why not build/practice your own skill rather than constantly making sure a bot’s shoes are tied properly? Do you not enjoy editing, or writing?

I can’t speak for Mike, but yesterday I used an LLM to go through a big CSS stylesheet, reorganize it, and cut it down by about 60%. It’s thousands of lines of code that I took from a large framework and am gradually whittling down to just what I need. I would not have learned anything from repeatedly running CSS pruning software and tediously comparing the output to make sure nothing broke. If I did it entirely by hand, or rewrote it from scratch, it would take me weeks if not months, and I would not enjoy it. I don’t think my use of this tool means I don’t enjoy or want to learn more about programming. If anything, by making these daunting tasks more approachable, it gives me more confidence and mental bandwidth to focus on more valuable aspects of the task.

“Please?” Is it just a tool, or isn’t it.

It’s trained on human text. It will respond in kind. Being polite helps.

Anonymous Coward says:

Re: Re:

I have fewer issues with AI use in areas like coding and research, doing tasks that would be unreasonable or unfeasible for a human to do unaided.
It’s in these areas I see AI most as a tool, with true utility.

If that was all it was, I’d have much less– though not no– problem with the tech. But I see it as primarily a scam because just like crypto and NFTs before it, AI keeps getting pushed on us instead of really being sold to us, and to the clear benefit of the already wealthy.

Reducing tedium is well and good, that is a proper tool. But it keeps being used and further trained for things that are, ultimately, about reducing the value of human skill.

Mike teaching a bot to write articles for him cheapens the value of his writing ability; why would anyone, including himself, bother having him write anything when you could just get a bot to do it? Why should I bother coming to TechDirt if I could simply train a bot to write TechDirt-styled articles for me at home?
They don’t need to be ‘perfect.’ McDonald’s is far from perfect, as food goes. Convenience is king, as ever.
Why bother with human writers and artists. Why bother with human lawyers. Why bother with human therapists. Why bother with human relationships? Social and emotional skills were already undervalued; chatbots of increasing complexity and capability won’t help.

And this is all besides any energy, environmental, and economical concerns. The rich are using it for financial games to make themselves even wealthier while shoving it down our throats, externalities be damned as always.
AI has some uses I can see as legitimate but basically everything it’s doing beyond that is devaluing humans and enriching oligarchs.

So is it worth it?
Personally I simply don’t think it is, even though I can, at the same time, be happy you saved yourself some time and bother.

Arianity (profile) says:

Overall, though, you can see how the technology is certainly getting better.

To me, that’s the most worrisome part: it will only get better. That said, if you’re interested, you likely could improve this quite a bit just by using slightly more sophisticated setups. For instance, the way you’ve got one AI critiquing another, you can use that as a feedback loop for adversarial improvement. Or maybe even something like a LoRA finetune? It sounds like you’re kind of still using it like a chatbot, but you can get more complex than just a system prompt these days, if you wanted to really push the tech.

and the smaller group of people who use it as a tool to assist them in doing their own work better.

Just speaking personally, I would feel like something expressive is lost, if I were to write this way. And that makes me a bit sad. I do think AI has its uses, but for writing in particular I worry about losing my voice.

Anonymous Coward says:

Re:

110%.
Even when it’s instructed to mimic a specific tone or author, these “AI” written or assisted pieces come off as bland to me. These models trend towards shaving off the edges that can make a piece memorable or an author unique in the name of “simplifying” or “streamlining” the prose.

Personally, I would rather the whole “AI” project as it exists today be chucked in the bin. But if people are GOING to use it, I hope they keep a good sense of what makes their writing unique and worthwhile rather than allow these tools to homogenize their work.

Anonymous Coward says:

I’m amused that it added a “from the such and such dept” line as well as tags, both of which are core parts of Techdirt posts, but not things I asked it for.

I’d be surprised if those things hadn’t been added. They are standard for every TechDirt post, so the AI doesn’t need a lot of training to make the correlation. Besides, it’s pretty likely that an LLM has already seen a batch of TechDirt posts as part of its standard training set.

Anonymous Coward says:

Re:

If you don’t want to read the AI-generated article, that’s fine, but you should at least read the human-written article about the AI-generated article before you comment underneath it. The human-written article explains that the AI-generated article did take a significant amount of the human’s time, and we can suppose that the human-written article itself took the normal amount of time for a human-written article, too.
