How Close Can AI Get To Writing A Techdirt Post?
from the man-vs.-machine dept
I’ve talked on Techdirt about just a few of my AI-related experiments over the past few years, including how I use it to help me edit pieces, which I still write myself. I still have no intention of letting AI write for me, but as the underlying technology has continued to level up, every so often I’ll run a test to see if it could write a better Techdirt post than I can. I don’t think it’s there (and I’m still not convinced it will ever get there), but I figured I can share the process with you, and let you be the judge.
I wanted to pick a fairly straightforward article, rather than a more complex one, just to see how well it works. In this case, I figured I’d try it with the story I published last week about Judge Boasberg ruling against the Trump administration and calling out how the DOJ barely participated in the case, and effectively told him to “pound sand” (a quote directly from the judge).
I know that just telling it to write a Techdirt article by itself will lead to pretty bland “meh” content. So before I even get to the prompt, there are some steps I need to include. First, over time I continue to adjust the underlying “system prompt” I use for editing my pieces. I won’t post the entire system prompt here as it’s not that interesting, but I do use it to make it clear its job is to help me be a better writer, not to be a sycophant, not to try to change things just for the sake of change, and to suggest things that will most help the reader.
I also have a few notes in it about avoiding recommending certain “AI-style” cliches like “it’s not this, it’s that.” Also, a specific one for me: “don’t suggest changing ‘fucked up’ to ‘messed up.’” It does that a lot for my writing.
But that’s not all. I also feed in a set of Techdirt samples, a collection of ten of my favorite articles, so it gets a sense of what a “Techdirt article” looks like. On top of that, I give it a “Masnick Style Guide” that I created after feeding a bunch of Techdirt articles into three different LLMs, asking each to produce a style guide, and then having NotebookLM combine them all into one giant “Masnick Style Guide.”
Then, I feed it any links, including earlier stories on Techdirt, that are relevant, before finally writing out a prompt that can be pretty long. In this test case, I fed it the PDF file of the decision. I also gave it Techdirt’s previous stories about Judge Boasberg.
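(For the curious, here’s a rough sketch of how that context-stacking might look if you scripted it rather than using a chat UI. The function and file labels are invented for illustration; the actual workflow is just pasting these same pieces into the tool in this order.)

```python
# Hypothetical sketch of assembling the layered context described above:
# reusable pieces (system prompt, sample posts, style guide) go first,
# then the per-article sources, then the actual task prompt.
# All names here are illustrative, not the real tool's API.

def build_context(system_prompt, samples, style_guide, sources, task_prompt):
    """Stack the reusable context ahead of the per-article prompt."""
    parts = [
        ("system", system_prompt),
        ("reference", "SAMPLE POSTS:\n" + "\n---\n".join(samples)),
        ("reference", "STYLE GUIDE:\n" + style_guide),
    ]
    # Per-article material: the ruling PDF text, prior coverage, etc.
    for label, text in sources:
        parts.append(("source", f"{label}:\n{text}"))
    parts.append(("user", task_prompt))
    return parts

context = build_context(
    system_prompt="Help me write better; don't be a sycophant.",
    samples=["Sample post 1...", "Sample post 2..."],
    style_guide="Masnick style guide text...",
    sources=[("JGG v. Trump ruling", "ruling text..."),
             ("Prior Boasberg coverage", "article list...")],
    task_prompt="Draft a Techdirt-style post about this ruling...",
)
```

The point of the ordering is that the stable, reusable material frames every request the same way, while only the sources and the task prompt change per article.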
Finally, I gave it a starting prompt with a fair bit of explanation of the angle I was hoping to see in a Techdirt post on this topic. So here’s my full prompt:
Can you write a Techdirt style first draft of a post (see the attached Techdirt post samples, as well as the even more important Masnick style guide, which you should follow) about the attached ruling in the JGG v. Trump case by Judge James Boasberg. I have also attached a page of previous articles about Judge Boasberg which you should consider, especially as some reference this same case.
You may also want to highlight that Judge Boasberg was just vindicated after the DOJ filed a vexatious complaint against him because of earlier rulings in this case, but that complaint has been easily dismissed. The crux of this article, though, should be on the very strong language Boasberg uses, including the astoundingly candid statement that he felt the government “apparently was not interested in participating in this process” and how he read their response as telling “the Court to pound sand.” There are other parts of the ruling that are useful as well. It should also call out the specific remedy here, and how weak it feels, given the situation explained in the ruling and the DOJ’s actions. Yes, they lost the case, but this still feels like a slap on the wrist with no real consequences for the DOJ.
As always with Techdirt posts, instead of doing in-line quotes, we usually do blockquotes, often of one or even multiple paragraphs from the original sources, so please adhere to that style. Take all of this info and draft the best Techdirt post you can regarding the topic.
And… you can see the results embedded below. I’ve also embedded my original article (though you can just go read it on the site directly). I’m amused that it added a “from the such and such dept” line as well as tags, both of which are core parts of Techdirt posts, but not things I asked it for.
I’m extremely biased, as this is a question of who writes better, me or the machine, and I still think I’m better than the machine. But I also have to admit it’s not bad. It’s decent. If another writer had submitted that, there are definitely changes and edits I’d make, especially to the crappy, clichéd ending. But it’s way better than what most people think of as traditional ChatGPT output, in part because of all the context and detailed instructions I fed it.
Just for fun, I then also asked my editing tool to do its own edit on its own writing (usually not a good idea to let it grade its own homework), but here’s what it had to say:
solid draft. the “pound sand” framing works well and the frustration comes through. few things:
the tension setup: you flag “frustratingly toothless” in graf 3 then say “on one level this is a win” much later. that’s the right editorial instinct—there IS genuine tension here—but you’re burying the lede on your own argument. consider setting up that complexity earlier so readers know they’re getting nuance, not whiplash.
missing a key detail from the ruling: Boasberg notes that “even Plaintiffs’ counsel could not represent how many of the 137 are still in Venezuela and how many wish to proceed with habeas.” that’s devastating. the chaos is so complete that the lawyers literally don’t know where everyone is. worth including.
the numbered remedies block: you cut it off mid-list. the 5th point actually matters—there’s a March 13 deadline for the government to file a Status Report explaining how/when they’ll transport people back. that’s your next milestone. either include it or acknowledge you’re excerpting.
the Venezuela carve-out: you mention it but undersell it. the ruling specifically says it’s “mindful” of the “situation in Venezuela remains in flux” given “the United States’s recent law-enforcement action there.” that’s pretty ominous language from a federal judge about whatever the hell the administration is doing in Venezuela right now. might be worth a parenthetical noting this deserves its own article.
the closer: “I wouldn’t hold my breath” is fine but feels like boilerplate. you’ve got stronger material to end on—maybe the March 13 deadline, or something that points forward rather than just shrugging.
one small thing: “a complaint that was summarily dismissed” in graf 2—might want “recently dismissed” or similar since it was just last week. “summarily” suggests procedure when really the story is how embarrassing the dismissal was.
overall structure is good. the piece does what Techdirt does well: walks through the actual document, quotes liberally, and doesn’t pretend the “win” is bigger than it is.
You’ll note that it also calls out its own weak ending, just like I did. It is kind of wild to see the AI call out its own weaknesses, though it makes you wonder: if it can spot those flaws, why didn’t it catch and fix them earlier? And that, in turn, makes you wonder if it’s just coming up with a critique for the sake of giving one (though that ending really is weak).
The other thing I’ll note is that, again, this was still a fair bit of work. It was less than writing the whole thing myself, but even just writing out the prompt took time, and this piece would still need a fair bit of editing for publication, which would probably erase any time savings.
Overall, though, you can see how the technology is certainly getting better. I still don’t think it can write as well as I do, but there are some pretty good bits in there.
Once again, this tech remains quite useful as a tool to assist people with their work. But it’s not really good at replacing your work. Indeed, if I asked the AI to write articles for Techdirt, I’d probably spend just as much time rewriting/fixing it as I would just writing the original in the first place. It still provides me very good feedback (on this article that you’re reading now, for example, the AI editor warned me that my original ending was pretty weak, and suggested I add a paragraph talking more about the conclusions which, uh, is what I’m now doing here).
I honestly think the biggest struggle with AI over the next year or so is going to be between the people who insist it can totally replace humans, leading to shoddy and problematic work, and the smaller group of people who use it as a tool to assist them in doing their own work better. The problems come in when people overestimate its ability to do the former, while underestimating its ability to do the latter.


Comments on “How Close Can AI Get To Writing A Techdirt Post?”
You mean the pro-AI articles from people associated with right wing lobbyist groups weren’t written by AI?
Re:
Were they written to the standards of a Techdirt post?
Re: Re:
They were on Techdirt and approved by the owner, so they think so, even if their presence lowers the overall standard.
Re: Re:
Standards are getting lower every day as Mike sells out for AI.
inb4 the mouth-foaming AI haters overrun the comments section
Re:
It’s like they think foaming at the mouth at any mention of AI is a substitute for having a personality.
Re:
Oh no! What will I do about the AI lover! Oh wait, nothing, he is probably so fucking stupid he will die jobless in a few years.
Re:
Idk maybe you should check yourself for rabies.
AI critiquing itself makes a lot more sense when you understand it is not aware, but a statistical model that puts one word in front of another statistically.
When it is trained on bad writing, it will produce bad writing, because that was statistically common in similar situations.
And because it has critiques of the bad writing in the training, it also does that as well.
Once you take away the human element, it makes a lot more sense what it does. And also that it cannot create anything truly new or unique.
Re:
In other words, that calling it “intelligent” is puffery. Large language models are just the latest in a long string of technologies to be hyped up as such. (Also see: neural networks, Markov models, “expert” systems, automated video game opponents, Bayesian networks, genetic algorithms… people eventually called bullshit on applying the term “intelligence” to that stuff, and I expect it’ll happen again.)
Which “ChatGPT”? ChatGPT, even GPT 5.2, is a collection of at least 3 or 4 models. Not all of them are available on the free tier, or even the lowest paid tier. And if you take the “auto” option, it may or may not actually send your prompt to the “thinking” model.
Because that’s not how it works. The core model is a word-by-word text predictor. It cannot go back and change text it’s already output. If you used the “thinking” version, and especially if you paid extra for “pro,” it might do something similar to that by running over essentially the whole article in its “thoughts” before giving it to you. But I don’t think the “chain of thought” is usually that verbose or detailed.
You’ll also find that if you sit and go back and forth with it for very long, randomly discussing the article, and especially if you go off on any tangents, it’ll start to get stupider. You have to think about what’s in its context.
If you’d used the canvas or something and asked it to critique its work that way, it could have done that. It might even have been able to do that without the canvas, since the context does remember what the mode just said. But you would have had to ask.
That’s the sort of thing scaffolding does. People who use this stuff seriously don’t just take the output of the chat UI; they use a huge variety of wild and wonderful frameworks.
This comment has been flagged by the community.
Lame
How Close Can AI Get To Writing A Techdirt Post? That’s the wrong question to ask, especially in the year of our lord 2026.
The fascist project known as “AI” will inevitably go the way of asbestos and CFCs, with society rightly choosing to reject it. But to get there, we need a million little acts of “Hey, I respect you less because you use AI to edit your damn blog posts”. Sheesh.
Re:
Being this wrong about it in 2024 was understandable. In 2026 it’s delusional.
Re:
Hmm. Again, as I’ve said before, I think it’s odd to call it fascist. Especially as you can use the tech to break free of giant tech companies and take back control over your own life.
Is it better for me to have full control over my own tools, or have to hand it over to a giant tech company to control?
As for your respect, well, your call obviously. But where do you draw the line? Is it a sin to use spellcheck or grammar check? Because that’s basically a form of AI as well.
Re: Re:
Explain Palantir and Grok then.
Re: Re: Re:
“Big companies make/use AI, therefore one can’t use AI to take back control from giant tech companies”.
Is that your implication?
Re: Re: Re:
That is a complete non sequitur to what I said.
What an odd comment.
Re: Re:
Many people can’t see past the creator to the tool they create. Of course, if that were a universal principle then the machine gun would by definition be a wonderful tool, as it was intended to make wars impossible…
Also, I would much rather read less-polished writing that is AI-free. No one expects you to be perfect. In fact, our imperfections are what make us interesting. 🙂
Re:
That is not the norm in my experience. The criticism in the comments of my articles is fierce when I make a small mistake or error. And the stories where AI helps me edit first before handing it off to my human editor get less of that hatred.
You may be the exception, but I have to go based on what I see across the board. And by the numbers, we have gotten better reactions to the stories I’ve written that were reviewed by an AI.
Re: Re:
Of course. The AI has been optimized on getting a positive response from humans. Not on being factually correct, not on effectively getting the message across, but on getting a positive response. If it needs to be sycophantic, or emit baseless confidence in order to get that response, it will be.
Re: Re: Re:
Again, the tool I used is NOT trained that way.
Re: Re:
NotebookLM is powered by Gemini and (for reasons well-established) is unethical to use in any capacity. Whether it “does the job” is moot.
It's fairly unsurprising...
…that a machine built to do plagiarism is good at plagiarism.
Those of us who have done serious work in AI/NLP/etc. were building models that did the same thing decades ago, although much more slowly and in much more limited contexts. What’s available now isn’t really any better: it’s just running on acres of CPUs, using a lot of power and water, and doing its part to accelerate global warming. So while you’re correct that it isn’t particularly bad from a stylometric viewpoint, it’s also not very impressive: it’s exactly what we’d expect a stochastic parrot to do.
See: On the Dangers of Stochastic Parrots | Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
Re:
Your definition of plagiarism does not match the world’s definition of plagiarism. Nothing in the article it wrote for me is “plagiarism” under any reasonable definition of the word. Saying otherwise makes me think you don’t know what you’re talking about.
And, it’s WAY fucking better than a stochastic parrot because of all the context.
Re: Re:
If you’d actually studied AI and neural networks and NLP and syntactic pattern recognition and expert systems and statistical pattern recognition, if you actually learned how to do the math, you’d understand this much better. But you haven’t, and so you’re easily impressed that a system built to copy patterns of text…can copy patterns of text.
Whereas those of us who have studied such things and can do the math understand that not only is this unremarkable, it’s laughable, because people were doing the same thing decades ago using vastly less computing capacity. So why didn’t we build these things? Because building a plagiarism machine like this is pointless, unethical, and really quite stupid.
It’s a parlor trick that fools the uneducated, no more.
Re: Re: Re:
I mean, I literally have spoken to some of the leading experts on this and not a single one agrees with you.
And, no, what the machines were doing decades ago is nothing like this. Hell, what machines were doing a year ago is nothing like this.
I get it. There are a group of people who concluded years ago that this tech doesn’t work and they will never, ever explore what is actually happening.
I have made it clear that the tech is not perfect, that it is overhyped, that it has problems, and that it doesn’t work as a writer replacement. But I did think it was an interesting experiment to see how far you could stretch it.
And you insist that it’s all a “parlor trick”? You don’t look smart. You look like a fool.
Re: Re:
Did AI write that reply for you also?
Re: Re: Re:
It’s so weird that I keep explaining how and why I don’t let AI write for me, and why it’s no good for that, and you think it’s clever to say “hurr durr ai wrote that reply.”
Are you fucking stupid?
Crud. I was thinking of running a similar experiment to see how well it can handle a Freezenet post. TechDirt beat me to it. Ah well. Should’ve pulled the trigger sooner, but had a lot on my plate since I don’t have staff helping me out with news writing like TechDirt does.
One of the things I had long suspected was that it takes just as much time and effort to come up with a prompt that might produce something somewhat passable. Interesting that this was one of the conclusions here.
Re:
Yes. It is possible, though unlikely, that if you put enough effort into the initial work you can get passable results regularly, but I still think the overall amount of work would remain about the same. It’s a fun thought experiment though.
In some cases, for simple news coverage of straightforward stories (like high school sports or something), I could see how it might work better? But for anything that involves analysis and thinking, it still comes up short.
Re: Re:
Would you be ok if someone else were to try it on your writing? Obviously not publishing it or anything, but ever since your editing post I’ve been curious whether I could get it to make a passable simulacrum. Feels kind of rude to just do it without permission though.
Re: Re: Re:
Try what? To try to write a Mike Masnick/Techdirt style post? Fine with me if someone wants to try.
Re: Re:
Interesting. Well, if I have a moment in time, I might still run with this kind of experiment. In my case, I’ll be doing it from the perspective of someone who doesn’t use AI at all and coming in cold. Might be an interesting thought experiment to see how people with little to no exposure to AI content generation can run into problems and what thought processes can contribute to leading to certain outcomes (whether good or bad).
It’s not wild at all. You asked it to critique a block of text (that it previously generated the block of text is irrelevant; it doesn’t know that and it wouldn’t care if it did). All it’s doing is predicting the sorts of words that appear in critiques of things. It has no ability to make value judgments; it just apes what judgment looks like.
Go ahead. Feed it Shakespeare (or whatever writing you find most unimpeachable) and it will happily dunk on it all day long.
Actually, better suggestion. Go ahead and feed it the worst TechDirt article ever written, the one you hate the most. I bet the criticism will look more or less the same.
Re:
I think this is the most common mistake I see people making about these technologies and it INFURIATES me.
Having to point it out to people always brings to mind that classic line from the film produced during the last AI bubble of the 1980s, Short Circuit.
“It’s a Machine. It doesn’t get pissed off. It doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes. IT JUST RUNS PROGRAMS.”
Every time someone ascribes any kind of intentionality, reflection, or understanding to an entirely predictable, simplistic process of pattern matching, it’s a fundamental category error. It bears a superficial resemblance to something else, but that is not what it’s doing. And worse, it is often designed in such a way as to deliberately obfuscate that fact and trip the operator up into making that mistake.
For example, in how you can often ask these systems to explain the process by which they reached a conclusion, and what they will actually do is generate, on the fly, an often entirely plausible approximation of a hypothetical process by which it could be imagined to have come to that conclusion, based on the most common such explanations in its dataset…
Here’s a question for you Mike:
Do you find your editing style changes between a first draft written by AI and one written by a human?
I ask, because one of the main challenges is that AIs are designed to produce believable output, which means a lot of the “tells” we’d normally catch in human-generated prose are absent in AI-generated prose, while AIs can slip some stuff (content/structure) in that a human would never do, and because of that, are easier for a human editor to miss.
This has been the biggest hurdle for me with AI-generated content; I sometimes find myself mentally exhausted after an editing session, because I have to check every word and phrase, not just read through it and leave it up to my editorial reflexes trained on human writing to catch when something’s going off the rails.
Re:
I don’t know since I’m not using the AI for writing, so I don’t spend much time editing pieces written by AI (that I know of). I assume if I was using AI for writing then there would be issues like what you describe.
It’ll never have Bode’s flair for sublime new insulting names and adjectives for our modern villains.
Re:
After reading this comment, I asked Claude Opus 4.6 what it thought and got the following:
—
Let me look through Karl Bode’s writing on Techdirt to get a sense of his style and the kinds of colorful language he uses. OK, I’ve now read through a substantial amount of Karl’s writing. Let me synthesize what I’ve found into a fun analysis.
This is a great question — Karl has a really distinctive voice. After going through a bunch of his pieces, here’s a taxonomy of the Bode Insult Lexicon, organized by villain type:
The Signature: “Brunchlords”
His most iconic coinage. Used for affluent, out-of-touch executives — especially in media and tech — who run companies into the ground while living comfortably. He frequently modifies it: “trust fund brunchlords,” “fail-upward brunchlords,” “feckless trust fund brunchlords,” “center-right brunchlords,” “invertebrate brunchlords.” It’s become practically a Techdirt tag unto itself.
The Extended Brunchlord Universe — terms he stacks and combines:
His Favorite Adjectives of Contempt:
“Feckless” is probably his single most-used word — he layers it onto everything (feckless corporations, feckless press, feckless coverage, feckless capitulation). Then: “shitty,” “dodgy,” “shoddy,” “half-assed,” “half-cooked/under-cooked,” “lazy,” “mindless,” and “badly automated.”
Structural Insults (the compound noun phrases Karl might coin next):
Based on his patterns — combining class signifiers + incompetence + a vivid noun — here are some coinages that feel very Bode-adjacent and that he might plausibly reach for:
The core formula is: [class/wealth signifier] + [incompetence descriptor] + [vivid, slightly archaic or food-adjacent noun]. “Brunchlords” works so well because it nails all three — brunch signals affluent leisure, and “lords” signals unearned feudal authority. Any new coinage would need that same compression.
Re: Re:
Some of its suggestions have potential! I feel like Karl could punch ’em up a bit more, but they’re on track at least.
Re: Re:
God I hate these AI summaries.
Re: Re:
I think “Spreadsheet ghouls” and maybe “ad-engagement slop merchant” are the only good suggestions. The cosplayer ones don’t really make sense. The rest are either awkward or don’t really have any punch.
The overall summary is pretty good – reinforces your point that this kind of tool can be good for synthesizing a large amount of information into a summary. But, not to expect anything special when it comes to creative output.
Have you seen this interesting article about a term for what LLMs do: semantic ablation?
https://www.theregister.com/2026/02/16/semantic_ablation_ai_writing/
Re: Re: Re:
I sorta liked “merger goblins” too. But yeah, on the whole, again it shows the limitations of the tech. It’s not bad but it’s not good either.
As for that article, yeah, I can see how that’s possible if you just let the tech run without any human input. But it’s also why, when I use it for editorial help, I’ll reject suggestions that I think are doing that kind of thing (and I add more to the system prompt telling it not to recommend such changes).
There’s this weird bit among many AI haters that they seem to think anyone who uses AI accepts every suggestion AI makes. As if they have no agency on their own.
Just remember: alongside that, a few dozen images of CSAM were likely made, along with dozens of AI scams, and someone just saw their energy bill increase even more.
Re:
I mean… you could say the same thing about the internet itself. And yet, here you are, using it. These attacks are really quite silly and don’t make you look savvy or smart. They just make you look silly.
You get some use out of AI; fine and great, have at it. I still think it’s a scam like NFTs and crypto. Some limited utility doesn’t mean it isn’t. Hell Trump University was a scam, and yet, some classes happened.
But besides that. I’m struggling to understand why the time and effort apparently required for ‘quality’ output couldn’t be plugged into simply editing for yourself instead.
I mean if you still have to put work in, then why not build/practice your own skill rather than constantly making sure a bot’s shoes are tied properly? Do you not enjoy editing, or writing?
Also one last little thing.
“Please?” Is it just a tool, or isn’t it.
Re:
I can’t speak for Mike, but yesterday I used an LLM to go through a big CSS stylesheet, reorganize it, and cut it down by about 60%. It’s thousands of lines of code that I took from a large framework and am gradually whittling down to just what I need. I would not have learned anything from repeatedly running CSS pruning software and tediously comparing the output to make sure nothing broke. If I did it entirely by hand, or rewrote it from scratch, it would take me weeks if not months, and I would not enjoy it. I don’t think my use of this tool means I don’t enjoy or want to learn more about programming. If anything, by making these daunting tasks more approachable, it gives me more confidence and mental bandwidth to focus on more valuable aspects of the task.
It’s trained on human text. It will respond in kind. Being polite helps.
Re: Re:
I have fewer issues with AI use in areas like coding and research, doing tasks that would be unreasonable or unfeasible for a human to do unaided.
It’s in these areas I see AI most as a tool, with true utility.
If that was all it was, I’d have much less– though not no– problem with the tech. But I see it as primarily a scam because just like crypto and NFTs before it, AI keeps getting pushed on us instead of really being sold to us, and to the clear benefit of the already wealthy.
Reducing tedium is well and good, that is a proper tool. But it keeps being used and further trained for things that are, ultimately, about reducing the value of human skill.
Mike teaching a bot to write articles for him cheapens the value of his writing ability; why would anyone, including himself, bother having him write anything when you could just get a bot to do it. Why should I bother coming to TechDirt if I could simply train a bot to write TechDirt-styled articles for me at home?
They don’t need to be ‘perfect.’ McDonald’s is far from perfect, as food goes. Convenience is king, as ever.
Why bother with human writers and artists. Why bother with human lawyers. Why bother with human therapists. Why bother with human relationships? Social and emotional skills were already undervalued; chatbots of increasing complexity and capability won’t help.
And this is all besides any energy, environmental, and economical concerns. The rich are using it for financial games to make themselves even wealthier while shoving it down our throats, externalities be damned as always.
AI has some uses I can see as legitimate but basically everything it’s doing beyond that is devaluing humans and enriching oligarchs.
So is it worth it?
Personally I simply don’t think it is, even though I can, at the same time, be happy you saved yourself some time and bother.
Re: Re: Re:
What a weird thing to say. So first of all, the article makes it pretty clear I didn’t teach a bot to write articles for me. I experimented with whether or not that was possible, because it’s an interesting theoretical question, and I feel that I showed it was not possible.
Second, even in this case where it was not possible, I (not the bot) needed to supply a lot of the key elements of what to write about, what the focus of the article was, etc. So even if someone else could actually train a bot to write Techdirt-style articles, it would still be missing the key decisions that I would always make: “what is worth writing about” and “what is the opinion/focus in writing about it.”
Uh, as the article makes clear, because the bots just aren’t that good.
Re: Re: Re:2
What a weird reply. In the article:
I’m sorry, but how is it not accurate to summarize the process as: you taught it to write one of your articles for you.
That you did so for funzies– that you weren’t making the experiment to replace yourself– doesn’t change the process. That in your estimation it failed its assignment doesn’t change the process.
You taught it your writing. No?
And yet that doesn’t really address my concern about imperfect but ‘good enough’ AI McArticles.
Ah but about this, about it ‘not being there,’ that’s how you feel now, because that’s as good as it is now. This doesn’t address how everyone is continuously racing to make these things better… or at least are pouring money into the bubble in the hopes of doing so.
Unless it’s even more of a scam than I give it credit for, at some point AI will become the fast food version of… well, whatever you like. Every kind of media you want, among other services.
And then what? I think people who can afford it, in either time or money, will seek out the artisanal; the human-made.
What about everybody else? Given how aggressively it’s being pushed now, how much choice will even be left at that point? Think of food deserts, except for everything AI can do.
Call me a doomer or hater if you like. I fully admit my views of, expectations for, this technology are cynical and grim. But can you blame me?
Re: Re: Re:3
All that training was for the editing functionality. It’s so that when it’s editing the articles that I write, it understands what a Techdirt article really looks like.
That I used it in this test case to see if it could write an article (and decided it could not do it particularly well) is unrelated to the training of the tech.
I thought that was obvious. It is bizarre that you read it otherwise.
For the purpose of editing my work better. Which it has done for nearly two years. This wasn’t new (though I’ve made it better and better over that time).
Sure it does. If they are imperfect AI McArticles, they won’t be very good and won’t be very useful.
I mean… I agree? It’s exactly the point of this article. The AI written stuff just won’t be very satisfying, except maybe for very formulaic, less important content (basic financial stories, maybe?)
Again, I feel fairly confident that those uses won’t ever be all that good. And if they somehow are… I am equally confident that it would then open up new opportunities.
It’s weird. You hate AI, but you seem to vastly overestimate what it can do compared to me… who is optimistic about the tech.
I don’t blame you. I just think you… are wrong.
Why does Techdirt keep pushing this “AI isn’t horrible, actually” viewpoint? Very disappointing.
Re:
Because it’s not horrible? Because I’m not going to lie to readers to appease a bunch of people who insist the sky is green?
The tech is overhyped. But it is legitimately interesting, and it’s valuable to explore where and why it’s interesting and where and why it’s overhyped. That’s what I’m doing here: exploring the limits of the tech, explaining why I don’t think it’s good for writing.
Would you prefer I lie and pretend it doesn’t exist? Or that it can’t do the things it can do?
I don’t think most people would like it if I lied. If you want to be lied to there are other sites for you.
To me, that’s the most worrisome part: it will only get better. That said, if you’re interested, you likely could improve this quite a bit just by using slightly more sophisticated setups. For instance, the way you’ve got one AI critiquing another, you can use that as a feedback loop for adversarial improvement. Or maybe even something like a LoRA finetune? It sounds like you’re still using it like a chatbot, but you can get more complex than just a system prompt these days, if you wanted to really push the tech.
Just speaking personally, I would feel like something expressive is lost if I were to write this way. And that makes me a bit sad. I do think AI has its uses, but for writing in particular I worry about losing my voice.
Re:
110%.
Even when it’s instructed to mimic a specific tone or author, these “AI” written or assisted pieces come off as bland to me. These models trend towards shaving off the edges that can make a piece memorable or an author unique in the name of “simplifying” or “streamlining” the prose.
Personally, I would rather the whole “AI” project as it exists today be chucked in the bin. But if people are GOING to use it, I hope they keep a good sense of what makes their writing unique and worthwhile rather than allow these tools to homogenize their work.
Re: Re:
It’s weird to read this as if that… wasn’t the entire point of this article. I did the experiment and showed exactly what you said. It does shave the edges off. The writing isn’t as good.
I’d be surprised if those things hadn’t been added. They are standard for every TechDirt post, so the AI doesn’t need a lot of training to make the correlation. Besides, it’s pretty likely that an LLM has already seen a batch of TechDirt posts as part of its standard training set.
Bluntly, if someone can’t be bothered to put time into writing something, I’m not gonna put time into reading it.
Re:
If you don’t want to read the AI-generated article, that’s fine, but you should at least read the human-written article about the AI-generated article before you comment underneath it. The human-written article explains that the AI-generated article did take a significant amount of the human’s time, and we can suppose that the human-written article itself took the normal amount of time for a human-written article, too.
Really grateful to Mike for thoroughly documenting his decline into LLM-psychosis. Hopefully it will serve as evidence at Altman, Musk, et al.’s trial.
Re:
What a strange comment.
Me: experiments with AI, shows why it’s not yet suitable for the purpose many people think it’s suitable for.
You: OH LOOK AT YOU WITH LLM-PSYCHOSIS.
I think maybe the only one dealing with psychosis here is you, dude.