AI Might Be Our Best Shot At Taking Back The Open Web
from the hear-me-out dept
I remember, pretty clearly, my excitement over the early World Wide Web. I had been on the internet for a year or two at that point, mostly using IRC, Usenet, and Gopher (along with email, naturally). Some friends I had met on Usenet were students at the University of Illinois at Urbana-Champaign, and told me to download NCSA Mosaic (this would have been early 1994). And suddenly the possibility of the internet as a visual medium became clear. I rushed down to the university bookstore and picked up a giant 400ish page book on building websites with HTML (I only finally got rid of that book a few years ago). I don’t think I ever read beyond the first chapter. But what I did do was learn how to right click on webpages and “view source.”
And from that, magic came.
I had played around with trying to build websites, and I remember another friend telling me about GeoCities (I can’t quite recall if this was before or after they had changed their name from their original “Beverly Hills Internet”) handing out web sites for free. You just had to create the HTML pages and upload them via FTP.
And so I started designing really crappy websites. I don’t remember what the early ones had, but like all early websites they probably used the blink tag and had under construction images and eventually a “web counter.”
But the thing I do remember was the first time I came across Derek Powazek’s Fray online magazine. It was the first time I had seen a website look beautiful. This was without CSS and without Javascript. I still remember quite clearly an “issue” of Fray that used frames to create some kind of “doors” you could slide open to reveal an article inside.
Right click. View source. Copy. Mess around. A week later I had my own (very different) version of the sliding doors on my GeoCities site, but using the same HTML bones as Derek’s brilliant work.
You could just build stuff. You could look at what others were doing and play around with it. Copy the source, make adjustments, try things, and have something new. There were, certainly, limitations of the technology, but it was incredibly easy for anyone to pick up. Yes, you had to “learn” HTML, but you could pick up enough basics in an afternoon to build a decent looking website.
But then two things happened, and it’s worth separating them because they’re different problems with different causes.
First, the technical barrier went up. CSS and Javascript opened up incredible possibilities to make websites beautiful and interactive, but they also meant it was a lot more difficult to just view source, copy, and mess around. The gap between “basic functional website” and “actually looks good” widened into a chasm that required real expertise to cross. Plenty of dedicated people learned these skills, but the casual tinkerer — the person who’d spend an afternoon copying Derek’s frames to make sliding doors — increasingly couldn’t keep up.
But the technical complexity alone didn’t kill amateur web building. The centralization did. While there was an interim period where people set up their own blogs, it quickly moved to walled “social media gardens” where some giant tech company decided what your page looked like. Why bother learning CSS when you could just dump text in a Facebook box and reach more people? The incentive to build your own thing evaporated, replaced by the convenience of posting to someone else’s platform under someone else’s (hopefully benign) rules.
These two problems reinforced each other. The harder it got to build your own thing, the more attractive the walled gardens became. The more people moved to walled gardens, the less reason there was to learn to build.
The rise of agentic AI tools is opening up an opportunity to bring us back to that original world of wonder where you could just build what you wanted, even without a CS degree. And here I need to be specific about what I mean by “agentic AI” — because too many people are overly focused on the chatbots that answer questions or generate text or images for you. I’m talking about AI systems that can actually do things: write code, execute it, debug it, iterate on it based on your feedback. Tools like Claude Code, Cursor, Codex, Antigravity, or similar coding agents that can take a description of what you want and actually build it.
For all those years that tech bros would shout “learn to code” at journalists, the reality now is that being able to write well and accurately describe things is a superpower that is even better than code. You can tell a coding agent what to do… and for the most part it will do it.
Let me give you the example that still kind of blows my mind. A few weeks ago, in the course of a Saturday — most of which I actually spent building a fence in my yard — I had a coding agent build an entire video conferencing platform. It built a completely functional platform with specific features I’d wanted for years but couldn’t find in existing tools. I’ve now used it for actual staff meetings. The fence took longer to build than the software.
All it took was describing what I wanted to an agent that could code it for me. And it addresses both problems I described earlier: it lowers the technical barrier back down to “can you describe what you want clearly?” while also enabling you to build your own thing rather than accepting whatever some platform offers you.
Over the last few months I’ve been finding I need to retrain my brain a bit about what we accept and learn to deal with vs. what we can fix ourselves. In the past I’ve talked about the learned helplessness many people feel about the tech that we use. We know that it’s vaguely working against us, and we all have to figure out what trade-offs we’re willing to accept to accomplish whatever goals we have.
But what if we could just fix things rather than accepting the tradeoffs?
I’ve talked in the past about how I’ve used an AI-assisted writing tool called Lex over the past few years, which doesn’t write for me, but is a very useful editorial assistant. Over the last few months, though, I decided to see if I could effectively rebuild that tool myself, fully controlled by me, without having to rely on a company that might change or enshittify the app. I actually built it directly into the other big AI experiment I’ve spoken about: my task management tool, which I’ve also moved away from a third party hosting service onto a local machine. Indeed, I’m writing this article right now in this tool (I first created a task to write about it, and then by clicking a checkbox that it was a “writing project” it automatically opens up a blank page for me to write in, and when I’m done, I’ll click a button and it will do a first pass editorial review).
But the amazing thing to me is that I keep remembering I can fix anything I come across that doesn’t work the way I want it to. With any other software I have to adjust. With this software, I just say “oh hey, let’s change this.” I find that a few times a week I’ll make a small tweak here or there that just makes the software even better. In the past, I would just note a slight annoyance and figure out how to just deal with software not working the way I wanted. But now, my mind is open to the fact that I can just make it better. Myself.
An example: literally last night, I realized that the page in the task tool that lists all the writing projects I’m working on was getting cluttered by older completed projects that were listed as still being in “drafting” mode. With other tools (including the old writing tool I was using), I would just learn to mentally compartmentalize the fact that the list of articles was a mess and train myself to ignore the older articles and the digital clutter. But here, I could just lay out the issue to my coding agent, and after some back and forth, we came up with a system whereby once a task on the task management side was checked off as “completed” the corresponding writing project would similarly get marked as completed and then would be hidden away in a minimized list.
I keep coming across little things like this that, in the past, I would have been mildly annoyed by, but needed to live with. And it’s taking some effort to remind myself “wait, I don’t have to live with this, I can fix it.” Rather than training my brain to accept a product that doesn’t do what I want, I can just tell it to work better. And it does.
And, the more I do that, the more I start to open up my mind to possibilities that were impossible before. “Huh, wouldn’t it be nice if this tool also had this other feature? Let’s try it!” I find that the more I do this, the bigger my vision gets of what I can do because the large segment of things that were fundamentally impossible before are now open to me, just by describing what I want.
It really does give me that same underlying feeling that I felt when I was first playing around with HTML and being able to “just make things.” Except, now, it’s way more powerful. Rather than copying Derek’s use of HTML frames to create “sliding doors” on a webpage, I can create basically anything I dream up.
Then, when combined with open social protocols, you can build in social features or identity to any service as well — without having to worry about getting other users. They’re already there. For example, my task management tool sends me a “morning briefing” every day that, among other things, scans through Bluesky to see if there’s anything that might need my attention.
Now, there are legitimate criticisms of “vibe coded” tools. Critics point out that AI-generated code can be buggy, insecure, hard to maintain, and that users who can’t read the code can’t verify what it’s actually doing. These are real concerns — for certain contexts.
The thing is, most of these criticisms apply to tools being built as businesses to serve customers at scale. If you’re shipping code to millions of users who are depending on it, you absolutely need security audits, proper testing, maintainable architecture. But that’s not what I’m talking about. I’m talking about building totally customized, personal tools for yourself—tools where you’re the only user, where the stakes are “my task list doesn’t sync properly” rather than “customer data got leaked.”
There’s also a more subtle concern worth addressing: is this actually democratizing, or does it just shift which skills you need? After all, you still need to accurately describe what you want, debug when things go wrong, and understand what’s even possible. That’s different from learning HTML, but it’s still a skill. I think the honest answer is that the kind of skill needed has shifted. “Learn to code” becomes “learn to think clearly and describe things precisely” — which happens to be a superpower that writers, editors, and domain experts already have. The barrier has moved to territory that many more people already inhabit.
It’s also an area where you can easily start small, learn, and grow. I started by building a few smaller apps with simpler features, but the more I do, the more I realize what’s possible.
Also, I’d note that this is actually an area where the LLM chatbots are kind of useful. Before I kick off an actual project with a coding agent, I’ve found that talking it through with an LLM first helps sharpen my thinking on what to tell the agent. I don’t outsource my mind to the chatbot, and will often reject some of its suggestions, but in having the discussion before setting the agent to work, it often clarifies tradeoffs and makes me consider how to best phrase things when I do move over to the agent.
What gets missed in most conversations about AI and the open web: these two pieces need each other. Open social protocols without AI tools stay stuck in the domain of developers and the highly technical — which is exactly why adoption has been slow. And AI tools without open protocols just replicate the old problem: you’re building cool stuff, but you’re still trapped inside someone else’s walls.
Put them together, though, and something clicks. Open protocols like ATProto give AI agents bounded, consent-driven contexts to work in — your agent can scan your Bluesky feed because the protocol allows that, not because some company decided to grant API access that it could revoke tomorrow. And AI agents give regular people the ability to actually build on those protocols without needing an engineering team. My morning briefing tool scans Bluesky not because I wrote a bunch of API calls, but because I described what I wanted and a coding agent made it happen.
Each piece makes the other more powerful and safer.
Blaine Cook — who was Twitter’s original architect back when it was still a protocol-minded company — recently wrote a piece at New_ Public that gets at this from the infrastructure side:
My long-standing hope has been that we’re able to move past the extractive, monopolizing, and competitive phase of social networks, and into a new era of creativity, collaboration, and diversity. I believe we’re poised to see a Cambrian explosion of new ways to interact online, and there’s evidence to suggest that it’s already happening: just today, I saw three new apps to share what you’re reading and watching with friends, each with their own unique take on the subject!
In this light, LLMs may be a killer app for decentralized networks — and decentralized networks may be the missing constraint that makes LLM integrations safer, more legible, and more aligned with user interests. It’s a symbiosis, and I believe we need both pieces. Rather than trying to integrate LLMs with everything, I think that deliberately bounded, consent-driven integrations will produce better outcomes.
Cook’s framing of LLMs as a “killer app for decentralized networks” is exactly right — and it runs the other way too. Decentralized networks might be the killer app for making AI tools something other than another vector for corporate lock-in, or just another clone of an existing centralized service.
Now, I can already hear the objection, and it’s a fair one: am I really suggesting we escape dependence on giant tech platforms by… becoming dependent on giant AI companies? Companies that have scraped the entire web, that burn massive amounts of energy and water, that are built on the labor of underpaid content moderators, and that seem to want to consolidate power in ways that look an awful lot like the last generation of tech giants?
Yeah, I get it. If the pitch is “use OpenAI to free yourself from Meta,” that’s just switching landlords.
But that’s not actually where this is heading. The trajectory matters more than the current snapshot.
First, if you’re using frontier models through the API or a pro subscription, you have significantly more control than most people realize. Your data generally isn’t feeding back into training. You’re using the model as a tool, not handing over your content to a platform. That’s a meaningfully different relationship than the one you have with social media companies, where you’re feeding them data, and their business model is based on monetizing that data.
But much more importantly, you don’t have to use the frontier models at all. Open source AI is maturing fast — models like Qwen, Kimi, and Mistral can run entirely on certain hardware, no cloud required. They’re behind the frontier models, but only by a bit. Six months to a year, roughly. But for a lot of the “build your own tools” use cases I’m describing, they’re already good enough.
Musician and YouTuber Rick Beato recently showed how easy it was for him to install local models on his own machine, and why he thinks the largest AI companies will eventually be undercut by home AI usage:
I’ve been doing something similar with Ollama hosting a Qwen model locally. It’s slower and less sophisticated. But it works. And I already use different models for different tasks, defaulting to local when I can. As those models improve — and they are improving quickly — the frontier labs become less necessary, not more. If you’re a professional, perhaps you’ll still need them. But if you’re just building something for yourself, it’s less and less necessary.
This is what the “AI is just another Big Tech power grab” critics are missing: the technology is moving toward decentralization, not away from it. That’s unusual. Social media started decentralized and got captured. AI is starting captured and getting more open over time. The economic pressure from open source models is real, and it’s pushing in the right direction. But it’s important we keep things moving that way and not slow down the development of open source LLMs.
On the training data question — which is a legitimate concern whether or not you think training on copyrighted works is fair use — efforts like Common Corpus are building large-scale training sets from public domain and openly licensed materials. Anil Dash has been writing about what “good AI” looks like in practice — AI that’s transparent about its training data, that respects consent, that minimizes externalities rather than ignoring them. There are ways to do this right.
None of this is fully solved yet. But the direction is clear, and the tools to do it responsibly are improving faster than most critics acknowledge.
When you use AI as a tool (rather than letting it use you as the tool), it can give you a kind of superpower to get past the learned helplessness of relying on whatever choices some billionaire or random product manager made for you. You can get past having to mentally compensate for your tools not really working the way you think they should work. Instead, you can just have the internet and your tools work the way you want them to. It’s the most excited I’ve been about the open web since those early days of realizing I could right click, copy and then figure out how to build sliding doors out of frames.
The promise of the open web was colonized by internet giants. But the power of LLMs and agentic coding means we can start to take it back. We can build customized, personal software for ourselves that does what we want. We can connect with communities via open social protocols that allow us to control the relationship rather than a billionaire intermediary. This is what the Resonant Computing Manifesto was all about, and why I’ve argued ATproto is so key to that vision.
But the other part of realizing the manifesto is the LLM side. That made some people scoff early on, but hopefully this piece shows how these things work hand in hand. These agentic AI tools give the power back to you and me.
Thirty years ago, I right-clicked on Derek Powazek’s beautiful website, viewed the source, copied it, messed around with it, and built something new. I didn’t ask anyone’s permission. I didn’t agree to terms of service. I didn’t fit my ideas into someone else’s template. I just built the thing I wanted to build.
Then we gave that away. We traded it for convenience, for reach, for the path of least resistance — and we got walled gardens, manipulated feeds, and the quiet understanding that our tools would never quite work the way we wanted them to, because they weren’t really ours.
Today’s equivalent of right-clicking on Derek’s site is describing what you want to a coding agent, watching it build, telling it what’s wrong, and iterating until it works for you. Different mechanics, same magic. And this time, with open protocols and increasingly open models, we have a shot at keeping it.
Let’s not give it away again.
Filed Under: agentic ai, ai, coding, html, llms, open social, open web, protocols


Comments on “AI Might Be Our Best Shot At Taking Back The Open Web”
“here-me-out”?
Re:
He forgot to make an ai agent to spell correctly.
Tech-bro here. Well, programmer, to be specific.
Would it surprise you to learn that most of “learning to code” really is just “learn to think clearly and describe things precisely” … in an artificial language?
Probably not, as you learned HTML (the L stands for ‘language’).
Your AI tool for websites is to HTML/CSS/JavaScript as Python is to machine language. A way to automate the repetitive precise detailing needed, and in a language easier to understand.
Re:
I’ve worked software QA for over a decade and I will say with my entite chest that programmers are absolutely not trained how to describe anything clearly or precisely, and most devs don’t even recognize that as a part of their job, because (as far as they are concerned) that’s what the BA and project managers are supposed to do.
Not to be a nudge, but Powazek had the copyright to the material, you copied it without his permission, so you broke the law … which sounds exactly like what content-creators are concerned about with AI.
Re:
Even if you ignore the entire concept of fair use, copyright law does not grant the privilege of preventing people from learning from copyrighted work, nor does it apply to purely utilitarian expression (which, in many cases, is what small HTML and CSS snippets would be).
Re:
I mean, that’s simply not true. Copying the source of web pages was something widely considered to be fair use, and widely done.
That’s like claiming learning to play a Beatles song to learn to play guitar is copyright infringement. Not how it works.
Re: Re:
If you go and play it for money, yes it is.
If the website you build is 90% from another website, yeah you are breaking copyright
Re: Re: Re:
Again, that’s wrong on so many levels.
Re: Re: Re:
Most GeoCities websites and most websites in the 90s weren’t commercial in nature. But you added this to move the goalposts.
Again, you added the 90% here. Where did Mike say 90%? He said he copied code from one part of the website just for the sliding doors effect and then made changes.
Tags aren’t able to be copyrighted. The text content is obviously different. What’s left is de minimis at best and likely fair use otherwise.
Not to mention that you don’t know if the page you copied it from was the original source of the code. In the 90s, people posted code to share what they could do and everyone learned to code that way because there were few resources to learn any other way.
Re: Re: Re:
There are many, many people playing Beatles songs for money, and it’s all nice and legal as long as the correct ass-caps are paid off.
Kinda stealing someone else’s comment here, but I do want to say how…darkly hilarious it is that even though they likely saw The Matrix, LLM developers still decided to use the term “agents”.
Also: Greetings, fellow GeoCities HTML learner! I gotta say, HTML5/semantic HTML coding (in conjunction with CSS) is probably a little easier to learn that table-based layout coding. 🤣
Re:
Table layouts and frames had plenty of problems (infinitely recursive frames were a particularly fun one), but I wouldn’t say CSS makes things easier. It’s just a different arsenal of footguns.
Re:
“Agent” as a compsci term predates The Matrix.
Whether the Wachowskis were aware of that and intentionally used it as a term with a double-meaning, or whether it was just a coincidence, I couldn’t tell you.
Re: Re:
100% correct; some reasarch shows it at least dates back to 1995’s Artificial Intelligence: A Modern Approach (Russell & Norvig).
I can’t trace back how far it goes in science fiction at the moment, but I suspect at least back to the 1970s.
Re: Re: Re:
The “autonomous agent” Wikipedia page has a citation from 1991; but, like you, I suspect the term would have been used much earlier in science fiction. And I expect that the Wachowskis knew about it.
Then again, it’s quite a literal application of the term: “One who acts for, or in the place of, another (the principal), by that person’s authority”.
Re:
This comment is best viewed in Netscape Navigator 3.0.
Nothing helps take something back quite like allowing billion dollar companies to stripmine it as they see fit then build near impenetrable walls around the crater and then grant them a perpetual license to retain complete control of every road leading there as well as all the signage and erect tollbooths every 500 meters.
Pull the other one.
You may not like copyright holders but the bright shiny AI driven future is a worse version of the web3 imagined by VC twats like Marc Andreesen. At least Web3 wouldn’t continually hammer everything else online with scrapers and openly plan to have AI generate knockoffs of webpages it seems unworthy of a direct link, you know, the thing Google have filed patents for. If people just surrender to AI the way enthusiasts wish, we won’t eventually be treated better no more than we were when trickle down economics was tried for the hundredth time, they will just wall off yet another chunk of the commons entirely
Re:
Open source AI isn’t going to ‘save us’, it’s going to be a figleaf at best, likely eventually sponsored by Google or OpenAI to ward off antitrust efforts in countries that still have laws. It reminds me of all the good and ethical crypto projects people swore would redeem that bundle of scammers, best part of a decade on we’re still waiting for them to make something other than headlines and unnecessary e-waste.
Meanwhile in the real world, AI is causing untold damage to actual open source projects as slop is poured in through any opening they can find, and even if the AI generating things is open too, free range and locally sourced, the sloppy output will still be slop..? Oh, and the environmental issues haven’t even been touched in any meaningful way yet. I doubt the good AI will be building green energy and desalinisation plants to power it all, no more than Anthropic have.
Re: Re:
Agreed. Not sure what world the author is living in, but there isn’t going to be an upside of openness via AI. The companies building this crap aren’t going to let that happen. It’s weird that so many tech boosters don’t see what is going on and are still falling for these pie in the sky stories.
Re: Re: Re:
At this point I’m convinced Masnick is getting a fat paycheck from Altman or something, there’s no way a serious tech journalist would write this nonsense.
Re: Re: Re:2
I’m not. I won’t even use OpenAI’s products.
And, considering that nearly all the positive feedback I’ve gotten on the piece, the idea that “no serious tech journalist would write this” seems… silly. The only pushback I’ve gotten is… here? Almost all of the replies on Bluesky were positive. My email is overflowing with people appreciating the piece.
Do you have an actual critique?
So far, every single critique seems like “nuh uh stupid.” Or just “AI BAD, YOU BAD.”
It’s difficult to take people serious if they don’t have a legitimate critique. I agree the tech is oversold. I agree it’s bad that it’s being shoved into places where it doesn’t belong, or being forced upon those who don’t want to use it.
I agree with the problematic aspect of the data center builds.
But, I’m trying to point out that there are aspects that are legitimately useful, and in particular useful in GETTING OUT FROM UNDER big tech.
That’s the thing, if you kept your critiques to the actual problems, I’d agree with you. But people like you who insist that anyone who does find it useful must be paid off, just make ANY critique of AI seem stupid because you seem unwilling to live in reality and deal with actual nuances and trade offs.
So, seriously: go away. This site is for people who can live in the nuances, and you obviously can’t.
Re: Re: Re:3
Interesting that you’re mainly seeing pushback from people who are on the actual open web.
Re: Re: Re:4
I’m not. What makes you assume that? I’ve mostly been getting support from people on the open web.
This comment has been flagged by the community. Click here to show it.
Re: Re: Re:5
And then everybody clapped.
Re: Re: Re:5 bluesky
you said bluesky. while I’ll believe you if you say you’re also getting support on the open web, the commenter you’re responding to is just responding to your statement as written.
Re: Re: Re:2
I think it comes down to the fact that when we view someone as credible, reliable, and well informed, it’s hard to reconcile that when it turns out that they have a blind spot in a particular area, and so we’ll try to find explanations for what we can see as actions obviously inconsistent with our image of them.
Re: Re: Re:
It’s just a new aristocracy finding newer, deeper frontiers for enclosure.
Re:
That would be concerning, but I’m trying to figure out how you think that happens. Literally as explained in this piece, I have total control over the tools I’ve built, down to the fact that they live on a local machine sitting on a table 5 fee away from me. They have way less control over it than any online tool I’ve used in 3 decades.
It’s so weird to see people claim that this is all tollbooths and silos, when my literal experience is the exact opposite.
It’s so confusing to me.
Are there reasons to be concerned that the big AI companies will go down that path? Yeah, sure, absolutely. But that’s why it’s important to understand the freedom these tools enable RIGHT NOW and to make sure we don’t lose that.
This comment has been flagged by the community. Click here to show it.
Re: Re:
Because you are ignorant. As nicely as I can, you are so ignorant about AI and programming that what you don’t know will hurt you. It’s not an if, it’s a when.
In order to control something you have to actually know and understand it. You can’t make changes without AI, and you don’t actually know if what you think it does is what it actually does.
Re: Re: Re:
We shall see.
No offense, but that’s the dumbest shit I’ve heard all day today. There are tons of things we control that we don’t fully understand. I control my car, but I have to take it to a mechanic to repair. The nature of innovation is that we often use tools to build things, but we don’t always know how they work. You use a computer, but most people don’t understand how computer hardware works.
Honestly, this “oh you don’t understand it so it’ll be bad” strikes me as nonsense gatekeeping from programmers.
Re: Re: Re:2
But you’re advocating the software equivalent of everyone building their own car.
Re: Re: Re:3
I’m not, though. I’m advocating that for people who want tools that they can control, there are now options available to them that weren’t there before.
And your response is “you’re too stupid to use the tools.”
Re: Re:
I mean, it’s not that surprising, when most people’s personal experience with it is the corporate stuff, is it? Yes, the open source stuff exists, and you did take the time to mention it, but for most people that’s not what they’re experiencing, yet. And that’s going to color how they look at it.
Just need to be patient, and that will change with time. Hopefully, anyway. There are still some (way smaller) barriers, in the same way Apple vs Android or Windows vs Linux exist.
Re: Re:
You’re kidding, right? Google have set AI summary as the default for search and have recently patented technology for using AI to completely rewrite and create facsimiles of pages they deem not up to snuff. They have literally told people if they block AI scraping they may well end up unlisted entirely. We have literally seen what was the gateway to the open internet for many people eraked a 20 foot high barricade in front of it while letting everyone on the other side starve of ad revenue. Microsoft have done likewise, shoving AI into every aspect of Windows and things they control like Github, claiming ownership of everything they can unless people know to opt out and leaving people with no choice but to capitulate in most cases.
You may well see open source alternatives as providing a magical get out of enforced tech oligarchy, but people are busily forcing slop into the open source software and major device makers are making sideloading near impossible for most people so that option may as well not exist. We are watching fresh barriers to the web as it was be erected on a daily basis, and whatever you think open source AI will achieve, it will not, it’s just openwashing a technology that will only ever be a for profit fencing off of the commons, on creative works on a scale the media you rail against could only dream of.
You are not most people. Spend some time looking at what non techies do when they use the internet, speak to normal people, they are not buzzing about open source saving anything, they don’t know how to get these things and likely never will because those gates are closing and you and other columnists here are more focussed on telling people not to yell and try to stop it. You don’t like the present, none of us do, but the future your embracing will be so much worse.
Re: Re: Re:
Again, I honestly don’t understand this position at all. It’s so incredibly defeatist and pointless. You want to give up? Give up, but don’t drag down the people actually working to build better systems and proving you can.
Yup, and at every point we’ve decried that kind of slop and forcing AI into places where people don’t want. At no point and in no way do I support such things. Absolutely every use of AI that I rely on and that I talk about is where I use it entirely by choice and I am the one in control. So you’re complaining about something different, and you seem upset that my discussion isn’t covered by your complaint… so you just… complain?
Again, I have spoken out against the dangerous aspects of this, but the whole point of this discussion is to show people there’s a better way so they don’t have to go down that path.
And your answer is… what? Don’t show that there’s a better way? Don’t show that there are ways to take back control and… what? Hope the tech goes away? Because, dude, the tech isn’t going away. This isn’t NFTs.
Yes, I admit that I’m a half step ahead and that I’m willing to tinker and play around with this stuff, but that’s WHY I’m talking about it. It’s WHY I’m trying to inspire more people to start thinking about how they can use the tech to their advantage, rather than just being upset about how others are trying to force it on you.
Literally yesterday I spent an hour and a half with a very skeptical friend who feels similarly to you about AI, but who also remembered a simple app she had used many years ago. So we sat down in a cafe and tried to rebuild that app.
The point I’m trying to make is that for people who want to build a better internet that doesn’t rely on internet giants, there is such an open opportunity right now.
And your response is “but the internet giants are bad.” No fucking shit, dude. That’s why I’m talking about ways to get outside of those silos and not have to rely on them any more.
I don’t see how that’s a “worse” future. It seems way better. I know it’s way better because I’m already there.
Re: Re: Re:2
Defeatist is accepting a future you do not want and you know full well others do not want as inevitable and trying to polish a turd by openwashing a broken product that benefits the gatekeepers. Refusing to use the technology, being vocal about it and all the countless ethical and environmental issues associated with it is a damn sight more helpful than going, ‘Oh but this form of rolling coal autocorrect is good, even though none of the problems are addressed in any meaningful way… Also surrender all your work to scrapers because you might block the good scrapers, thanks.’
What you are doing is basically laying in front of the partially built gates and showing your belly, saying it will be fine as there’s a gap in the bushes people can still use, and wondering why people are pushing back about it and are pointing out it is hard to find and easily sealed. Open source AI is not the better way for anything, it doesn’t address any of the problems people have and doesn’t change the fact the underlying technology is doing things people could do already, but making them worse.
Again, you need to step back and talk to normal people and get their perspectives, not people agreeing with you in emails, people on blue sky and definitely not the pro AI people from Koch funded think tanks.
Re: Re: Re:3
My theory is that, because regurgitation engine pushers’ ‘fuck copyright, I’ll just take it’ stance aligns so neatly with Masnick’s own, they got an unwarrantly sympathetic hearing from him.
Hell, if they weren’t being pushed by the worst people alive, I’d be more open to experimenting with them even. But that’s not the world we live in.
Re: Re: Re:2
Not for nothing, but what happens when the people building that better Internet make such extensive use of AI tools that they don’t build the skills needed to understand how they’re building said Internet?
One of the things that I appreciate about learning HTML and CSS “the hard way” (i.e., by effectively bruteforcing an understanding through tedious handiwork) was that it gave me an understanding of how HTML and CSS work on a fundamental level. I learned about HTML5’s semantic tags more by reading than experimenting, but having the base understanding of HTML helped me grok those changes and integrate them into my knowledge base. That’s one of the things I worry about with the whole “vibe coding” phenomenon: people using it as an excuse to not learn and understand the fundamentals of whatever code they’re having an AI write. Not that I’m saying you personally support that (I doubt that you do), but I believe it’s a valid concern.
Re: Re: Re:3
I honestly see learning how to use coding agents as a similar kind of skill. I haven’t done any programming in almost 30 years, but I’ve learned so much more about coding in the last few months, it’s insane. That’s because I have my coding agent explain to me, in great detail, what it wants to do and why. I put into some of the instruction files that it should always explain it’s thinking, offer alternatives, and explain pros and cons of different approaches. SO I’m learning a ton about various frameworks, languages, databases, and more.
I have such a better understanding of modern programming than I did just a few months ago.
The AI companies have a huge incentive to build that walled garden. They spent hundreds of billions on the LLMs, and the investors want their money back.
The companies that survive the coming shakedown will be those that manage to build that walled garden, not necessarily the ones with superior technology. Lure the user with convenience, then start extracting value.
Unless it’s technically impossible to build a walled garden, the open AI era will be just as temporary as the open Internet. Because the system we live under is and remains capitalism.
Re:
We shall see. But I think that’s not a guaranteed future. And the way to avoid it happening is for more people to realize how they can free themselves RIGHT NOW with the tools available.
So much of this is such cynical bullshit about “oh, it’ll end bad because everything ends badly so I’m not even going to try.”
The tools work right now to give you more power. And your response is “don’t use them, it’ll never last, they won’t let us ever have power” and that’s just… stupid? I literally cannot understand people who think that way.
You’re giving up before the bad stuff happens, only making your future more likely. Why? Why give them that? Why not use the powers you have available to you right now to break free from that control?
Re: Re:
Mike, don’t argue, use the open source philosophy and show your work. Share your experience of how AI can be democratizing if used correctly. That journey would be a powerful lesson in how to make the tool work for you rather than the other way around.
To the others out there, AI is just another tool at your disposal. Used judiciously, it can enhance your capabilities, relieve you of tedious activities, and speed up your work. Keep your switching costs low and you can move on when your preferred tool goes the enshittification route. Used blindly and it can make you look like a fool, lead you down dead-ends, and lock you into an ecosystem that is difficult to impossible to entangle later on. I, for one, have no problem embracing a new tool so long as I don’t get locked into using it.
Re: Re: Re:
I thought that’s what I was doing with this post!
Re: Re: Re:2
Sorry mate, I got all caught up remembering usenet and gopher that I didn’t read carefully enough. Add in references to Sun SPARC labs and loading slackware from I don’t remember how many floppies and I will slip into full on nostalgia mode.
I guess what I was looking for were specific learning points from your implementation, since I am well into trying something similar with my wine cellar. You can find dissertations about vibe coding, but real, practical lessons requires a ton of digging.
Re: Re: Re:2
Sorry mate, I got all caught up remembering usenet and gopher that I didn’t read carefully enough. Add in references to Sun SPARC labs and loading slackware from I don’t remember how many floppies and I will slip into full on nostalgia mode.
I guess what I was looking for were specific learning points from your implementation, since I am well into trying something similar with my wine cellar. You can find dissertations about vibe coding, but real, practical lessons requires a ton of digging.
Re: Re: Mike... come on.
they’re not saying this because they’re defeatist, they’re saying it because they’re looking at history and in EVERY OTHER EXAMPLE the pattern they’re describing has happened. They’re saying the only winning move is not to play and fighting back with everything they have.
Meanwhile, you – of all people! – are finding some use in the scraps of this technology today, so you’re ignoring the wider context completely and pretending it’s “defeatist” to assume the story will end badly, just because it will obviously end badly. Because the people building this tech are EXPLICITLY SAYING that they will use it to make sure things end badly.
but I guess you have a cheap and almost functional AI proofreader now?
Re: Re: Re:
It is so weird how quickly people dismiss incredibly powerful tools by pretending they’re crappy retreads. When I first wrote about my task management tool that is an incredible productivity booster, some people mocked it “oh you built a crappy spreadsheet.” Now you retort “a cheap and almost functional AI proofreader.”
It’s almost as if you’re bragging about your unwillingness to understand what kinds of tools are being built.
I like your optimism. And I hope you are right.
But I honestly don’t see how to avoid big companies from capturing everything to themselves in a big “Buy ‘n Large” (reference: Wall-E). That’s what’s been happening for a good while now and seems to be accelerating. Because those who make the rules are those who already own everything and have the money to buy everything.
Or maybe we are in yet another tipping point and we’ll bounce back to a virtuous cycle. Who knows.
Re:
I think the point is that what has been shown so far is that these AI companies don’t have the walled garden they would like you to think that they do especially in the coding LLM space. They all have everything from stack overflow scraped and the open source models are clearly distilling claude and other coding models to create their models. Basically if the closed models improve the open ones will too and eventually we get to a place running some of these models locally may only $2000 versus a max subscription at $200 a month. When you are there it pays for itself in a year and will continue to work for years afterwards with a model that is good enough to do what you want. It’s very difficult for any of these companies to stop that happening. Once it does happen the decentralization and ability to self host will make things change. Eventually it may just be open up local AI tell I want to do this. It creates it and then you can say hey I want to deploy this on this raspberry pi I just connected please take all necessary steps an then it just does it. This isn’t theoretical I can already do this with clause code and I am actively saving for hardware to do this locally with kimi and other models.
Fray Hates AI Slop
Hi, I’m the Derek that was mentioned in this article.
It’s nice that anyone remembers Fray. That was a labor of love I worked on for decades. Fray’s mission was to use the web as a canvas for a new kind of art. It was lovingly hand-crafted by me and a team of early web nerds. The sliding door story was actually created by one of our earliest contributors, Alexis Massie, and can still be seen here: https://fray.com/hope/meeting/
Since it was hand-crafted, it still functions decades later.
Fray was about melding new tech (the web) and the oldest storytelling traditions. Everything was made by hand and deeply personal, designed to bring humanity to the web.
Which is why I am horrified that you’re using it as a framing device to promote AI, which is the opposite. It is anti-human, billionare-driven, fascist-enabling slop that is destroying the open web right now.
Genuinely, how dare you.
If you want “view source” for programming, you have it already, it’s called Open Source and it’s powering most of the web to this day. But what you’re describing is not viewing source, it’s automated creation, and it’s the exact opposite of what we did at Fray.
I wanted Fray to be remembered, but not like this.
Re:
I responded to Derek on Bluesky but I’ll also respond here. I appreciate his take here and understand where he’s coming from. I am sorry that he feels I’m somehow misrepresenting the legacy of the site that he created. But the point here was not to suggest that he or The Fray support AI and I don’t think the article suggests that. It was to express the way both things made me feel. There’s a similar feeling I got to learning how to build websites from The Fray as I get from building more advanced systems now with agentic tools.
I’m not going to lie about the similarity of the feeling or misrepresent it.
I still feel that the analogy works for me, and so far many other people I’ve heard from agree to a similar. Dan Hon even pointed out that a few weeks ago he’d beaten me to the analogy by posting on Bluesky that “agentic coding is view-source for “apps” and if you were around earlier, you know how much of a big fucking deal view-source was for the web and everything that came after”
https://bsky.app/profile/danhon.com/post/3mgxsmaghvk2g
So clearly some others are feeling that too. I understand that many people just hate the technology altogether, though as I tried to explain in the piece, there are increasingly ethical ways to use the tech.
Also, all sorts of stuff we do rely on tech and automation in some forms or another. To me this again is an extension of that. These are tools that allow humans to do stuff, but all internet activities involve technology tools in some form or another.
I feel bad that Derek is upset about this because I respect him, but nothing in the piece was designed to suggest endorsement by him. It was designed to reveal the emotional relevance of the moment, and I can’t deny the similarity in the feeling.
Re: Re:
One more thing I’ll add here: I also hate AI slop. But literally nothing in what I wrote endorsed AI slop. I find it problematic that people automatically lump all AI tools as slop.
Re: Re: Re:
Unfortunately, as long as you’re promoting AI-based tools—even if they’ve been around long before generative AI took off, even if they’re not using the same models/ideas as generative AI—this is the speed bump in the street you’re going to hit every time. Most people have a visceral reaction to generative AI and its slop output, and it isn’t a positive one. That association will be hard to overcome after we’ve seen so many companies try to shoehorn largely unnecessary AI bullshit into their products, whether it’s chatbots or image generators or something else.
I don’t doubt that there is some use to the tools you’ve been talking about. But the problem with hyping up AI tools isn’t just the association with slop—it’s with the idea, intentional or otherwise, that using those tools is a replacement for (or a shortcut to) understanding what the tools are trying to do. Maybe “vibe coding” produces workable code, sure. But “vibe coders” who have no prior experience working with code won’t be able to tell you how or why the code generated by those tools works (or doesn’t work). Coding is a skill that takes time to learn and get better at; “vibe coding” shouldn’t be a replacement for the actual building of a skill. The same idea applies to generative AI: You can generate anime images all day with a generator, but that won’t give you the skill needed to make those images yourself, and there isn’t really much skill in crafting a prompt for a generator or picking out LORAs.
I’ve no beef with algorithmic tools that make life easier for people who use them—so long as the tools are treated as such. The moment people start promoting a tool as a replacement for or a shortcut to actual human skill is the moment I start having an issue with both that tool and the people promoting it.
Re: Re: Re:2
According to no less a source than Harvard Business, the primary effect of the lying engines is to create more work for everyone else — the illusion of productivity it creates is an illusion, because what you’re actually doing is foisting the work on to other people, not removing it.
Re: Re: Re:3
I mostly agree with that. But mainly because so many companies are trying to force AI on people or into products where they don’t belong. That’s never going to work.
What I’m talking about is for people who want to take back control of their own systems and tools, that’s now possible. And it’s a freeing feeling to realize “I wish this writing tool had this feature” and then 30 minutes later, it has it. That’s… way different than bosses going around telling people “you must use this AI tool” and everything thinking that the only purpose of AI is to generate crappy content.
I get the association. But I’m trying to make clear that it’s possible to separate these use cases.
Re: Re: Re:2
And herein lies another crucial problem, marketing deparments have so diluted the definition of “AI” as to make it meaningless, literally everything is now being called “AI” by the businesses pushing it, even just rebranded 40 year old tools with no ML component whatsoever, just because it’s the big buzzword right now. And the public backlash to their user experience with everything getting worse just all gets rolled into “AI” because that is the shared branding.
Re: Re: Re:2
I have to disagree with ‘most’ people. It is trendy for people to and act outraged on command because other people told them to, but there is no true depth to the criticism. Even the most ‘intellectual’ of critiques scream being rationalizations instead of reasoning, like the whataboutism wind turbines face accusing them of being bad because of being made of fiberglass and fretting about inert objects winding up in landfills.
Re: Re: Re:
“literally nothing in what I wrote endorsed AI slop”
They’re the same tools, Mike. From the same companies. Under the same “AI” name you used in the headline.
You may not like the association, but you’re the one associating with it.
Re: Re:
If you feel bad, remove the reference, and insert an apology. Maybe internalize the fact that your opinion and comparison inspired revulsion in Derek.
We often talk about delusional Trump supporters, unable to see out of their own bubble (really enjoying BlueSky, huh?), and turning into real assholes… but that could never happen to you, Mike, you’re much too smart to fall into some sort of cult of bullshit…
Re: Re: Re:
But, it’s an expression of how I feel. You would prefer I lie about how I feel? I never suggested he endorsed it. I explained my perspective of what these tools made me feel. I’m not going to lie because someone got upset.
This comment has been flagged by the community. Click here to show it.
Re:
Thank you, it’s genuinely disgusting how much Mike has been fetishizing AI slop.
Ollama and other machine-hosted LLMs still rely on a mothership that has rules.
If by “open” you mean one in which all websites are created using the same tools and software controlled by the same companies, then sure.
Missing in this, is that llms are trained on almost all the same data, and will recommend the most common things, which results in a bunch of cookie cutters built off the same things.
And if you don’t think things need security. You, are an, idiot.
” For example, my task management tool sends me a “morning briefing” every day that, among other things, scans through Bluesky to see if there’s anything that might need my attention.”
So, it sounds like it can connect out at the very least. What prevents someone from triggering code execution via messages it reads?
Here are some serious questions for you.
Can it be connected to externally? If so is it behind a vpn to connect to? Is anything securing it?
What code packages are being used? Do you even know if you have loaded malware or spyware onto your system?
Are you using public skills? Have you read them in an editor that doesn’t support markdown to ensure they aren’t full of secret hidden commands?
You can easily go and create stuff with php and html still, or html and basic js. Hell you can do and use most of the stuff you did 20 years ago and it will probably still open in a browser. Making a website isn’t with fancy tools and systems isn’t what is blocking the web from being more open.
“here-me-out” Please correct this tupid grammer/spelling error that your AI editor did not catch. Thanks.
Re:
This is the problem when you average everything to get the blandest result, you average all the most common typos, too.
Re: Re:
The idea that this error was merely typographical is dubious.
Flash back
Reads like WIRED around 1997, perusing its readers to install Dream Weaver
This comment has been flagged by the community. Click here to show it.
The lying engines are gonna kill Wikipedia eventually, at which point their accuracy is gonna drop substantially because the only time they approach accuracy is when they quote it.
They ain’t gonna save the open web. They’re one of the forces destroying it, you utter clown.
Those “open source models” like Qwen aren’t truly open source the way open source software is. I can’t review the training data, and the model is produced by an unaccountable corporation. You get no control over how it is trained, you can’t change how it’s trained, you can’t train one yourself.
I, too, look forward to running powerful local models, but I have no delusions about them changing the incentives that dictate how the Internet gets built.
Re:
Thank you.
AI's Impact on the Open Web isn't Hypothetical
This article paints a rosy picture of what could happen with AI tools and tries to link that fantasy to the glory days of the open web, but let’s take a look at what AI tools have done to the actual open web today.
These are real harms to the open web happening right now, without even mentioning the plagiarism that built the models or the environmental and social costs of these technologies, or the way Trump and his cronies are literally using AI slop to promote a war.
The idea that “AI Might Be Our Best Shot At Taking Back The Open Web” is genuinely psychotic when it’s destroying it RIGHT NOW. And you’re helping by promoting this garbage idea that it’s like the good old days because it gave you the feels.
I’ve read you for years, Mike. Relied on you to make sense of the digital world. But this is a shockingly abominable take.
Re:
I agree with most of your complaints (though I will note that Techdirt itself is not walled off, and while we deal with crawlers, we’ve found it to be manageable), which is why I think it is so important to take back control.
But this is also why I included the bit about atproto in there (sure, ring bell), because I think getting things off of the systems controlled by Zuck & Musk & such people matters a lot.
We can’t “scream this is bad” our way back to a better internet. We have to make the tools work for us in ways in which individuals are in control, not billionaires making decisions for us.
I agree with what you’re saying about many of the current downsides of generative AI tools. That’s part of the reason I’m trying to show a better way to use the tools, in which we can take more control over our own digital lives rather than relying on companies who need to “number goes up” every 3 months.
I really do think it’s central to getting people out of digital silos controlled by the worst people.
Re: Re: But it is bad, Mike
“We can’t “scream this is bad’ our way back to a better internet”
You sure about that? We’ve done it many times. We screamed at browser makers until that adopted web standards. We mocked NFTs until they died. Public disapproval matters, and the public hates AI right now, for very good reasons.
Sometimes the public is right.
Re: Re: Re:
A feature change is not killing a tech.
NFTs never had any real utility.
Now you’re conflating the various parts of AI. The public hates slop. The public hates tools that take them away from human connection. The public hates AI being forced into products where they don’t want it.
And I agree with all that and don’t think the companies doing that will think it’s that good a strategy long term.
But the number of people voluntarily signing up to use these tools is massive. The growth in actual usage, not driven by people forcing it on them, is massive. People are finding value in the tools. I worry that they’re mainly flocking to the more problematic companies to do so but that’s why I’m making this plea, like Rick in the video above, to explore other models that you can fully control yourself.
The public keeps using this stuff. In droves. https://www.stlouisfed.org/on-the-economy/2025/nov/state-generative-ai-adoption-2025
Re:
I’d just like to point out:
This is good. People never should have believed what they read online. AI is providing an epistemic corrective.
This predates genAI by years. One of the reasons people adopted LLMs as search engine replacements so quickly is the search engines were already useless.
Re: Re:
The decline of search engines was a self-inflicted wound by sales/marketing departments that would rather show people more ads than filter out the SEO chaff, because really filtering out chaff takes too much effort, but if you filter just enough to keep people from giving up, they’ll scroll through more pages on your site and see more ads.
The same departments are pushing “AI” chat as a replacement. And making money off of the SEO people who use LLMs to generate more slop sites, which further degrades search indexes, which pushes more people to AI queries, which will eventually get poisoned by the slop sites too, but that just means they’ll spend more time feeding questions to the panopticon, which can show them more ads.
Re: Re:
And we already went through this phase with dubious television “news” networks, telemarketing scams, door-to-door salespeople, pseudoscientific quackery, and lying politicians. Among other things; and basically all of this dubious behavior was seen on the Internet before the most recent wave of computer-generated slop. (Like that site that’s been showing up in search engine results for over a decade, purporting to have the PDF manual one is looking for—actually just a spam PDF with popular search keywords and a link back to the fraudulent site.)
Long time read of this site, who had accidentally taken a break in recent times, not intentionally, it just kind of fell off my radar of regular sites I go to. But, always respected the hell out of what you all do here and appreciate you.
Recently remembered to get you back in my rotation, and I’m going to be completely honest and say I’m probably taking you back out of rotation intentionally
This recent string of pro-“AI” articles/sentiment I am seeing going on here is deeply disappointing, considering all the things you all normally seem to write/care about here (privacy, environmental concerns, tech company abuse of public good to line their pockets, etc). These companies, their tools, and the social and environmental impact of them far outweigh any benefits.
There’s been several articles here recently that seem to desperately grasp at the slivers of good while wholly ignoring all the bad that comes with supporting these companies and their fancy search engines being sold to rubes as “intelligence”.
I’ll keep an eye on it, but likely not sticking around, given what I’ve seen.
Re:
I don’t think it’s fair to say that any of the article “wholly ignore” the bad. All of them talk about it and grapple with it (as does this article) and we’ve also written many articles about the bad.
We focus on reporting both the good and the bad, so yes, if you believe their can be no good then we’re not the site for you.
This comment has been flagged by the community. Click here to show it.
Re: Re:
Of course YOU don’t think it’s fair, you’re the dominant source of the blame for Techdirt trying to spit-polish the AI turd.
Please stop openwashing AI. Even the OSI doesn’t pretend these systems are meaningfully open. https://opensource.org/ai
This is genuinely very cool. I do want to offer a few thoughts:
Anything that interfaces with outside content should be considered a potential attack vector. The risk with an app like your Bluesky scanner is something like you don’t sanitize inputs, someone manages to get RCE, and grabs your browser data/session IDs etc. If you’re local, yeah whatever, go nuts (although be careful about allowing agents to e.g. erase your harddrive, if you have stuff you care about on it). There’s also slopsquatting dependencies, even if the app itself is local at runtime.
Security is not something just for big companies. Whenever you interact with the broader internet, that is a security risk. Especially if this model gets more popular.
For what it’s worth, the chatbots can also do things like generate code. In my dabbling to keep up with it, I’ve mostly been coding with chatbots, for now. It is more hands-on than an agentic version, and much more limited, but you can do it. It’s kind of a nice middle ground, actually? The projects I’m working on I can’t commit AI code to prod, but i can still get a nice work flow where i prototype a small function, review it myself line by line, and then rewrite my own flavor.
The agentic stuff is much more powerful, but if anyone wants to dip their toes in, I can recommend it as fairly effective.
This comment has been flagged by the community. Click here to show it.
This is a joke right?
Grandpa here
As a guy closing in on 60 with frightening speed, I feel the comments of many people above. I’m also being forced to use AI by my boss. They don’t tell me how, I just need to somehow. I also see the barbaric influence greedy tech moghuls have on the world in general, and how they make everything worse for everybody.
I can be like the many of the commenters above and close myself off from anything AI. I can refuse to engage with it and hope I can keep my job until I retire. I can keep screaming at the clouds and hope it doesn’t rain.
I chose a bit different path. I’m (slowly) learning to use it. Initially it just caused me extra work, because I didn’t know how to use it. I’ve now gotten to the point where sometimes it actually gives me useful results and saves me time. I’m not spending days to spar with it, I just incrementally learn more about it.
It made me realize something. People like me (I’m an expert in my field) will be more valuable in the future, not less. LLMs can generate everything you ask for but at the end, somebody with actual wisdom (so, contextual knowledge) needs to say if what it generates makes sense. I work with much younger people, much more fluent with these new tools, and they still call me to okay things.
So, kids that are ChatGTPing their way to a degree are actually taking the wrong lesson. As mentioned in the comments several times, it can be dangerous to generate code with a black box, if you don’t understand the code it generates. Therefore people that can revise said code will be worth their weight in gold. Obviously they will use their own AI tools. They’re already PEN testing code with AI models. The same will go for other fields. Law, for example. We’ve all heard that LLMs can go terribly wrong when writing legal texts. So, a wise person needs to revise it.
This was a bit of a preamble to what I wanted to say: Just blindly rejecting everything AI is not a useful attitude. The technology is there and we somehow need to deal with it. Even if you want to fight it, you’ll still need to understand what it is and how it works, to effectively put it back in it’s place. Otherwise you’re just an old man, like me, that’s angry at the TV because he can’t figure out the remote.
And, I do think things will settle down. The current situation is untenable. The amount of money these companies are burning up is unsustainable and, if all the datacenters they’re planning actually get built, there will simply not be enough energy to power them. When LLMs have to charge realistic amounts of money for the models, the craze will die down.
In the mean time I think we should look at ways how to use the tools, while trying to negate or lessen the negative impact they have and which are undeniable. For that we need all of us working in unison and making good choices (elections matter!). Just saying NO is likely to give these people free reign to destroy the world.
Nice to see someone trying to keep a balanced approach
Nothing’s ever perfect, and it feels as if those who are willing to put in some effort into it by learning the tech and focus on ‘radical self-reliance’ will ALWAYS be able to preserve their individuality regardless of the tools they use.
The whole early Internet analogy was a welcome reminder of the world of possibilities (and scams) that await on such an open platform.
I’m pretty sure that if 1995-me was given access to local models like Qwen-Coder-7B and other reasonably useful LLMs, I would have most certainly found interesting ways to put them to work.
Yes, AI slop has definitely become a ever-growing annoyance, but there will be tools made which will enable us to filter this out. Think of a reverse-image search service like TinEye available as a plugin to inform users of the origins and likelihood of certain media being artificial, and have rules set to filter them out if desired.
It won’t take long for useful apps of the sort to start making their way into little helper add-ons which will help us curate our own online experience like an adblocker would, but for AI garbage.
Long time reader, first time commenter. Chronic lurker.
I have struggled with this sites pro-AI stance. I have struggled with the inevitability thesis. And like many of the above commenters I am seriously concerned with the actual political economy of AI, not the dream of what it could be.
However, as other’s have articulated those concerns I don’t want to retread old ground. I instead focus on your vision in and of itself.
I think this is your clearest articulation of the benefit you see in LLMs, and I find your ‘the tool should instantly fit my hand perfectly’ thesis terrifying. As much as we techies deny it, the open web is a social phenomena, and your vision of this is anathema to that. From the lurkers (like myself) to the core maintainers, we have no choice but to be socialised by the frictions of the ill fitting tool.
As a lurker, I share in your memories of hitting ‘view source’ and remixing the code I found. But in remixing we weren’t just engaging in an isolated learning how to code process, but contributing back to the community. Others could ‘view source’ on our work, and see what we had learned. Even in lurking we contributed to the grand conversation of the internet.
The open web is being squeezed from above and below. From corporations wanting a monopoly of control and higher rents, AND from below from people who are not willing to engage and be socialised into the slow political processes that negotiate the future directions. The open web faces a crunch of the lack of people willing to contribute back to ecosystems. We face so many crises born of atomisation, and your vision would create another.
Even in imagining a distributed political economy of AI, and clicking my fingers to disappear the tech giants, the same extractive practices are at the CORE of your vision. If the machine produces MY perfect tool every time, why bother contributing to a community that has a tool similar to my needs. Why bother helping with the technical debt of a project. That friction is important. That friction is at the core of the open web. Having that handle that doesn’t quite fit my hand, reminds me to seek out and help others that are also struggling with the same problem.
Centralised or distributed, AI is drying up the aquifer of the open web.
Great Grandpa here
73… been reading this site for many years. In the beginning on x.25 networks (pre html) we shared tools and code. The early “automated” web site tools (Forman Interactive Internet Creator 4) generated full sites and uploaded them for you. Selena Sol gave the world the first viable webstore with the Instant CGI book. Every magazine had free source code in it, even the old tru-tone “records” in the fold with full running applications. AI is just a new layer of tools (and AI isn’t new – we’ve been building it for 75 years now) and approaches to helping people get content published. We absolutely need to control the tools, as we always have. Human Navigator and AI Coder (agile) approach to ensuring viable output. Doom Screaming has yet to work – in either direction. Just the new way of doing the same old stuff. Figure out what works, and what doesn’t. And chill.
just a reminder that this is a major conflict of interest. mike here is a VC backer of bluesky, a super rich tech oligarch, and has absolutely zero idea what he’s on about
Re:
Lol. I am not a VC. I’m not rich. I’m certainly not a tech oligarch. I’ve invested no money in Bluesky (I don’t even have money to invest in Bluesky). I am on the board, which notably is not an AI company. And, of course, Bluesky users famously tend to dislike AI. So if Bluesky biases me… shouldn’t it bias me the other way?
And, look, you’re free to have an opinion that I have “zero idea what I’m on about” but, given that you made blatantly false statements about me in your opening, I kinda feel differently.
Also, I spent thousands of words explaining myself, which I think is pretty strongly backed up. You wrote a barely legible half sentence full of factual errors. I think I’m closer to knowing what I’m talking about than you do.
after some back and forth with an ai agent
you figured out how to connect two things in a database?
man, I have nothing but respect for the work you’ve done over the years. but reading this (and your recent ‘they just didn’t have TIME to announce a $100m investment because they were all SO BUSY” bluesky post) I’m seriously wondering if ai had just melted your mind.
This ignores that this stuff takes time / motivation and vision.
I didn’t learn to code really complicated sites in HTML because I didn’t have time to learn what each part of a very complicated code was. I used to also right click, view source and copy data. Then one day I came across this website that when you get to it, it only gives a flashlight circle that you could read with. Then when you clicked, the entire page became visible. I loved it. But it was the last time that I really tried to code. After multiple days trying to replicate or understand the code, I finally gave up. It was too large, too complicated to continue to sink time into, I was just a teen with a few hours access to the internet.
I don’t want to spend my limited free time creating more code, after I just got done with 8 hrs on a computer paying attention to pedantic bullshit. This looks distinctive like your job. Congratulations on finding a way to make writing pay, but that isn’t the reality for most of us.
Also, all y’all are using your real names, like you have no worries about some dude showing up to your house. That’s privilege, and that’s not the reality that minorities and especially women, deal with.
I just want a safe environment to vibe with like minded folk. And to do so on the modern web, that means giving up data, and having our anonymity stripped.
Making my own personal tools sounds exciting. While I have some basic coding knowledge, I don’t have the passion to spend hours trying to get small things to work. I think I’d feel most comfortable with AI running on my own hardware, not dialling home to someone though, so I’ll have to look at the open ones you mentioned sometime.
I think using AI for drafts and prototypes and single use issues can open things up to a lot of people. There a point when that should transition to professional standards and real humans if a project grows though.
I’ve seen this in creative spaces, where people start a project with very basic skills and tools, and then upgrade and hire other people when it takes off. I think AI code and creative projects could follow the same trajectory without causing harm.