AI Might Be Our Best Shot At Taking Back The Open Web

from the hear-me-out dept

I remember, pretty clearly, my excitement over the early World Wide Web. I had been on the internet for a year or two at that point, mostly using IRC, Usenet, and Gopher (along with email, naturally). Some friends I had met on Usenet were students at the University of Illinois at Urbana-Champaign, and told me to download NCSA Mosaic (this would have been early 1994). And suddenly the possibility of the internet as a visual medium became clear. I rushed down to the university bookstore and picked up a giant 400ish page book on building websites with HTML (I only finally got rid of that book a few years ago). I don’t think I ever read beyond the first chapter. But what I did do was learn how to right click on webpages and “view source.”

And from that, magic came.

I had played around with trying to build websites, and I remember another friend telling me about GeoCities (I can’t quite recall if this was before or after they had changed their name from their original “Beverly Hills Internet”) handing out web sites for free. You just had to create the HTML pages and upload them via FTP.

And so I started designing really crappy websites. I don’t remember what the early ones had, but like all early websites they probably used the blink tag and had under construction images and eventually a “web counter.”

But the thing I do remember was the first time I came across Derek Powazek’s Fray online magazine. It was the first time I had seen a website look beautiful. This was without CSS and without Javascript. I still remember quite clearly an “issue” of Fray that used frames to create some kind of “doors” you could slide open to reveal an article inside.

Right click. View source. Copy. Mess around. A week later I had my own (very different) version of the sliding doors on my GeoCities site, but using the same HTML bones as Derek’s brilliant work.

You could just build stuff. You could look at what others were doing and play around with it. Copy the source, make adjustments, try things, and have something new. There were, certainly, limitations of the technology, but it was incredibly easy for anyone to pick up. Yes, you had to “learn” HTML, but you could pick up enough basics in an afternoon to build a decent looking website.

But then two things happened, and it’s worth separating them because they’re different problems with different causes.

First, the technical barrier went up. CSS and Javascript opened up incredible possibilities to make websites beautiful and interactive, but they also meant it was a lot more difficult to just view source, copy, and mess around. The gap between “basic functional website” and “actually looks good” widened into a chasm that required real expertise to cross. Plenty of dedicated people learned these skills, but the casual tinkerer — the person who’d spend an afternoon copying Derek’s frames to make sliding doors — increasingly couldn’t keep up.

But the technical complexity alone didn’t kill amateur web building. The centralization did. While there was an interim period where people set up their own blogs, it quickly moved to walled “social media gardens” where some giant tech company decided what your page looked like. Why bother learning CSS when you could just dump text in a Facebook box and reach more people? The incentive to build your own thing evaporated, replaced by the convenience of posting to someone else’s platform under someone else’s (hopefully benign) rules.

These two problems reinforced each other. The harder it got to build your own thing, the more attractive the walled gardens became. The more people moved to walled gardens, the less reason there was to learn to build.

The rise of agentic AI tools is opening up an opportunity to bring us back to that original world of wonder where you could just build what you wanted, even without a CS degree. And here I need to be specific about what I mean by “agentic AI” — because too many people are overly focused on the chatbots that answer questions or generate text or images for you. I’m talking about AI systems that can actually do things: write code, execute it, debug it, iterate on it based on your feedback. Tools like Claude Code, Cursor, Codex, Antigravity, or similar coding agents that can take a description of what you want and actually build it.

For all those years that tech bros would shout “learn to code” at journalists, the reality now is that being able to write well and accurately describe things is a superpower that is even better than code. You can tell a coding agent what to do… and for the most part it will do it.

Let me give you the example that still kind of blows my mind. A few weeks ago, in the course of a Saturday — most of which I actually spent building a fence in my yard — I had a coding agent build an entire video conferencing platform: completely functional, with specific features I’d wanted for years but couldn’t find in existing tools. I’ve now used it for actual staff meetings. The fence took longer to build than the software.

All it took was describing what I wanted to an agent that could code it for me. And it addresses both problems I described earlier: it lowers the technical barrier back down to “can you describe what you want clearly?” while also enabling you to build your own thing rather than accepting whatever some platform offers you.

Over the last few months I’ve been finding I need to retrain my brain a bit about what we accept and learn to deal with vs. what we can fix ourselves. In the past I’ve talked about the learned helplessness many people feel about the tech that we use. We know that it’s vaguely working against us, and we all have to figure out what trade-offs we’re willing to accept to accomplish whatever goals we have.

But what if we could just fix things rather than accepting the tradeoffs?

I’ve talked in the past about how I’ve used an AI-assisted writing tool called Lex over the past few years, which doesn’t write for me, but is a very useful editorial assistant. Over the last few months, though, I decided to see if I could effectively rebuild that tool myself, fully controlled by me, without having to rely on a company that might change or enshittify the app. I actually built it directly into the other big AI experiment I’ve spoken about: my task management tool, which I’ve also moved off a third-party hosting service and onto a local machine. Indeed, I’m writing this article right now in that tool (I first created a task to write about it; checking a box marking it as a “writing project” automatically opened a blank page for me to write in, and when I’m done, I’ll click a button that runs a first-pass editorial review).

But the amazing thing to me is that I keep remembering I can fix anything I come across that doesn’t work the way I want it to. With any other software I have to adjust. With this software, I just say “oh hey, let’s change this.” I find that a few times a week I’ll make a small tweak here or there that just makes the software even better. In the past, I would just note a slight annoyance and figure out how to just deal with software not working the way I wanted. But now, my mind is open to the fact that I can just make it better. Myself.

An example: literally last night, I realized that the page in the task tool that lists all the writing projects I’m working on was getting cluttered by older completed projects that were listed as still being in “drafting” mode. With other tools (including the old writing tool I was using), I would just learn to mentally compartmentalize the fact that the list of articles was a mess and train myself to ignore the older articles and the digital clutter. But here, I could just lay out the issue to my coding agent, and after some back and forth, we came up with a system whereby once a task on the task management side was checked off as “completed” the corresponding writing project would similarly get marked as completed and then would be hidden away in a minimized list.
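To make the shape of that fix concrete, here’s a minimal sketch of the sync rule in Python. Every name here is hypothetical; my actual tool is a custom agent-built app, so treat this as an illustration of the logic, not its real code.

```python
from dataclasses import dataclass

# Hypothetical data model illustrating the task/writing-project sync
# described above (names invented for this sketch).
@dataclass
class WritingProject:
    title: str
    status: str  # "drafting" or "completed"
    hidden: bool = False

def sync_on_task_completed(project: WritingProject) -> WritingProject:
    """When the linked task is checked off, mark the writing project
    completed and tuck it into the minimized list."""
    project.status = "completed"
    project.hidden = True
    return project

def visible_projects(projects: list[WritingProject]) -> list[WritingProject]:
    # The main list shows only projects that haven't been hidden away.
    return [p for p in projects if not p.hidden]
```

The point is less the code than the fact that a tweak this small is now a short conversation with an agent instead of a feature request into a vendor’s void.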

I keep coming across little things like this that, in the past, I would have been mildly annoyed by, but needed to live with. And it’s taking some effort to remind myself “wait, I don’t have to live with this, I can fix it.” Rather than training my brain to accept a product that doesn’t do what I want, I can just tell it to work better. And it does.

And, the more I do that, the more I start to open up my mind to possibilities that were impossible before. “Huh, wouldn’t it be nice if this tool also had this other feature? Let’s try it!” I find that the more I do this, the bigger my vision gets of what I can do because the large segment of things that were fundamentally impossible before are now open to me, just by describing what I want.

It really does give me that same underlying feeling that I felt when I was first playing around with HTML and being able to “just make things.” Except, now, it’s way more powerful. Rather than copying Derek’s use of HTML frames to create “sliding doors” on a webpage, I can create basically anything I dream up.

Then, when combined with open social protocols, you can build in social features or identity to any service as well — without having to worry about getting other users. They’re already there. For example, my task management tool sends me a “morning briefing” every day that, among other things, scans through Bluesky to see if there’s anything that might need my attention.

Now, there are legitimate criticisms of “vibe coded” tools. Critics point out that AI-generated code can be buggy, insecure, hard to maintain, and that users who can’t read the code can’t verify what it’s actually doing. These are real concerns — for certain contexts.

The thing is, most of these criticisms apply to tools being built as businesses to serve customers at scale. If you’re shipping code to millions of users who are depending on it, you absolutely need security audits, proper testing, maintainable architecture. But that’s not what I’m talking about. I’m talking about building totally customized, personal tools for yourself—tools where you’re the only user, where the stakes are “my task list doesn’t sync properly” rather than “customer data got leaked.”

There’s also a more subtle concern worth addressing: is this actually democratizing, or does it just shift which skills you need? After all, you still need to accurately describe what you want, debug when things go wrong, and understand what’s even possible. That’s different from learning HTML, but it’s still a skill. I think the honest answer is that the kind of skill needed has shifted. “Learn to code” becomes “learn to think clearly and describe things precisely” — which happens to be a superpower that writers, editors, and domain experts already have. The barrier has moved to territory that many more people already inhabit.

It’s also an area where you can easily start small, learn, and grow. I started by building a few smaller apps with simpler features, but the more I do, the more I realize what’s possible.

Also, I’d note that this is actually an area where the LLM chatbots are kind of useful. Before I kick off an actual project with a coding agent, I’ve found that talking it through with an LLM first helps sharpen my thinking on what to tell the agent. I don’t outsource my mind to the chatbot, and will often reject some of its suggestions, but in having the discussion before setting the agent to work, it often clarifies tradeoffs and makes me consider how to best phrase things when I do move over to the agent.

What gets missed in most conversations about AI and the open web: these two pieces need each other. Open social protocols without AI tools stay stuck in the domain of developers and the highly technical — which is exactly why adoption has been slow. And AI tools without open protocols just replicate the old problem: you’re building cool stuff, but you’re still trapped inside someone else’s walls.

Put them together, though, and something clicks. Open protocols like ATProto give AI agents bounded, consent-driven contexts to work in — your agent can scan your Bluesky feed because the protocol allows that, not because some company decided to grant API access that it could revoke tomorrow. And AI agents give regular people the ability to actually build on those protocols without needing an engineering team. My morning briefing tool scans Bluesky not because I wrote a bunch of API calls, but because I described what I wanted and a coding agent made it happen.
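To give a flavor of why the protocol side matters: Bluesky’s public AppView speaks plain HTTPS, so the “scan Bluesky” step of a morning briefing can reduce to an ordinary GET request. This is a hedged sketch, not my tool’s actual code; the endpoint name follows the app.bsky.feed.searchPosts lexicon, and a real agent would add authentication and pagination.

```python
from urllib.parse import urlencode

# Bluesky's public AppView exposes XRPC endpoints over plain HTTPS,
# so an agent can query it without any platform-granted API key.
APPVIEW = "https://public.api.bsky.app/xrpc"

def search_posts_url(query: str, limit: int = 25) -> str:
    """Build the request URL a morning-briefing agent could poll.

    The agent then fetches this URL and filters the results for
    anything that might need the user's attention.
    """
    params = urlencode({"q": query, "limit": limit})
    return f"{APPVIEW}/app.bsky.feed.searchPosts?{params}"
```

No gatekeeper has to approve that request, and no gatekeeper can quietly revoke it, which is the whole point.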

Each piece makes the other more powerful and safer.

Blaine Cook — who was Twitter’s original architect back when it was still a protocol-minded company — recently wrote a piece at New_ Public that gets at this from the infrastructure side:

My long-standing hope has been that we’re able to move past the extractive, monopolizing, and competitive phase of social networks, and into a new era of creativity, collaboration, and diversity. I believe we’re poised to see a Cambrian explosion of new ways to interact online, and there’s evidence to suggest that it’s already happening: just today, I saw three new apps to share what you’re reading and watching with friends, each with their own unique take on the subject!

In this light, LLMs may be a killer app for decentralized networks — and decentralized networks may be the missing constraint that makes LLM integrations safer, more legible, and more aligned with user interests. It’s a symbiosis, and I believe we need both pieces. Rather than trying to integrate LLMs with everything, I think that deliberately bounded, consent-driven integrations will produce better outcomes.

Cook’s framing of LLMs as a “killer app for decentralized networks” is exactly right — and it runs the other way too. Decentralized networks might be the killer app for making AI tools something other than another vector for corporate lock-in, or just another clone of an existing centralized service.

Now, I can already hear the objection, and it’s a fair one: am I really suggesting we escape dependence on giant tech platforms by… becoming dependent on giant AI companies? Companies that have scraped the entire web, that burn massive amounts of energy and water, that are built on the labor of underpaid content moderators, and that seem to want to consolidate power in ways that look an awful lot like the last generation of tech giants?

Yeah, I get it. If the pitch is “use OpenAI to free yourself from Meta,” that’s just switching landlords.

But that’s not actually where this is heading. The trajectory matters more than the current snapshot.

First, if you’re using frontier models through the API or a pro subscription, you have significantly more control than most people realize. Your data generally isn’t feeding back into training. You’re using the model as a tool, not handing over your content to a platform. That’s a meaningfully different relationship than the one you have with social media companies, where you’re feeding them data, and their business model is based on monetizing that data.

But much more importantly, you don’t have to use the frontier models at all. Open source AI is maturing fast — models like Qwen, Kimi, and Mistral can run entirely on your own hardware, no cloud required. They’re behind the frontier models, but only by a bit. Six months to a year, roughly. And for a lot of the “build your own tools” use cases I’m describing, they’re already good enough.

Musician and YouTuber Rick Beato recently showed how easy it was for him to install local models on his own machine, and why he thinks the largest AI companies will eventually be undercut by home AI usage.

I’ve been doing something similar with Ollama hosting a Qwen model locally. It’s slower and less sophisticated. But it works. And I already use different models for different tasks, defaulting to local when I can. As those models improve — and they are improving quickly — the frontier labs become less necessary, not more. If you’re a professional, perhaps you’ll still need them. But if you’re just building something for yourself, it’s less and less necessary.
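For anyone curious what “hosting a Qwen model locally” looks like in practice: Ollama serves a small REST API on your own machine (port 11434 by default), so a completion is one POST request, with no cloud account involved. A minimal sketch, assuming you’ve already pulled a model; the model name is illustrative:

```python
import json
import urllib.request

# Ollama's local REST API endpoint for one-off completions.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble a local-inference request (send it only if Ollama is running)."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )

# Example usage (not executed here, since it requires a running Ollama):
# with urllib.request.urlopen(build_request("qwen2.5", "Summarize my tasks")) as r:
#     print(json.loads(r.read())["response"])
```

Swap the model name and nothing else changes, which is exactly the kind of interchangeability that keeps the frontier labs from becoming landlords.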

This is what the “AI is just another Big Tech power grab” critics are missing: the technology is moving toward decentralization, not away from it. That’s unusual. Social media started decentralized and got captured. AI is starting captured and getting more open over time. The economic pressure from open source models is real, and it’s pushing in the right direction. But it’s important we keep things moving that way and not slow down the development of open source LLMs.

On the training data question — which is a legitimate concern whether or not you think training on copyrighted works is fair use — efforts like Common Corpus are building large-scale training sets from public domain and openly licensed materials. Anil Dash has been writing about what “good AI” looks like in practice — AI that’s transparent about its training data, that respects consent, that minimizes externalities rather than ignoring them. There are ways to do this right.

None of this is fully solved yet. But the direction is clear, and the tools to do it responsibly are improving faster than most critics acknowledge.

When you use AI as a tool (rather than letting it use you as the tool), it can give you a kind of superpower to get past the learned helplessness of relying on whatever choices some billionaire or random product manager made for you. You can get past having to mentally compensate for your tools not really working the way you think they should work. Instead, you can just have the internet and your tools work the way you want them to. It’s the most excited I’ve been about the open web since those early days of realizing I could right click, copy and then figure out how to build sliding doors out of frames.

The promise of the open web was colonized by internet giants. But the power of LLMs and agentic coding means we can start to take it back. We can build customized, personal software for ourselves that does what we want. We can connect with communities via open social protocols that let us control the relationship, rather than handing it to a billionaire intermediary. This is what the Resonant Computing Manifesto was all about, and why I’ve argued ATProto is so key to that vision.

But the other part of realizing the manifesto is the LLM side. That made some people scoff early on, but hopefully this piece shows how these things work hand in hand. These agentic AI tools give the power back to you and me.

Thirty years ago, I right-clicked on Derek Powazek’s beautiful website, viewed the source, copied it, messed around with it, and built something new. I didn’t ask anyone’s permission. I didn’t agree to terms of service. I didn’t fit my ideas into someone else’s template. I just built the thing I wanted to build.

Then we gave that away. We traded it for convenience, for reach, for the path of least resistance — and we got walled gardens, manipulated feeds, and the quiet understanding that our tools would never quite work the way we wanted them to, because they weren’t really ours.

Today’s equivalent of right-clicking on Derek’s site is describing what you want to a coding agent, watching it build, telling it what’s wrong, and iterating until it works for you. Different mechanics, same magic. And this time, with open protocols and increasingly open models, we have a shot at keeping it.

Let’s not give it away again.



Comments on “AI Might Be Our Best Shot At Taking Back The Open Web”

93 Comments
Anonymous Coward says:

Tech-bro here. Well, programmer, to be specific.

Would it surprise you to learn that most of “learning to code” really is just “learn to think clearly and describe things precisely” … in an artificial language?

Probably not, as you learned HTML (the L stands for ‘language’).

Your AI tool for websites is to HTML/CSS/JavaScript as Python is to machine language. A way to automate the repetitive precise detailing needed, and in a language easier to understand.

Anonymous Coward says:

Re:

I’ve worked software QA for over a decade and I will say with my entire chest that programmers are absolutely not trained how to describe anything clearly or precisely, and most devs don’t even recognize that as a part of their job, because (as far as they are concerned) that’s what the BA and project managers are supposed to do.

Anonymous Coward says:

Thirty years ago, I right-clicked on Derek Powazek’s beautiful website, viewed the source, copied it, messed around with it, and built something new. I didn’t ask anyone’s permission.

Not to be a nudge, but Powazek had the copyright to the material, you copied it without his permission, so you broke the law … which sounds exactly like what content-creators are concerned about with AI.

Anonymous Coward says:

Re:

Powazek had the copyright to the material, you copied it without his permission, so you broke the law

Even if you ignore the entire concept of fair use, copyright law does not grant the privilege of preventing people from learning from copyrighted work, nor does it apply to purely utilitarian expression (which, in many cases, is what small HTML and CSS snippets would be).

Stephen T. Stone (profile) says:

Kinda stealing someone else’s comment here, but I do want to say how…darkly hilarious it is that even though they likely saw The Matrix, LLM developers still decided to use the term “agents”.

Also: Greetings, fellow GeoCities HTML learner! I gotta say, HTML5/semantic HTML coding (in conjunction with CSS) is probably a little easier to learn than table-based layout coding. 🤣

Anonymous Coward says:

Re: Re:

“Agent” as a compsci term predates The Matrix.

100% correct; some research shows it dates back at least to 1995’s Artificial Intelligence: A Modern Approach (Russell & Norvig).

I can’t trace back how far it goes in science fiction at the moment, but I suspect at least back to the 1970s.

Anonymous Coward says:

Re: Re: Re:

The “autonomous agent” Wikipedia page has a citation from 1991; but, like you, I suspect the term would have been used much earlier in science fiction. And I expect that the Wachowskis knew about it.

Then again, it’s quite a literal application of the term: “One who acts for, or in the place of, another (the principal), by that person’s authority”.

This comment has been deemed insightful by the community.
Bloof (profile) says:

Nothing helps take something back quite like allowing billion dollar companies to stripmine it as they see fit then build near impenetrable walls around the crater and then grant them a perpetual license to retain complete control of every road leading there as well as all the signage and erect tollbooths every 500 meters.

Pull the other one.

You may not like copyright holders but the bright shiny AI driven future is a worse version of the web3 imagined by VC twats like Marc Andreessen. At least Web3 wouldn’t continually hammer everything else online with scrapers and openly plan to have AI generate knockoffs of webpages it deems unworthy of a direct link, you know, the thing Google have filed patents for. If people just surrender to AI the way enthusiasts wish, we won’t be treated any better than we were when trickle down economics was tried for the hundredth time; they will just wall off yet another chunk of the commons entirely.

This comment has been deemed insightful by the community.
Bloof (profile) says:

Re:

Open source AI isn’t going to ‘save us’, it’s going to be a figleaf at best, likely eventually sponsored by Google or OpenAI to ward off antitrust efforts in countries that still have laws. It reminds me of all the good and ethical crypto projects people swore would redeem that bundle of scammers, best part of a decade on we’re still waiting for them to make something other than headlines and unnecessary e-waste.

Meanwhile in the real world, AI is causing untold damage to actual open source projects as slop is poured in through any opening they can find, and even if the AI generating things is open too, free range and locally sourced, the sloppy output will still be slop..? Oh, and the environmental issues haven’t even been touched in any meaningful way yet. I doubt the good AI will be building green energy and desalinisation plants to power it all, no more than Anthropic have.

Anonymous Coward says:

Re: Re:

Agreed. Not sure what world the author is living in, but there isn’t going to be an upside of openness via AI. The companies building this crap aren’t going to let that happen. It’s weird that so many tech boosters don’t see what is going on and are still falling for these pie in the sky stories.

Anonymous Coward says:

Re: Re: Re:2

I think it comes down to the fact that when we view someone as credible, reliable, and well informed, it’s hard to reconcile that when it turns out that they have a blind spot in a particular area, and so we’ll try to find explanations for what we can see as actions obviously inconsistent with our image of them.

Anonymous Coward says:

The AI companies have a huge incentive to build that walled garden. They spent hundreds of billions on the LLMs, and the investors want their money back.

The companies that survive the coming shakedown will be those that manage to build that walled garden, not necessarily the ones with superior technology. Lure the user with convenience, then start extracting value.

Unless it’s technically impossible to build a walled garden, the open AI era will be just as temporary as the open Internet. Because the system we live under is and remains capitalism.

Ninja (profile) says:

I like your optimism. And I hope you are right.

But I honestly don’t see how to avoid big companies from capturing everything to themselves in a big “Buy ‘n Large” (reference: Wall-E). That’s what’s been happening for a good while now and seems to be accelerating. Because those who make the rules are those who already own everything and have the money to buy everything.

Or maybe we are in yet another tipping point and we’ll bounce back to a virtuous cycle. Who knows.

Anonymous Coward says:

Re:

I think the point is that what has been shown so far is that these AI companies don’t have the walled garden they would like you to think that they do, especially in the coding LLM space. They all have everything from Stack Overflow scraped, and the open source models are clearly distilling Claude and other coding models to create their models. Basically, if the closed models improve, the open ones will too, and eventually we get to a place where running some of these models locally may only cost $2,000 versus a max subscription at $200 a month. When you are there it pays for itself in a year and will continue to work for years afterwards with a model that is good enough to do what you want. It’s very difficult for any of these companies to stop that happening. Once it does happen, the decentralization and ability to self-host will make things change. Eventually it may just be: open up local AI, tell it what I want to do, it creates it, and then you can say hey, I want to deploy this on this Raspberry Pi I just connected, please take all necessary steps, and then it just does it. This isn’t theoretical; I can already do this with Claude Code and I am actively saving for hardware to do this locally with Kimi and other models.

This comment has been deemed insightful by the community.
Derek Powazek (user link) says:

Fray Hates AI Slop

Hi, I’m the Derek that was mentioned in this article.

It’s nice that anyone remembers Fray. That was a labor of love I worked on for decades. Fray’s mission was to use the web as a canvas for a new kind of art. It was lovingly hand-crafted by me and a team of early web nerds. The sliding door story was actually created by one of our earliest contributors, Alexis Massie, and can still be seen here: https://fray.com/hope/meeting/

Since it was hand-crafted, it still functions decades later.

Fray was about melding new tech (the web) and the oldest storytelling traditions. Everything was made by hand and deeply personal, designed to bring humanity to the web.

Which is why I am horrified that you’re using it as a framing device to promote AI, which is the opposite. It is anti-human, billionaire-driven, fascist-enabling slop that is destroying the open web right now.

Genuinely, how dare you.

If you want “view source” for programming, you have it already, it’s called Open Source and it’s powering most of the web to this day. But what you’re describing is not viewing source, it’s automated creation, and it’s the exact opposite of what we did at Fray.

I wanted Fray to be remembered, but not like this.

This comment has been flagged by the community.

Anonymous Coward says:

If by “open” you mean one in which all websites are created using the same tools and software controlled by the same companies, then sure.

Missing in this is that LLMs are trained on almost all the same data, and will recommend the most common things, which results in a bunch of cookie-cutter sites built off the same things.

And if you don’t think things need security. You, are an, idiot.

” For example, my task management tool sends me a “morning briefing” every day that, among other things, scans through Bluesky to see if there’s anything that might need my attention.”

So, it sounds like it can connect out at the very least. What prevents someone from triggering code execution via messages it reads?

Here are some serious questions for you.

Can it be connected to externally? If so is it behind a vpn to connect to? Is anything securing it?

What code packages are being used? Do you even know if you have loaded malware or spyware onto your system?

Are you using public skills? Have you read them in an editor that doesn’t support markdown to ensure they aren’t full of secret hidden commands?

You can easily go and create stuff with PHP and HTML still, or HTML and basic JS. Hell, you can do and use most of the stuff you did 20 years ago and it will probably still open in a browser. Making a website with fancy tools and systems isn’t what is blocking the web from being more open.

This comment has been flagged by the community.

Anonymous Coward says:

Those “open source models” like Qwen aren’t truly open source the way open source software is. I can’t review the training data, and the model is produced by an unaccountable corporation. You get no control over how it is trained, you can’t change how it’s trained, you can’t train one yourself.

I, too, look forward to running powerful local models, but I have no delusions about them changing the incentives that dictate how the Internet gets built.

Derek Powazek (user link) says:

AI's Impact on the Open Web isn't Hypothetical

This article paints a rosy picture of what could happen with AI tools and tries to link that fantasy to the glory days of the open web, but let’s take a look at what AI tools have done to the actual open web today.

  1. You can’t believe anything online anymore, not text, not photos, not video. Anything you post will not be believed by your audience. Not anymore. Not by default.
  2. Search engines are entirely broken. And not just in their AI summaries, but the regurgitated AI slop in the results, too. Our open web is filling with garbage faster than anyone can sort it. We’re going to be left with a web full of slop.
  3. Can’t start new online communities anymore because the AI bots are too good at creating fake users spitting out fake content. Digg.com just had to relearn this. After their beta relaunch failed, they said: “The internet is now populated by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn’t appreciate the scale, sophistication, or speed at which they’d find us. We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can’t trust that the votes, the comments, and the engagement you’re seeing are real, you’ve lost the foundation a community platform is built on.”
  4. Established communities are dying or closing because they can’t keep up with the garbage content. Every creative subreddit clogged with inauthentic content, everyone spending endless time doing the “is this real” dance. People are giving up on their labor of love communities because of the onslaught.
  5. Did you like being able to read news sites without logging in? The open web thrived on it! Not anymore. Everything is now behind a paywall or demanding an email address. Many reasons why, but to avoid getting scraped by AI companies is certainly part of it.
  6. Communities moving off the open web to private Discords or Slacks to avoid the bots. Inevitable, perhaps, but not open, and a far cry from the open bulletin boards and forums that fostered real creative communities.
  7. Even Bluesky (rings bell) is groaning under the weight of automated inauthentic users posting robotic content.

These are real harms to the open web happening right now, without even mentioning the plagiarism that built the models or the environmental and social costs of these technologies, or the way Trump and his cronies are literally using AI slop to promote a war.

The idea that “AI Might Be Our Best Shot At Taking Back The Open Web” is genuinely psychotic when it’s destroying it RIGHT NOW. And you’re helping by promoting this garbage idea that it’s like the good old days because it gave you the feels.

I’ve read you for years, Mike. Relied on you to make sense of the digital world. But this is a shockingly abominable take.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re:

I’d just like to point out:

You can’t believe anything online anymore, not text, not photos, not video. Anything you post will not be believed by your audience. Not anymore. Not by default.

This is good. People never should have believed what they read online. AI is providing an epistemic corrective.

Search engines are entirely broken. And not just in their AI summaries, but the regurgitated AI slop in the results, too. Our open web is filling with garbage faster than anyone can sort it. We’re going to be left with a web full of slop.

This predates genAI by years. One of the reasons people adopted LLMs as search-engine replacements so quickly is that the search engines were already useless.

KelsonV (profile) says:

Re: Re:

The decline of search engines was a self-inflicted wound by sales/marketing departments that would rather show people more ads than filter out the SEO chaff. Really filtering out the chaff takes too much effort, while filtering just enough to keep people from giving up means they’ll scroll through more pages on your site and see more ads.

The same departments are pushing “AI” chat as a replacement. And making money off of the SEO people who use LLMs to generate more slop sites, which further degrades search indexes, which pushes more people to AI queries, which will eventually get poisoned by the slop sites too, but that just means they’ll spend more time feeding questions to the panopticon, which can show them more ads.

Anonymous Coward says:

Re: Re:

This is good. People never should have believed what they read online.

And we already went through this phase with dubious television “news” networks, telemarketing scams, door-to-door salespeople, pseudoscientific quackery, and lying politicians. Among other things. Basically all of this dubious behavior was seen on the Internet before the most recent wave of computer-generated slop. (Like that site that’s been showing up in search engine results for over a decade, purporting to have the PDF manual one is looking for, when it’s actually just a spam PDF with popular search keywords and a link back to the fraudulent site.)

Anonymous Coward says:

Long time reader of this site here, who accidentally took a break in recent times. It wasn’t intentional; the site just fell off my radar of regular sites I go to. But I’ve always respected the hell out of what you all do here and appreciate you.

I recently remembered to get you back into my rotation, and I’m going to be completely honest and say I’m probably taking you back out of rotation intentionally.

This recent string of pro-“AI” articles and sentiment here is deeply disappointing, considering all the things you all normally seem to write about and care about (privacy, environmental concerns, tech company abuse of the public good to line their pockets, etc). The social and environmental impact of these companies and their tools far outweighs any benefits.

There have been several articles here recently that seem to desperately grasp at the slivers of good while wholly ignoring all the bad that comes with supporting these companies and their fancy search engines being sold to rubes as “intelligence”.

I’ll keep an eye on it, but likely not sticking around, given what I’ve seen.

This comment has been deemed insightful by the community.
Arianity (profile) says:

This is genuinely very cool. I do want to offer a few thoughts:

tools where you’re the only user, where the stakes are “my task list doesn’t sync properly” rather than “customer data got leaked.”

Anything that interfaces with outside content should be considered a potential attack vector. The risk with an app like your Bluesky scanner is something like: you don’t sanitize inputs, someone manages to get RCE and grabs your browser data, session IDs, etc. If you’re local, yeah, whatever, go nuts (although be careful about allowing agents to, e.g., erase your hard drive, if you have stuff you care about on it). There’s also slopsquatting dependencies, even if the app itself is local at runtime.
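One cheap guard against the slopsquatting risk is to diff whatever dependency list an agent generates against packages you have actually vetted, before installing anything. A rough sketch, assuming a requirements.txt-style input (the allowlist is yours to maintain, and the parsing here is deliberately simplistic):

```python
import re

def audit_requirements(requirements: str, vetted: set[str]) -> list[str]:
    """Return dependency names from a requirements.txt-style string
    that are not on the vetted allowlist."""
    suspect = []
    for line in requirements.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        # Strip version specifiers, extras, and environment markers,
        # leaving just the bare package name.
        name = re.split(r"[=<>!~\[;@ ]", line)[0].strip().lower()
        if name and name not in vetted:
            suspect.append(name)
    return suspect
```

A hallucinated or typo-squatted package name then surfaces as a suspect instead of getting pip-installed on autopilot.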

Security is not something just for big companies. Whenever you interact with the broader internet, that is a security risk. Especially if this model gets more popular.

because too many people are overly focused on the chatbots that answer questions or generate text or images for you. I’m talking about AI systems that can actually do things:

For what it’s worth, the chatbots can also do things like generate code. In my dabbling to keep up with it, I’ve mostly been coding with chatbots, for now. It is more hands-on than an agentic version, and much more limited, but you can do it. It’s kind of a nice middle ground, actually. On the projects I’m working on I can’t commit AI code to prod, but I can still get a nice workflow where I prototype a small function, review it myself line by line, and then rewrite my own flavor.

The agentic stuff is much more powerful, but if anyone wants to dip their toes in, I can recommend it as fairly effective.

This comment has been deemed insightful by the community.
Narcissus (profile) says:

Grandpa here

As a guy closing in on 60 with frightening speed, I feel the comments of many people above. I’m also being forced to use AI by my boss. They don’t tell me how; I just need to somehow. I also see the barbaric influence greedy tech moguls have on the world in general, and how they make everything worse for everybody.

I can be like many of the commenters above and close myself off from anything AI. I can refuse to engage with it and hope I can keep my job until I retire. I can keep screaming at the clouds and hope it doesn’t rain.

I chose a slightly different path. I’m (slowly) learning to use it. Initially it just caused me extra work, because I didn’t know how to use it. I’ve now gotten to the point where it sometimes actually gives me useful results and saves me time. I’m not spending days sparring with it; I just incrementally learn more about it.

It made me realize something. People like me (I’m an expert in my field) will be more valuable in the future, not less. LLMs can generate anything you ask for, but in the end somebody with actual wisdom (that is, contextual knowledge) needs to say whether what they generate makes sense. I work with much younger people, much more fluent with these new tools, and they still call me to okay things.

So, kids that are ChatGPTing their way to a degree are actually taking the wrong lesson. As mentioned in the comments several times, it can be dangerous to generate code with a black box if you don’t understand the code it generates. Therefore people that can review said code will be worth their weight in gold. Obviously they will use their own AI tools; people are already pen testing code with AI models. The same will go for other fields. Law, for example. We’ve all heard that LLMs can go terribly wrong when writing legal texts. So a wise person needs to revise them.

This was a bit of a preamble to what I wanted to say: just blindly rejecting everything AI is not a useful attitude. The technology is here and we somehow need to deal with it. Even if you want to fight it, you’ll still need to understand what it is and how it works to effectively put it back in its place. Otherwise you’re just an old man, like me, who’s angry at the TV because he can’t figure out the remote.

And I do think things will settle down. The current situation is untenable. The amount of money these companies are burning is unsustainable, and if all the datacenters they’re planning actually get built, there will simply not be enough energy to power them. When AI companies have to charge realistic amounts of money for their models, the craze will die down.

In the meantime I think we should look at ways to use the tools while trying to negate or lessen their negative impacts, which are undeniable. For that we need all of us working in unison and making good choices (elections matter!). Just saying NO is likely to give these people free rein to destroy the world.

This comment has been deemed insightful by the community.
Anonymous Coward says:

Nice to see someone trying to keep a balanced approach

Nothing’s ever perfect, and it feels as if those who are willing to put some effort into learning the tech and focusing on ‘radical self-reliance’ will ALWAYS be able to preserve their individuality regardless of the tools they use.

The whole early Internet analogy was a welcome reminder of the world of possibilities (and scams) that await on such an open platform.

I’m pretty sure that if 1995-me was given access to local models like Qwen-Coder-7B and other reasonably useful LLMs, I would have most certainly found interesting ways to put them to work.

Yes, AI slop has definitely become an ever-growing annoyance, but tools will be made that enable us to filter it out. Think of a reverse-image search service like TinEye available as a plugin to inform users of the origins and likelihood of certain media being artificial, with rules set to filter them out if desired.

It won’t take long for useful apps of the sort to start making their way into little helper add-ons that help us curate our own online experience like an adblocker would, but for AI garbage.
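The adblocker analogy maps onto code pretty directly. A toy sketch of the idea, assuming a community-maintained blocklist of flagged slop domains in the spirit of adblock filter subscriptions (the domain names below are made up for illustration):

```python
from urllib.parse import urlparse

# Hypothetical shared filter list, like an adblock subscription but
# for domains the community has flagged as AI slop farms.
SLOP_DOMAINS = {"example-slopfarm.com", "ai-listicle.example"}

def should_hide(post_url: str, filter_list: set[str] = SLOP_DOMAINS) -> bool:
    """Hide a post whose link points at a flagged domain
    or any subdomain of one."""
    host = (urlparse(post_url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in filter_list)
```

A browser add-on built around a rule like this could collapse matching posts the same way an adblocker hides ad frames, with the filter list updated by subscription rather than by each user alone.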

This comment has been deemed insightful by the community.
Anonymous Coward says:

Long time reader, first time commenter. Chronic lurker.
I have struggled with this site’s pro-AI stance. I have struggled with the inevitability thesis. And like many of the commenters above, I am seriously concerned with the actual political economy of AI, not the dream of what it could be.

However, as others have articulated those concerns, I don’t want to retread old ground. Instead I’ll focus on your vision in and of itself.

I think this is your clearest articulation of the benefit you see in LLMs, and I find your ‘the tool should instantly fit my hand perfectly’ thesis terrifying. As much as we techies deny it, the open web is a social phenomenon, and your vision is anathema to that. From the lurkers (like myself) to the core maintainers, we have no choice but to be socialised by the frictions of the ill-fitting tool.

As a lurker, I share in your memories of hitting ‘view source’ and remixing the code I found. But in remixing we weren’t just engaged in an isolated learning-to-code process; we were contributing back to the community. Others could ‘view source’ on our work and see what we had learned. Even in lurking we contributed to the grand conversation of the internet.

The open web is being squeezed from above and below: from above by corporations wanting a monopoly of control and higher rents, AND from below by people who are not willing to engage and be socialised into the slow political processes that negotiate its future direction. The open web faces a crunch from the lack of people willing to contribute back to ecosystems. We face so many crises born of atomisation, and your vision would create another.

Even in imagining a distributed political economy of AI, and clicking my fingers to disappear the tech giants, the same extractive practices are at the CORE of your vision. If the machine produces MY perfect tool every time, why bother contributing to a community that has a tool close to my needs? Why bother helping with the technical debt of a project? That friction is important. That friction is at the core of the open web. The handle that doesn’t quite fit my hand reminds me to seek out and help others that are also struggling with the same problem.

Centralised or distributed, AI is drying up the aquifer of the open web.

This comment has been deemed insightful by the community.
TheResidentSkeptic (profile) says:

Great Grandpa here

73 here… been reading this site for many years, in the beginning on X.25 networks (pre-HTML), when we shared tools and code. The early “automated” web site tools (Forman Interactive Internet Creator 4) generated full sites and uploaded them for you. Selena Sol gave the world the first viable webstore with the Instant CGI book. Every magazine had free source code in it, even the old tru-tone “records” in the fold with full running applications. AI is just a new layer of tools (and AI isn’t new; we’ve been building it for 75 years now) and approaches to helping people get content published. We absolutely need to control the tools, as we always have. A human-navigator-and-AI-coder (agile) approach keeps the output viable. Doom screaming has yet to work, in either direction. This is just the new way of doing the same old stuff. Figure out what works and what doesn’t. And chill.

amoshias (profile) says:

after some back and forth with an ai agent

you figured out how to connect two things in a database?

man, I have nothing but respect for the work you’ve done over the years. But reading this (and your recent “they just didn’t have TIME to announce a $100m investment because they were all SO BUSY” Bluesky post), I’m seriously wondering if AI has just melted your mind.

Fartmcbutts says:

This ignores that this stuff takes time / motivation and vision.

I didn’t learn to code really complicated sites in HTML because I didn’t have time to learn what each part of a very complicated page was doing. I, too, used to right click, view source, and copy code. Then one day I came across a website that, when you got to it, showed only a flashlight circle you could read through. When you clicked, the entire page became visible. I loved it. But it was the last time I really tried to code. After multiple days trying to replicate or understand the code, I finally gave up. It was too large, too complicated to continue to sink time into; I was just a teen with a few hours of access to the internet.

I don’t want to spend my limited free time writing more code after I just spent 8 hours on a computer paying attention to pedantic bullshit. This looks distinctly like your job. Congratulations on finding a way to make writing pay, but that isn’t the reality for most of us.

Also, all y’all are using your real names, like you have no worries about some dude showing up to your house. That’s privilege, and that’s not the reality that minorities, and especially women, deal with.

  • You got lucky that you could find the time and motivation to get this stuff to become a paying gig.
  • You also weren’t born into a class where to exist on the internet was inherently dangerous to you.
  • I hope you are right, but the realities of a decentralized web mean putting the onus of maintenance onto the individual.

I just want a safe environment to vibe with like minded folk. And to do so on the modern web, that means giving up data, and having our anonymity stripped.

missrao (profile) says:

Making my own personal tools sounds exciting. While I have some basic coding knowledge, I don’t have the passion to spend hours trying to get small things to work. I think I’d feel most comfortable with AI running on my own hardware, not dialling home to someone, though, so I’ll have to look at the open ones you mentioned sometime.
I think using AI for drafts, prototypes, and single-use fixes can open things up to a lot of people. There’s a point when that should transition to professional standards and real humans if a project grows, though.
I’ve seen this in creative spaces, where people start a project with very basic skills and tools, and then upgrade and hire other people when it takes off. I think AI code and creative projects could follow the same trajectory without causing harm.
