W3C Steps Up: Wants To Create A Decentralized, Distributed Web System
from the moving-forward dept
We’ve discussed in the past how the whole Wikileaks response from governments has only helped to expose areas of internet infrastructure that should be decentralized and distributed, but are not. Of course, much of that is now being cleared up. For example, there was plenty of talk — what with the US government seizing domains and all — about setting up a distributed web system that bypasses a centralized server (and potential censorship choke point), such that it can’t easily be filtered. It appears that this may already be happening: as was just announced, it’s being undertaken by the W3C. That ought to add plenty of legitimacy to the concept, which many anti-Wikileaks folks have insisted was merely a geek pipe dream.
Filed Under: decentralized, distributed, w3c, web
Comments on “W3C Steps Up: Wants To Create A Decentralized, Distributed Web System”
Alas, the hour I spent working on a similar project with other like minded individuals is now probably wasted… I think I’ll get over my disappointment.
Could it be possible to help on this project instead?
Don’t consider this a wasted effort; consider it that you were just ahead of the game, and perhaps they are in need of the help of you and your friends. You never know. What’s the worst they’re going to say, “no thanks”?
So we’ll see this in 2030, given W3C’s HTML5 standard approval process.
By the time the W3C develops this, we’ll probably be using some entirely different communication technology altogether, with superior anti-censorship capabilities (and the government will subject it to superior censorship technologies), and the Internet will be obsolete. It’s just a game of cat and mouse.
Re: Re: Re:
(Or at least the Internet as we know it now will be obsolete. Maybe we’ll have some quantum Internet instead. Today’s Internet may still exist for some niche purpose, though, like the typewriter of today still exists in some offices and is used for narrow purposes.)
It’s amazing: I’m still using a keyboard and a mouse, but keyboards and mice themselves are becoming obsolete. Everything is becoming touch-screen (though that’s still often inefficient, it will improve), voice recognition still has issues but will likely improve too, etc…
Twenty years from now a traditional keyboard and mouse may only have a niche purpose, kinda like the typewriter of today, and the future generation may even include people who have never seen a keyboard or a mouse. The future monitor may simply have a camera that interprets your hand motions and gestures as actual typing and mouse movement and feeds them into the input buffer (again, today that needs work, but in the future it will likely improve). You’ll be able to input data much more quickly and efficiently, since the computer will have a wider range of hand movements to extrapolate data from, compared to the much narrower range of hand movements that limits what constitutes data input on a keyboard.
Re: Re: Re: Re:
There’s one big, ugly hurdle that keyboard-less keyboards (and other virtual input devices) will have to tackle before they’ll ever replace the current set of standard input devices: haptic feedback.
It’s really difficult to flail away in the air with any precision unless you have some sort of touch-based feedback to know if you’re flailing in the correct area.
Smartphones use little bursts from the vibration motor to simulate this, but it’s still much faster to type on a hardware keyboard than it is to use a software keyboard.
You’ll get my IBM Model M when you pry it from my cold, dead hands!
Now, that’s not to say that *supplemental* input devices are a bad thing… there’s still tons of room for alternate input systems that augment rather than replace a keyboard and mouse.
Re: Re: Re:2 Re:
“It’s really difficult to flail away in the air with any precision unless you have some sort of touch-based feedback to know if you’re flailing in the correct area.”
If NBA players can shoot a basketball from the three-point line and make it go in while being guarded by another NBA player (now that’s difficult), with no haptic feedback telling them that they’re shooting it in exactly the right direction with the right power, then I think people can manage data input into a computer with only visual feedback. If referees can monitor all these players and generally make accurate, high-precision calls at high speed, I think we can manage. “But not everyone is an NBA player.” Sure, not everyone is athletic, but the fastest typist thirty years ago is considered the average typist of today (if that). With sufficient practice, future generations will get more coordinated in this regard as the need arises.
The limiting factor here isn’t our ability to execute highly coordinated sequences of movements with high resolution and precision (we can make mistakes, as with keyboards, but visual feedback lets us correct them); it’s the computer’s ability to resolve our input and interpret it as we intend. That will improve as computers get more sophisticated.
Re: Re: Re:3 Re:
Ah, but the NBA players are using visual feedback, not to mention muscle memory. They also use haptic feedback to know where their hands are positioned on the ball, to give it the correct spin, speed, angle of flight, etc., which they could not do without the tactile feel of the ball.
We humans are by our very nature tactile entities; it comes from millions of years of evolving hands to work with tools, which are tactile devices. Visual stimulus plays a huge part as well (as does muscle memory), but without tactile feedback of some form, the visual feedback becomes a huge component and the brain needs to be totally dedicated to the purpose of visual interpretation alone.
You state that “the fastest typist thirty years ago is considered the average typist of today (if that)”… this is correct, except that today typists have a very specific visual medium offering feedback, which the average typist relies on more than they should.
Touch typists of ages past were typing over 100 ACCURATE words per minute, with full grammatical and punctuation checking occurring as they typed, while not even looking at the keyboard or the words forming on the typed pages. In fact they were mostly reading off handwritten (and, depending on the originator, nearly illegible) notes, with the only glance at the typewriter coming when they had to place a new page on the roller or check that tabs were tabulating correctly.
You are correct about the ability to execute movement in a coordinated and precise way, but visual feedback is a reactive way of doing it, since by its very nature it is feedback after the event, whereas tactile (haptic) feedback comes before and during the event.
This is why aircraft, even though they are now nearly all fly-by-wire, still have rumblers that vibrate the joysticks, flight columns, and pedals that control the aircraft, when in fact there is no actual physical connection between the controls and the flight surfaces. The pilots need it because, as humans, we are hard-wired to respond to touch more than any other sense.
I do not see a time when HCI does not involve haptic interfaces, unless and until computers evolve to direct brain-computer interfaces. Even then, I could imagine the feedback from the interface would trick our brains into believing that we have actually touched something.
Re: Re: Re:4 Re:
“You are correct about the ability to execute movement in a coordinated and precise way, but visual feedback is a reactive way of doing it, since by its very nature it is feedback after the event, whereas tactile (haptic) feedback comes before and during the event.”
Then why is it that we have little problem accurately communicating via sign language (even without visual feedback of our own hands)?
All I’m describing is a form of sign language that we will use to communicate with computers. Communication is hard-wired into our brains. Did you know that if you raised a group of deaf children together and never taught them sign language, they would independently create their own sophisticated sign language, one capable of communicating thoughts with about the same level of sophistication as our modern languages (and the second generation will communicate thoughts as fluently as our modern languages)? It’s built into us. This happens accurately with little to no visual feedback: they’re not looking at their own hands when communicating, they’re looking at each other’s hands. They will even correct their own mistakes, as we do with our speech. “But the movement of their hands offers haptic feedback.” Then a keyboard and a mouse aren’t needed for that.
Re: Re: Re:5 Re:
(In fact, you really ought to look up the history of how sign language developed).
Re: Re: Re:6 Re:
(But the point is that kids don’t have to be taught to communicate. You can raise kids together and teach them absolutely nothing, and even if they were deaf, they would literally create their own sophisticated sign language. And I’m not talking about a language similar to how dogs communicate; I’m talking about a language that can express about the same sophisticated thoughts that I’m expressing to you now.)
Re: Re: Re:6 Re:
To expand on the history of sign language: what you’ll find is that schools initially created hand signs for each letter of the alphabet. Teachers tried to teach classes of deaf students these hand signals and to have them communicate their sentences through these hand-signal letters. It was terribly inefficient, students hated it, and they didn’t really learn very much. But students at various schools started developing their own sign languages. When they did, the schools prohibited it, and teachers disciplined any students who attempted to use the improved sign languages. That didn’t stop deaf students from developing their own sign languages behind the teachers’ backs. Before you knew it, the students were making fun of teachers behind their backs and communicating all sorts of things the teachers couldn’t even understand. When researchers noticed, the system changed to help deaf people develop their own (more efficient) sign language.
The reason signaling every letter of the alphabet is an inefficient way of communicating is that using only 26 possible movements to convey our sophisticated thoughts is too limiting. The range of possible movements that 26 letters restricts you to is too narrow. The wider range of possible movements that modern sign languages encompass is far more efficient. Likewise, a keyboard and a mouse capture and interpret only a very narrow range of movements compared to the range of movements your hands can execute.
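The efficiency argument above can be made rough and numeric: each distinct gesture in a vocabulary of size V carries log2(V) bits of information, so a richer gesture vocabulary needs fewer gestures to convey the same message. A toy sketch (the function name and the message sizes are invented purely for illustration):

```python
import math

def gestures_needed(message_bits, vocab_size):
    """How many gestures from a vocabulary of `vocab_size` distinct
    movements are needed to convey `message_bits` bits of information.
    Each gesture carries log2(vocab_size) bits."""
    return math.ceil(message_bits / math.log2(vocab_size))

# A 100-bit message spelled letter by letter (26 movements) takes far
# more gestures than the same message using a 1024-gesture vocabulary.
letters = gestures_needed(100, 26)    # 22 gestures
rich = gestures_needed(100, 1024)     # 10 gestures
```

This is, of course, a crude information-theoretic lower bound; real sign languages (and real input devices) trade raw capacity against how quickly and reliably each movement can be produced and recognized.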
Re: Re: Re:4 Re:
“This is why aircraft, even though they are now nearly all fly-by-wire, still have rumblers that vibrate the joysticks, flight columns, and pedals that control the aircraft, when in fact there is no actual physical connection between the controls and the flight surfaces. The pilots need it because, as humans, we are hard-wired to respond to touch more than any other sense.”
I think there is a difference between responding to input, where your very next response depends on your current input (and often your current response depends on the immediately preceding input), and communication, where you first premeditate what you want to communicate and how to communicate it before executing the sequence of movements that conveys those premeditated thoughts. By the time you communicate something, you have already thought of what you want to say, whereas an on-the-fly response to a current input isn’t premeditated, because the response depends on that input.
Re: Re: Re:5 Re:
And the question that needs to be asked with your examples is: is the vibration communicating something that isn’t being communicated visually (well)? There is only so much information that can be communicated visually at once before it becomes advantageous to add another communication channel. But these vibrations don’t exist on my keyboard, yet I’m typing to you just fine. Haptic feedback isn’t always a necessity; people seem to be able to play first-person shooter games just fine with no haptic feedback telling them anything about what’s happening in the game. When a joystick vibrates, it’s usually applying some sort of resisting force, or it’s vibrating to tell you something that isn’t being communicated visually (well) and that can’t be without distracting your visual focus from the information you’re already looking at.
Re: Re: Re: Re:
Sorry to tell you, but keyboard and mouse aren’t going anywhere for a very long time, unless you’re doing absolutely bare minimum actions (no typing, low manipulation, etc).
A keyboard can pump out 80+ words per minute for a decently proficient typist. Imagine trying to talk that fast for your voice recognition software… no chance at all. Motion capture? No tactile feedback, for one, which slows your actions at higher rates. Keyboards may be upgraded and improved, but they do what basically no other system can manage: extremely fast APM with minimal physical movement.
Same with a mouse. Minimal body movement for an extremely wide area of coverage. Touchscreens are absolutely terrible for prolonged usage, because two inches of screen distance requires two inches of physical movement. A mouse can do the exact same thing with a muscle twitch.
Essentially, the fewer physical actions required, the better, for any task that requires large amounts of input.
Re: Re: Re:2 Re:
“A mouse can do the exact same thing with a muscle twitch.”
That’s just because the mouse is set to move the pointer 10 (or whatever) pixels per x input signals, and that ratio can typically be adjusted in your operating system settings. You could make those same exact movements with your bare hands and have a computer camera pick them up and interpret them.
“But cameras don’t have the necessary input resolution. But the software of today can’t accurately and quickly interpret your hand movements with just a camera like a mouse can”.
Future advancements will fix all that.
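The sensitivity mapping described a couple of comments up (N pixels of pointer movement per unit of raw input, adjustable in the OS) is the same regardless of whether the raw deltas come from a mouse or from camera-tracked hand motion. A minimal sketch, with all names and the default sensitivity invented for illustration:

```python
def scale_pointer(raw_dx, raw_dy, sensitivity=10):
    """Convert raw input deltas (mouse counts, or tracked hand motion)
    into on-screen pixel deltas using a configurable sensitivity factor."""
    return raw_dx * sensitivity, raw_dy * sensitivity

def move_pointer(pos, raw_dx, raw_dy, sensitivity=10, screen=(1920, 1080)):
    """Apply a scaled delta to the pointer position, clamped to the screen.
    The input source is irrelevant: only the (dx, dy) stream matters."""
    dx, dy = scale_pointer(raw_dx, raw_dy, sensitivity)
    x = min(max(pos[0] + dx, 0), screen[0] - 1)
    y = min(max(pos[1] + dy, 0), screen[1] - 1)
    return (x, y)
```

For example, `move_pointer((100, 100), 3, -2)` yields `(130, 80)`: the pointer moved 30 pixels right and 20 pixels up from a 3-count, -2-count input. A camera-based tracker would just feed its own deltas into the same function.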
Re: Re: Re:3 Re:
(and trust me, it’s already being worked on by companies like Microsoft, among others. It may seem implausible now, but look at how much technology has advanced in the last thirty years alone).
Re: Re: Re:3 Re:
Now, while it is true that your hand can make about the same mouse movements without a mouse, or at least movements of equivalent resolution to provide equivalent input resolution for each slight partial hand motion, the computer of the future won’t simply be interpreting simulated mouse and keyboard movements on a camera (though it could). The range of movements that a mouse and keyboard can extract data from is narrow; the range of movements a camera can extract data from is far wider, and hence can provide much more efficient input.
Re: Re: Re:2 Re:
“Imagine trying to talk that fast for your voice recognition software… no chance at all.”
Past generations almost always seem to underestimate future advancements. If anything, my predictions are probably an underestimate.
Not a big deal
I don’t see this project as revolutionary. First off, they are just talking about putting APIs in the browser to allow direct browser-to-browser communication. Applications, separate from a browser, already exist to transmit such peer-to-peer media streams. Also, I don’t think the addition of a P2P architecture for browser communication is meant to replace the client-server model of HTTP or HTTPS. It seems more appropriate to consider it a complement to the client-server model, supporting collaboration through a browser. A great deal of web content is of transitory interest and thus inappropriate for distribution via a P2P architecture. As an aside, I would hate to see the effect on home routers of the consequent oversaturation of their NAT tables. That could be fixed, however, with more capable embedded routers and a transition to IPv6.
As to the issue of bypassing censorship, a P2P architecture for a browser could be used by a government to pinpoint all the users. Imagine if China were paranoid about copyright infringement: they could easily identify and jail all those infringers who, in the US, are hidden behind a leased IP address and judicial constraints on identifying the attached computer/user.
The architecture of the internet, and thus the world wide web, is already decentralized and distributed. The aspects for which control is centralized (a single DNS system, and a single system for domain registration and IP address assignment) need centralized control to avoid fragmentation of the internet. Government attempts to censor, via domain seizures for example, ultimately encourage fragmentation. Attempts to counter censorship should not also encourage fragmentation.
I have not really thought this through, but if you are thinking that a P2P architecture for browser communication will completely replace the client-server model, I believe that would also encourage fragmentation. It certainly could make information harder to find. Imagine looking through Google search results for news about a particular topic without being able to depend on results from specific domains.
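The complement-to-client-server model the comment describes can be sketched in a few lines: a central server is used only for peer discovery ("signaling"), while the payload travels directly between peers. This is roughly the shape browser-to-browser APIs later took, but the classes, method names, and in-memory "network" below are invented purely for illustration:

```python
class SignalingServer:
    """Central directory: maps peer IDs to peers. Used for discovery
    only; it never sees or carries the actual content."""
    def __init__(self):
        self.peers = {}

    def register(self, peer):
        self.peers[peer.peer_id] = peer

    def lookup(self, peer_id):
        return self.peers.get(peer_id)

class Peer:
    """A browser-like endpoint that discovers others via the server
    but exchanges messages with them directly."""
    def __init__(self, peer_id, server):
        self.peer_id = peer_id
        self.server = server
        self.inbox = []
        server.register(self)

    def send_direct(self, peer_id, message):
        # Discovery goes through the server; the payload does not.
        target = self.server.lookup(peer_id)
        if target is None:
            raise KeyError(f"unknown peer: {peer_id}")
        target.inbox.append((self.peer_id, message))
```

In this hybrid, taking the signaling server offline stops new peers from finding each other but does not interrupt transfers already established, which is exactly why it complements rather than replaces the client-server web.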
This is only one effort.
There are others. Some will succeed, some will fail, some will borrow ideas from each, something will get done.
Because those of us who built the net are not about to let the fools in corporations or governments destroy it. It is more important than any corporation or any government — in fact, it’s more important than ALL corporations and ALL governments, and we will be enforcing that.
They will stop this with “IT WILL BE USED FOR CHILD PORN!” cries, just like Tor and Freenet have been partially killed because some people use them for that.
And I turned around and said, “So you’re FOR Tyranny and Dictatorships? Okay then, I’ll dig for dirt on you, I suspect you of being a paedophile.”
Because, in my experience, those who cry the loudest against something often have the most to hide about the same thing. A few recent examples:
1) NYT coming out against Wikileaks;
2) Glenn Beck’s railing against the “Librul hivemind crayzee”;
3) The Catholic Cover-up of child abuse;
4) Banking ‘regulation’ in the US.
The ISP Problem
The problem with systems like this is that they run afoul of the “no servers” and “no P2P” rules that many ISPs have. You’ll have to solve the ISP problem before anything like this can be made to work on a wide scale, and in the US that means either getting rid of the government granted telco and cableco monopolies (probably not going to happen) or regulating them to enforce net neutrality (probably won’t work).
Re: The ISP Problem
No ISP I know of, save mobile ones, has a no-P2P rule.
so that they can control it
with W3C standards? no thanks.
I envision a not-too-distant future where all web browsers are essentially bit-torrent clients. What could be more decentralized?
Content hosts put up node 0, and every viewer becomes a leech. The more viewers something has (a viral video, say), the more the bandwidth is distributed. That’s where the mainstream adoption will be: high-bandwidth hosts like YouTube and Netflix. Building it into the browser is what will make it mainstream, because the technology itself obviously already exists.
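The "node 0 plus leeches" idea above can be shown with a toy swarm simulation: one seed starts with every piece of the content, and in each round every leech fetches one missing piece from any peer that already holds it, so upload load spreads across viewers instead of resting entirely on the origin host. This is purely illustrative and not a real BitTorrent implementation; all names are invented:

```python
def simulate_swarm(num_pieces, num_leeches, rounds):
    """Toy piece-exchange simulation. The seed (node 0) holds all
    pieces; each round, every leech grabs one missing piece from
    any peer (seed or fellow leech) that already has it."""
    seed = set(range(num_pieces))
    leeches = [set() for _ in range(num_leeches)]
    for _ in range(rounds):
        for leech in leeches:
            missing = [p for p in range(num_pieces) if p not in leech]
            if not missing:
                continue
            wanted = missing[0]
            # any peer holding the piece can serve it, not just the seed
            sources = [seed] + [l for l in leeches if l is not leech]
            if any(wanted in s for s in sources):
                leech.add(wanted)
    return leeches
```

After `num_pieces` rounds every leech is complete, and since leeches serve each other, the seed’s share of the total uploads shrinks as the swarm grows, which is the whole appeal for a viral video.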
Why should this method be used when there already exists multicast infrastructure that can do the same thing?