I so wish the EU would tell us developers to Nerd Harder on stopping mass surveillance, alongside protecting privacy otherwise. But I'm guessing they have their own intelligence agencies who want technology that lets them through.
To be clear, I don't trust anything (at least when it comes to computers) that I can't verify for myself. Privacy is too important for anything less than paranoia. I can't verify what code Yahoo et al. are running on their computers, so I don't trust what they say about it. What I would trust is Yahoo letting native clients encrypt messages in a way (say, using DIME) that makes this scanning impossible.
All I really know about the Snowden leaks is that they are far too plausible.
That said, today we sometimes have to trust a company's assertions, but it's my goal in life to get away from that. Plus I've found prettier software this way, and the only inconvenience I face is telling people I'm not on Facebook.
To be clear, all the companies mentioned in the PRISM leaks (many of which are the same companies) denied it then too.
And as Christopher Soghoian of the ACLU said in response, either the companies are lying through their teeth OR the government has cracked into their server farms. That is, if you believe the PRISM leak, as the author of this article does.
O.K. then, go ahead and reengineer the Internet to better realize this vision.
Because we've already done this to the fullest degree possible with current protocols. In the meantime IANA will continue (not take over) handling the international top-level domains (which US corporations treat as their own), while giving every country its own top-level domain to manage (e.g. .us, .ca, .au, .nz, .jp).
Well, this someone already controls IP-address allocation.
Sure, it's right not to trust whoever controls the allocation of domains, be it a government, a non-profit, or (even more so) a for-profit, and maybe if the Internet had been built using modern computer science we wouldn't be having this discussion.
But that said, IANA, not the US government, has always been the one handing out domain names, and it has been doing a great job of it. It stays in the background and the Internet just seems to work.
It's a sad thing to hope for, but given how disliked the candidates are and how close the polling is, I don't think we'd lose much from it. And as the researchers suggested, this might be what it takes to push industry to fix the security holes throughout the Internet's wiring, applications, & "Things".
Besides, all I want out of this election is chaos, and that would bring it while showcasing an important issue.
To be clear, I'm not calling on anyone to be fired, but I will argue my view of the web. Because what we've called incompetence is really a conflicting vision.
As for Unpatent: I see you think you're saving effort by using Meteor, but in this case I posit that, given the result you got, you might as well have used HTML, CSS, one line of jQuery, and some simple server-side templates (Handlebars, Jinja, AngularJS, PHP - take your pick).
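To illustrate the kind of simple server-side templating I mean - a minimal sketch using Python's stdlib string.Template as a stand-in for the engines named above (the page title and items are made up for illustration):

```python
from string import Template

# A tiny server-side template: the whole "framework" is one substitution call.
page = Template("""<h1>$title</h1>
<ul>
$items
</ul>""")

# Render a list server-side; no client-side framework required.
items = "\n".join("<li>%s</li>" % name for name in ["Patent A", "Patent B"])
html = page.substitute(title="Unpatent", items=items)
print(html)
```

The point isn't this exact library, it's that for a page like that the rendering step can be a few lines rather than a whole reactive framework.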
It's not like the issues I mentioned don't have solutions, just that those solutions can slow down development or hold up adoption.
As for the issue of identifiers, I've seen a couple of solutions, and introducing a semi-centralized translation service is certainly one of them. But given the mindset behind these projects, I find QR codes are a more common one.
Still, lock-in (thank you Thad, I forgot to mention it) is a big issue I haven't seen addressed well, and as for the political angle, we just need to review the new protocols and code for security flaws.
I've been following the developments on rebuilding the Internet, so let's see if I can summarise why we aren't there yet.
Most commenters on this topic are pointing out the threat of political pressure on a redesigned Internet, but there are other issues at play.
The biggest problem (whether for IPv6, mesh networking, or a peer-to-peer Web built on a DHT) is that before end-users see value in running a protocol it must already be popular. As such it's actually not that hard to build a stronger alternative to the Internet; the issue is navigating that catch-22 in order to get it used.
Furthermore there's the issue that any purely peer-to-peer identifier (AKA a "pubkeyhash") is inherently unreadable and harder to communicate to friends than a phone number, but an open-minded UI designer should be able to help solve this problem.
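To make the readability problem concrete - a minimal sketch (the key bytes are made up, and SHA-256 stands in for whatever hash a given system actually uses) of the kind of hash-of-public-key identifier these systems hand out, next to a phone number:

```python
import hashlib

# A made-up 33-byte "compressed public key" of repeated bytes, for illustration.
pubkey = bytes.fromhex("02" + "ab" * 32)

# The peer-to-peer identifier is a hash of the key; nothing about it is memorable.
pubkeyhash = hashlib.sha256(pubkey).hexdigest()

print(pubkeyhash)        # 64 hex characters - try reading this aloud to a friend
print("+64 21 555 0123") # versus roughly a dozen digits for a phone number
```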
In short, we have been at this task of building a stronger, better Internet, but to some extent or other we can only do so incrementally. That's due not only to political pressure, but also to marketing.
Yet another point the co-host is missing (which I think I heard touched on) is that while distributed technologies like blockchains can keep data internally consistent, they do nothing to ensure that data is accurate.
This is an inherent restriction of any sort of database, and credit companies (badly) approximate a solution by providing someone to blame for collecting bad data. So if the companies were driven to black-market peer-to-peer technologies*, they would still exist to say (under an anonymous identity) "yes, I guarantee this data is correct".
* Not that I have anything against peer-to-peer; done right (not like Bitcoin) it's what we need to avoid this censorship.
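The consistency-versus-accuracy distinction above can be sketched in a few lines - a toy hash chain (an assumed structure for illustration, not any real blockchain) happily accepts and verifies false data:

```python
import hashlib
import json

def add_block(chain, data):
    """Append a block whose hash commits to the previous block and the data."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "data": data}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def verify(chain):
    """Internal consistency only: every hash and back-link checks out."""
    for i, block in enumerate(chain):
        body = {"prev": block["prev"], "data": block["data"]}
        if block["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if block["prev"] != (chain[i - 1]["hash"] if i else "0" * 64):
            return False
    return True

chain = []
add_block(chain, "Alice's credit score is 800")  # could just as well be false;
add_block(chain, "Bob's credit score is 300")    # the chain cannot tell
print(verify(chain))  # True - internally consistent, says nothing about accuracy
```

The verify step proves nobody tampered with the records after the fact; it can never prove the records were true when they went in.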
That's certainly the big question, but I'll tell you who won't be next.
Of course the US won't; they've been pushing these "deals" and are in the pockets of the corporations. And it won't be New Zealand, at least until the election next year, as John Key considers this "good economics".