Whenever someone drives down the wrong side of a highway and kills people, we don't rebuild the road in a zigzag pattern, add speed bumps, and pour in mud. If there are a few bad actors out there, the public shouldn't pay for it by being forced to use gimped infrastructure. Private toll roads get a pass too, as long as their roads are designed and maintained to the same standards.
Section 512 should be dropped in favor of audits of private hosting companies and specific guidelines to make sure they're not conspiring with the infringers, terrorists, pedophiles, etc.
And then grant them full immunity from contributory infringement cases. No notice-and-takedown provisions; let the courts handle that.
Publishers will look elsewhere if it's not implemented... as TBL said, "back to Flash," even though they pretty much never left Flash or Silverlight. But it would be amusing to see them get what they want in HTML5, only to find out the shiny new DRM they backed actually makes it easier for people to save full-quality videos to their own computers (the shock! the horror!)
Even DRM that hasn't been cracked programmatically can be bypassed with screen capture software, but all "useful" DRM relies on obfuscation of the decryption method and closed-source code. The Encrypted Media Extensions spec is openly published, and if it ends up in an open source web browser, someone could easily add an on/off switch or a download button, perhaps even officially (Mozilla, are you listening???)
So a crutch for the content industry's shiny new broken-by-design DRM might actually be a powerful tool against surveillance.
Adobe has announced diminished support on certain platforms, but it's not like they've gutted the team and boarded up the doors yet. They still actively maintain ARM and x86 ports of Adobe AIR and Flash Player.
Flash Player for Android Ice Cream Sandwich unofficially works on Jelly Bean.
Microsoft, amusingly, enabled Flash Player by default in yesterday's update for Surface RT devices, probably to keep Surface Pro devices from completely cannibalizing sales. Yet, like most other Windows apps, there isn't even a Silverlight browser plugin for RT...
Blame should fall squarely on the registrar for handling the situation the way they did. Taking down t.co didn't even take down the phishing site; t.co was only linking to it. How many times have we heard a story like this at different levels of the internet food chain (site->datacenter->registrar->government)? This will keep happening forever, but it could become less annoying if there were an automated scheme in place to send browsers to an alternate location or two.
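The fallback scheme could be as simple as the client walking a list of mirrors until one answers. A minimal sketch, assuming hypothetical mirror hostnames and an injectable probe function (in practice the probe would be an HTTP HEAD request or DNS lookup, and the mirror list would come from somewhere trusted, like a signed record published by the site owner):

```python
def first_reachable(hosts, probe):
    """Return the first host the probe reports as up, or None if all fail."""
    for host in hosts:
        if probe(host):
            return host
    return None

# Hypothetical mirror list for illustration only.
mirrors = ["example.com", "mirror1.example.net", "mirror2.example.org"]
```

A browser (or a plugin) could run this transparently on connection failure, so a takedown at any one level of the chain just degrades to the next mirror.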
If they hacked the FBI, they were probably smart enough to send the data to a server somewhere that they anonymously paid for, rather than trying to push 3TB over 7 proxies. It would still have taken a while, but not more than a few days over the kind of fiber uplink the FBI should be using.
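Back-of-the-envelope arithmetic backs up the "few days" figure — 3 TB is roughly 2.4x10^13 bits, so the sustained upload rate is the whole story:

```python
# Rough transfer-time estimates for exfiltrating 3 TB at a few sustained rates.
DATA_BITS = 3e12 * 8  # 3 TB expressed in bits

for label, bits_per_sec in [("100 Mbps", 100e6), ("1 Gbps", 1e9)]:
    hours = DATA_BITS / bits_per_sec / 3600
    print(f"{label}: {hours:.1f} hours")
# 100 Mbps: 66.7 hours (~2.8 days)
# 1 Gbps: 6.7 hours
```

So even a modest 100 Mbps of sustained throughput finishes the job in under three days.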
Large-upload monitoring can be thwarted by splitting the data into smaller chunks, and any small leak could be damaging on its own. If they are trying to stop the problem at that point, they've already lost. I don't see any reason a dossier on Apple devices and their owners needed to be that accessible in the first place.
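The splitting step is trivial — a minimal sketch, with the chunk size picked arbitrarily for illustration:

```python
def split_into_chunks(data: bytes, chunk_size: int) -> list:
    """Split data into fixed-size chunks; the last chunk may be shorter."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# Each chunk can be uploaded separately, on its own schedule or route,
# and the receiver reassembles with b"".join(chunks).
```

Since each individual upload stays small, nothing trips a "large transfer" threshold, which is exactly why threshold-based monitoring is a losing game.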
Perhaps they intercepted a plot to run man-in-the-middle attacks on unsuspecting attendees. They can enforce the ban by following the radio signals to their source. Turning off SSID broadcasting evades simple scanning, but it's no guarantee against more sophisticated detection methods they could use.
I know it's been blocked from viewing online, but the copies probably weren't deleted from their servers. Here's an example of a judge ordering a company to remove a robots.txt file from their website so historical pages could be restored and the Wayback Machine could be used for discovery purposes:
As far as I know, they don't delete snapshots just because someone puts up a robots.txt file. When they launched the new Wayback Machine about a year ago, I was able to access snapshots of sites that had been blocked for years. My theory is that they didn't import the existing exclusion database from the classic Wayback Machine, but instead recrawled each site's robots.txt. That sometimes left a 5-10 minute window to browse a site that was supposed to be blocked. I think if someone wants something truly removed from their servers they need a court order, and as a library they have some protections against that happening.
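The exclusion mechanism itself is just the standard robots.txt protocol, keyed on the Internet Archive's crawler user agent (ia_archiver). A sketch using Python's stdlib parser with a made-up robots.txt, to show how a single directive hides a whole site from the Wayback Machine without affecting anyone else:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks only the Internet Archive's crawler.
robots_txt = """\
User-agent: ia_archiver
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("ia_archiver", "http://example.com/page"))  # False
print(rp.can_fetch("Googlebot", "http://example.com/page"))    # True
```

Which is why the retroactive blocking is so fragile: delete those two lines (or let the domain lapse to an owner who does), and the old snapshots reappear.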