Solution's simple: start with the OCPO and tell them that if the corrections outlined aren't at least well under way in 90 days, they'll be found grossly negligent in the performance of their duties and won't be permitted to resign: they'll be fired for cause and forfeit all accrued benefits (i.e. no pension, no severance; the most they'll get is their unused days off paid out). If they try to resign before then, their resignation won't be accepted. They can still leave, but it'll be considered abandonment of their job and it'll still cost them all accrued benefits.
Talk is cheap. When people start getting fired (as opposed to being allowed to resign and moving on without any penalties) for failures like this you'll see the failures cleared up in a hurry.
NB: I've seen equivalent situations in private business as well, where managers and executives constantly cost the company large sums of money on projects customers didn't want, or let employees screw up and harm the business repeatedly without doing anything about it. I'd say that, as a portion of the total company, the problem's probably worse in private business than in government. So while this is definitely a problem for government agencies, it's not a problem of government agencies in particular.
*sigh* If it can be abused, it will be abused. A good test is how the perpetrator reacts to the inverse: if they have no plans to abuse something, they won't mind removing the parts that can be abused. If they object to that or backpedal, you can bet they've got plans to abuse it to within an inch of its life and they don't want to admit to them.
I wonder what'd happen if a properly-licensed ham operator set up an access point? Normally unlicensed users have to give way and avoid interfering with a licensed operator in bands the operator's licensed to use, and when last I looked the higher grades of ham license allowed operation in the 2.4GHz and 5GHz WiFi bands.
Anyone who's worked in the corporate world knows what happens when the "right to be forgotten" exists: management makes a dumb decision, is told why it's dumb, and insists on having its way anyway. Then when the inevitable train wreck happens, they disavow any knowledge of the original decision, delete all evidence of it, and try to blame the people they ordered to follow the dumb decision for following their orders. And management hates it when it turns out I've kept a copy of the e-mails documenting the entire chain safely tucked away in a file folder, immune to the effects of Exchange's recall-message and delete-message functionality and the normal time-based purging of older messages (I delete messages when they're no longer relevant, not just because an arbitrary amount of time has passed). I file "right to be forgotten" right there alongside corporate "records retention" policies: their primary purpose is to provide an excuse for destroying evidence of wrongdoing or willful stupidity before it can be used against the guilty parties.
I think the complex part of Baseball Quick's system isn't the "delete the parts that aren't the game" idea but the algorithms and processing needed to identify [i]which[/i] parts aren't the game. That's where any valid patents would lie.
That's going to be a problem for all external content regardless, as content moves or disappears as people maintain/update their own sites. If you care about keeping links up-to-date, you'll catch this in your regular checks for broken links and it's actually a lot easier to fix than most (you just need to update the protocol, rather than having to puzzle out the new path to the content or confirm that it's no longer there).
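To illustrate just how mechanical that particular fix is, here's a short Python sketch (the `upgrade_scheme` name is my own, not from any link-checking tool) that rewrites a link's protocol while leaving the rest of the URL untouched:

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_scheme(url):
    """Rewrite an http:// link to https://, leaving everything else alone."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

A link checker that already knows which URLs are broken could run every http link through something like this and re-test, with no path-guessing needed.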
If not RF, then at the very least if a patent's to be included in a standard, the patent-holder should agree to a fixed set of licensing terms and rates, and they should be specified in the standard. Then there'd be no ambiguity. Everybody would know exactly what the licenses would cost going in. The only question later would be whether a licensee had paid the rate and abided by the conditions stated in the standard, and if they had, then there's no infringement, period.
Yes, there's that vulnerability. But you're only vulnerable if the attacker intercepts you before you've ever visited the site. Most of the time the attacker won't be able to manage that, since it'd mean starting the MITM before they even knew they wanted to target that site, and it's certainly better than the current situation where a MITM can be initiated at any time without any indication to the user.
Site pinning solves most of the problem. A site can create its own private CA and issue its own certificates for use on its servers, specifying that the site's own CA certificate is the only one permitted to sign certificates for its domain. Then establish a norm that the first time you visit a site, you accept its CA certificate and pinning information. Once that's done, no other CA can be used for a MITM attack because of the pinning, and since the site itself is the only one with the private key for its CA certificate, there's nobody any government can order to disclose the key except the site itself (and presumably they don't want to do that, otherwise they'd have gone directly to the site with the order in the first place rather than using a MITM attack).
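The trust-on-first-use part of that scheme is simple enough to sketch. Here's a minimal Python illustration (the `PinStore` class and its method names are my own invention, not any browser's actual API): on the first visit, the certificate's SHA-256 fingerprint is remembered; on later visits, any certificate that doesn't match the pin is rejected.

```python
import hashlib

class PinStore:
    """Trust-on-first-use store of certificate fingerprints, keyed by hostname."""

    def __init__(self):
        self._pins = {}

    def check(self, host, cert_der):
        """Return True if the certificate is acceptable for this host."""
        fingerprint = hashlib.sha256(cert_der).hexdigest()
        if host not in self._pins:
            # First visit: remember this certificate's fingerprint as the pin.
            self._pins[host] = fingerprint
            return True
        # Later visits: the presented certificate must match the stored pin.
        return self._pins[host] == fingerprint
```

A real implementation would pin the site's CA certificate rather than the leaf, persist the store to disk, and handle legitimate key rollover, but the core "remember on first sight, reject mismatches after" logic is this small.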
For places that don't have an IT staff capable of dealing with OpenSSL, I can already sketch out the hardware and software for a small turnkey box that's a dedicated certificate authority: capable of generating its own root and intermediate keys and certificates, generating server and client keys and certificates and writing them out to an external flash drive for transfer, and managing a historical database of generated certificates (it wouldn't retain client or server keys, for obvious reasons). The most expensive part would probably be the display; the rest shouldn't cost more than your average home WiFi router. Sure, it's not going to be as secure as Verisign can manage, but conversely it's not going to be nearly as attractive a target as Verisign either, and it wouldn't need network connectivity, which right there severely limits the possible attacks.
Let's face it, in most non-corporate situations you don't care about the absolute physical identity of the entity running the Web site. You care mostly about whether you're talking to the same site every time and that nobody's hijacked you to a bogus server. The biggest risk there isn't validating the site's server certificate, it's that someone'll have used a malicious link to take you to a completely different site in its own domain without you realizing it.
What I don't get is why the appeals court didn't drop the hammer on the source of this by referring the prosecutor involved to the bar association for suborning perjury. Even if the bar association doesn't do anything, just having the referral on your record and having to inform every judge you go before of it would be enough of a career-ending move to make prosecutors think twice about this kind of stunt.
The police ought to start acknowledging reality (e.g. swatting) and take it into account. If you really have an active shooter, for instance, in this day and age of cell phones everywhere you're going to have multiple reports of it coming in. If you get only a single call about it, it's almost certain you do not in fact have an active shooter. If the cops don't have the sense to figure this out and handle the two cases differently, they need their toys taken away until they go through training again and pass a test on comprehension of basic principles.
The trick is to be simple at step 2: "I need to cancel my service." Then don't let them get to step 4. If they try, just keep repeating "I need to cancel my service." After a couple of repeats, get their name if you haven't already and proceed to "Look, I am cancelling my service, effective as of such-and-such date. If you aren't willing to assist me in this, consider this notification of cancellation. I'll follow up with a registered letter to your office confirming the cancellation. Any service after that date will be unauthorized, and I will not be responsible for any charges you incur through your failure to cancel my service." and hang up. Note the date and time and who you spoke to, and send that letter return receipt requested so you have a record of when you sent it and when they got it.
If possible, record your entire call with the CS rep, including the automated message when you call them. I figure you don't need to mention it; they already stated that the call may be recorded, so they can't claim to be unaware that it might be.
I was thinking a shared calendar and a link to it on the internal Web site along with all the other important links to important Web apps and info new hires need. That way you'd literally have to lose everyone at once to even stand a chance of losing track of the calendar.
A calendar program with reminders and a nice bright cheery red for critical renewal dates on it. I hear Microsoft, Apple and Google all make nice ones that aren't tied to a particular user or their machine. They can even be integrated with your e-mail program so everything's in one place. Reminders from the registrar are nice, but I make sure I keep track of when all my domains are due for renewal myself.
Before I'd trust a fork, I'd want an idea of why the original developers considered it insecure in the first place. I'd think that if they just didn't have the resources or interest to maintain it, they'd say they were ceasing to maintain it rather than make an ambiguous statement about security. And there's more than one kind of security risk. If it were something like being ordered to hand over copies of the private key used to sign binaries, rendering TrueCrypt vulnerable to government-created "official" versions, that can be dealt with in several ways. If it's a case of TrueCrypt being unable to protect the data against interception within Windows on its way to the application, there's nothing anyone can do about that and it has to be mitigated in other ways. And finally, if there really is some obscure and fatal flaw in the basic design or coding of TrueCrypt that makes it inherently vulnerable, we'd need to know what it is so we can verify that any new maintainers have in fact fixed it before we could trust the new fork.
I'd note one interesting indirect attack: use methods that'll cause the most secure projects to declare themselves at risk without letting them say why, letting paranoia push users into switching to software maintained by less scrupulous companies who'll stay quiet about their software being compromised until forced by outside discovery of the compromise.
There are, believe it or not, standard best practices already out there. Storing hashes of your passwords instead of the cleartext passwords, for instance. Certainly there's a lot of fuzziness about just where the line between being exploited and being negligent lies, but there's also a lot of area where there's no ambiguity at all. It's much like other areas: there may be some ambiguity about whether glancing down to read the incoming-call message on the screen of your cell phone is negligent or not, but that doesn't somehow translate to it maybe possibly not being negligent to have both hands off the wheel and your head down digging through a bag on the passenger seat, completely oblivious to what's going on as you barrel down the freeway at 95mph.
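For the password-hashing practice mentioned above, here's a minimal sketch using only Python's standard library (PBKDF2 with a random per-password salt; the function names are my own, and a production system would reach for a vetted library like bcrypt or argon2 instead):

```python
import hashlib
import hmac
import os

def hash_password(password, iterations=200_000):
    """Return (salt, iterations, digest) for storage; never store the cleartext."""
    salt = os.urandom(16)  # unique random salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

The point is that even if the stored `(salt, iterations, digest)` tuples leak, an attacker still has to brute-force each password individually, whereas a cleartext-password leak is game over immediately.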
I'm really annoyed at the patently false argument that if anything's ambiguous then everything's ambiguous. On maps the idea of a disputed border's simple enough, and the fact that some part of the border's disputed doesn't stop other areas from clearly belonging to one country or another.