Ben Wittes has (had?) a blind faith in the inherent "goodness" of the US Government, based on a vastly different set of starting assumptions.
Now he's being forced to revisit some of his first principles. This is a good thing, because he's respected in his communities in ways that groups like this one are not, which means in theory he has the ability to influence said communities.
Expect some fairly sharp changes in mentality from pundits in the next couple of years. Hopefully those changes won't come too late to make a difference, although I expect they will.
But fundamentally, if we want anything resembling a secure IoT, we're going to have to figure out a way to make it more expensive for companies to ship a vulnerable product than it is for them to fix it first, because the attack surface isn't going to get smaller.
Here's a more solid start, based on use of MITRE's CVE system.
Assume Samsung is selling IoT enabled toasters, because why not. Everything's better with a network stack. Anyway, MSRP on this toaster is $100usd and Samsung releases the product Jan 1, 2017, and ships 1000 toasters.
Now, if there are no open CVEs on any component of the IoT stack on this toaster in the 90 days before Samsung ships, they're effectively insulated from liability. Oh, and in that world, the sky is fuchsia.
But if there _is_ an open CVE that was announced >= 90 days before Samsung launches the product, _and_ it gets exploited, Samsung is on the hook for 5% of the MSRP for each unit sold of said product, for every 90 days of the CVE's age.
Example: Samsung begins selling their IoT enabled toaster (MSRP == $100usd) on Jan. 1, 2017. And they sold 1000 of them on day 1. Said toaster has a vulnerability that was announced on Aug. 15, 2016 (just outside the 90 day grace period). If one of these toasters gets exploited and causes trouble, Samsung is going to write a check for (5% of $100) == $5 for each of the 1000 toasters sold as of the date of the CVE being exploited, plus the same fine going forward for each non-patched unit they sell.
Now, pretend that vuln wasn't announced on Aug. 15, 2016, but a full 90 days earlier. Same ship date, same quantity. Except now instead of 5% per toaster, it's 10%. Add 5% for every 90-day interval of CVE age. Also, allow the total penalty per unit to exceed 100% of MSRP, with no upper bound. So, you release an IoT enabled toaster with a 12 year old ssh vuln, and it gets exploited? Assume 4 90-day periods per year to make the math easy: your penalty is now (48 * $5) = $240 per toaster, times 1000 toasters sold = $240k in fines on $100k of $100-MSRP toaster revenue.
And why use MSRP as the basis for the penalty? Well, because it's both easy to validate and publicly verifiable.
No grace period, no appeal; cut a check to a high school to fund a secure coding class, because CVEs are public and there's no way the organization "couldn't have known".
Oh, and multiple CVEs? 5% per CVE, and scale it out.
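The penalty math above can be sketched in a few lines. This is a minimal sketch using the numbers from the toaster example; the function name and the round-down-to-full-periods behavior are my assumptions, not part of the proposal as written:

```python
def penalty_per_unit(msrp, cve_age_days, num_cves=1, period_days=90):
    """5% of MSRP per full 90-day period of CVE age, per open CVE.
    CVEs younger than one full period cost nothing; there is no
    upper bound, so the penalty can exceed 100% of MSRP."""
    periods = cve_age_days // period_days   # count only full periods (my assumption)
    return msrp * 0.05 * periods * num_cves

# The toaster example: $100 MSRP, CVE announced Aug. 15, 2016,
# product shipped Jan. 1, 2017 -> CVE is 139 days old, one full period.
print(penalty_per_unit(100, 139))                  # 5.0 per unit
# The 12-year-old ssh vuln: 48 periods -> 240% of MSRP.
print(penalty_per_unit(100, 12 * 4 * 90))          # 240.0 per unit
print(penalty_per_unit(100, 12 * 4 * 90) * 1000)   # 240000.0 across 1000 units
```

Note how quickly the per-CVE scaling compounds: two 180-day-old CVEs on the same toaster already cost $20 per unit.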
If you can verifiably patch these toasters 100% then you restart the clock from the time the patch was pushed to the toaster. If you can't patch them, well, eventually you'll get to write a check big enough to make the board pay attention.
Bonus: Specifically disallow said penalties as a loss for tax purposes.
As to your other question: It's a Samsung toaster running Google code, Samsung pays. It's their label. If Samsung wants to go back and fight it out with Google based on contract terms, that's fine; Samsung can attempt to recoup their (already paid) losses from Google.
(yeah, I know. There's no chance this or anything like it will ever happen.)
Kill someone remotely from 25 feet and you can be a long way away before it's even realised that the insulin pump didn't simply malfunction, but was manipulated.
Assuming it can be determined the pump was manipulated. Which isn't a given.
Insulin pumps have two delivery modes:
Bolus, which is used to deliver a large dose of insulin - for example to correct for high blood sugars or to dose for carbs in a meal;
Basal, which is a slow, continuous dosage intended to keep blood sugars level over time, _and_ which, on this model of pump, can be automatically adjusted based on time of day.
So, all you would realistically need (in theory) is line of sight - since the 25' limitation is a Bluetooth spec limitation and not a hard and fast physical limitation - and to know what time the person typically goes to bed.
I would think a hacker with murderous intent would be much more likely to use a weapon, not a computer.
A weapon is a state of mind, not an object. You can be beaten to death with the (trivially) detachable seatbelt on an airplane if you put your seatmate in a mind to do so.
An insulin pump is no different. It would, however, be damn near impossible to prove or identify after the fact. There's no such thing as "insulin poisoning", there's just "hypoglycemia, resulting in unconsciousness, followed by death" if not caught in time.
As for the caliber of engineer required: considering this isn't "write an OS" but rather "remove or disable a 10-try counter", it's likely that the work could be done by a junior - or someone out of the country, for that matter. It's not the highest of high end jobs.
From the order:
(1) it will bypass or disable the auto-erase function whether or not it has been enabled; (2) it will enable the FBI to submit passcodes to the SUBJECT DEVICE for testing electronically via the physical device port, Bluetooth, Wi-Fi, or other protocol available on the SUBJECT DEVICE and (3) it will ensure that when the FBI submits passcodes to the SUBJECT DEVICE, software running on the device will not purposefully introduce any additional delay between passcode attempts beyond what is incurred by Apple hardware.
Arguably, (1) and (3) might be fairly simple, although given that I haven't seen the iOS source code, I can't say for certain.
(2), on the other hand, seems fairly unlikely to be currently implemented - although it may be implemented in debug code that can be enabled elsewhere in the code.
All of the above - regardless of how the requirements are implemented - would need to be validated and survive regression testing and quality control before the code could be loaded onto the phone.
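For what it's worth, the three requirements all target retry logic along these lines. This is a purely illustrative sketch - it is not Apple's actual code, and the threshold, delay schedule, and every name in it are invented:

```python
MAX_ATTEMPTS = 10   # invented constant, mirroring the well-known 10-try erase

def next_delay(failures):
    """Invented escalating-delay schedule. Requirement (3) asks that
    software-imposed delays like this one be removed, leaving only
    whatever delay the hardware itself incurs."""
    return 0 if failures < 5 else 60 * (2 ** (failures - 5))  # seconds

def check_passcode(entered, stored, state):
    """Returns (ok, delay_seconds). Requirement (1) asks that the
    auto-erase branch be bypassed; requirement (2) asks for a way to
    feed `entered` in electronically rather than via the touchscreen."""
    if entered == stored:
        state["failures"] = 0
        return True, 0
    state["failures"] += 1
    if state["failures"] >= MAX_ATTEMPTS:
        state["erased"] = True   # auto-erase: the encryption keys are destroyed
    return False, next_delay(state["failures"])
```

The point of the sketch is only to show that (1) and (3) are each a small change to one branch, while (2) requires a new input path that may not exist at all - which matches the guesswork above.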
"Essentially, if Apple's employees refuse to do the work, Apple would likely have to fire them with cause. End of benefits, end of vested shares, end of it all. It's unlikely that any engineers would take that risk (unless they got very, very bad legal advice)."
Software engineers capable of doing this type of coding at Apple's scale are in high demand. In all likelihood, no engineer who quit Apple over this would be unemployed for longer than they chose to be.
Similarly: Because of the caliber of software engineer required, it would quite likely be difficult to replace them on short notice.