Move Over, Software Developers – In The Name Of Cybersecurity, The Government Wants To Drive

from the unconstitutional-camel-noses dept

Earlier this year the White House put out a document articulating a National Cybersecurity Strategy. It sets out five “pillars,” or high-level focus areas where the government should concentrate its efforts to strengthen the nation’s resilience and defense against cyberattacks: (1) Defend Critical Infrastructure, (2) Disrupt and Dismantle Threat Actors, (3) Shape Market Forces to Drive Security and Resilience, (4) Invest in a Resilient Future, and (5) Forge International Partnerships to Pursue Shared Goals. Each pillar also includes several sub-priorities and objectives.

It is a seminal document, and one that has spawned, and will continue to spawn, much discussion. For the most part what it calls for is too high level to be particularly controversial. It may even be too high level to be all that useful, although there can be value in distilling any sort of policy priorities into words. After all, even if what the government calls for may seem obvious (like “defending critical infrastructure,” which of course we’d all expect it to do), going to the trouble of actually articulating it as a policy priority provides a roadmap for more constructive efforts to follow and may also help to marshal resources. It can also help ensure that any more tangible policy efforts the government is inclined to directly engage in are not at cross-purposes with what the government wants to accomplish overall.

Which is important because what the rest of this post discusses is how the strategy document itself reveals that there may already be some incoherence among the government’s policy priorities. In particular, it lists as one of the sub-priorities an objective with troubling implications: imposing liability on software developers. This priority is described in a few paragraphs in the section entitled, “Strategic Objective 3.3: Shift Liability for Insecure Software Products and Services,” but the essence is mostly captured in this one:

The Administration will work with Congress and the private sector to develop legislation establishing liability for software products and services. Any such legislation should prevent manufacturers and software publishers with market power from fully disclaiming liability by contract, and establish higher standards of care for software in specific high-risk scenarios. To begin to shape standards of care for secure software development, the Administration will drive the development of an adaptable safe harbor framework to shield from liability companies that securely develop and maintain their software products and services. This safe harbor will draw from current best practices for secure software development, such as the NIST Secure Software Development Framework. It also must evolve over time, incorporating new tools for secure software development, software transparency, and vulnerability discovery.

Despite some equivocating language, at its essence it is no small thing that the White House proposes: legislation instructing people on how to code their software and requiring adherence to those instructions. Such a proposal raises a number of concerns, both about the method the government would use to prescribe how software is coded and about the dubious constitutionality of its being able to make such demands at all. While with this strategy document itself the government is not yet prescribing a specific way to code software, it contemplates that the government someday could. And it does so apparently without recognizing how significantly such a power would shape software development – and not necessarily for the better.

In terms of method, while the government isn’t necessarily suggesting that a regulator enforce requirements for software code, what it does propose is far from a light touch: allowing enforcement of coding requirements via liability – or, in other words, via the ability of people to sue if software turns out to be vulnerable. But regulation via liability is still profoundly heavy-handed, perhaps even more so than regulatory oversight would be. For instance, instead of a single regulator working from discrete criteria, there will be myriad plaintiffs and courts interpreting the language however they understand it. Furthermore, litigation is notoriously expensive, even for a single case, let alone when multiplied across all those myriad plaintiffs. We have seen all too many innovative companies obliterated by litigation, and seen how the mere threat of litigation can chill the investment needed to bring good new ideas into reality. This proposal seems to reflect a naïve expectation that litigation will only follow where truly deserved, but we know from history that such restraint is rarely the rule.

True, the government does contemplate some tuning to dull the edge of the regulatory knife, particularly through the use of safe harbors, such that there would be defenses that could protect software developers from being drained dry by unmeritorious litigation threats. But while the concept of a safe harbor may be a nice idea, safe harbors are hardly a panacea, because we’ve also seen how, if you have to litigate whether one applies, it offers little protection even when it ultimately does. In addition, even if it were possible to craft an adequately durable safe harbor, given the current appetite among policymakers to tear down the immunities and safe harbors we currently have, like Section 230 or the already porous DMCA, the assumption that policymakers will actually produce a sustainable liability regime with sufficiently strong defenses, one not prone to innovation-killing abuse, is yet another unfortunately naïve expectation.

The way liability would attach under this proposal is also a big deal: through the creation of a duty of care for the software developer. (The cited paragraph refers to “standards of care,” but that phrasing implies a duty to adhere to them, and liability when those standards are deviated from.) But concocting such a duty is problematic both practically and constitutionally, because at its core, what the government is threatening here is alarming: mandating how software is written. Not suggesting how software should ideally be written, nor enabling, encouraging, or facilitating it to be written well, but instead using the force of law to demand how software be written.

It is so alarming because software is written, and it raises a significant First Amendment problem for the government to dictate how anything should be expressed, regardless of how correct or well-intentioned the government may be. Like a book or newspaper, software is expressed through language and expressive choices; there is not just one correct way to write a program that does something, but rather an infinite number of big and little structural and language decisions made along the way. But this proposal basically ignores the creative aspect of software development (indeed, software is even treated as eligible for copyright protection as an original work of authorship). Instead it treats software more like a defectively-made toaster than a book or newspaper, replacing the independent expressive judgment of the software developer with the government’s. Courts have also recognized the expressive quality of software, so it would be quite a sea change if the Constitution somehow did not apply to this particular form of expression. And such a change would have huge implications, because cybersecurity is not the only reason the government keeps proposing to regulate software design. The White House proposal would seem to bless all these attempts, no matter how ill-advised or facially censorial, by not even contemplating the constitutional hurdles any legal regime regulating software design would need to clear.

It would still need to clear them even if the government truly knew best, which is a big if, even here, and not just because the government may lack adequate or current expertise. The proposal does contemplate a multi-stakeholder process to develop best practices, and there is nothing wrong in general with the government taking on some sort of facilitating role to help illuminate what those practices are and to make sure software developers are aware of them – it may even be a good idea. The issue is not that there is no such thing as best practices for software development – obviously there are. But they are not necessarily one-size-fits-all or static; a best practice may depend on context, and it may constantly need to evolve to address new vectors of attack. A distant regulator, and one inherently in a reactive posture, may not understand the particular needs of a particular software program’s userbase, nor the evolving challenges facing the developer. Which is a big reason why requiring adherence to any particular practice through the force of law is problematic: it can effectively require software developers to write their code the government’s way rather than in what is ultimately the best way for them and their users. Or it at least puts them in the position of having to defend choices that, up until now, the Constitution had let them make freely. That would amount to a huge, unprecedented burden that threatens to chill software development altogether.

Such chilling is not an outcome the government should want to invite, and indeed, according to the strategy document itself, does not want. The irony of the software liability proposal is that it is inherently out-of-step with the overall thrust of the rest of the document, and even of the third pillar it appears in, which proposes to foster better cybersecurity through the operation of more efficient markets. But imposing design liability would have the exact opposite effect on those markets. Even if well-resourced private entities (e.g., large companies) might be able to find a way to persevere and navigate the regulatory requirements, small ones (including those potentially excluded from the stakeholder process establishing the requirements) may not be able to meet them, and individual people coding software are even less likely to. The strategy document refers to liability only for developers with market power, but not every software developer has market power, including those individuals who voluntarily contribute to open source software projects, which provide software users with more choices. Those continued contributions will be deterred if those who make them can be liable for them. Ultimately software liability will result in fewer people writing code and consequently less software for the public to use. So far from making the software market work more efficiently through competitive pressure, imposing liability for software development will only remove options for consumers, and with them the competitive pressure the White House acknowledges is needed to prompt those who still produce software to do better. Meanwhile, those developers who remain will still be inhibited from innovating if that innovation can potentially put them out of compliance with whatever the law has so far managed to imagine.

Which raises another concern with the software liability proposal and how it undermines the rest of the otherwise reasonable strategy document. The fifth pillar the White House proposes is to “Forge International Partnerships to Pursue Shared Goals”:

The United States seeks a world where responsible state behavior in cyberspace is expected and rewarded and where irresponsible behavior is isolating and costly. To achieve this goal, we will continue to engage with countries working in opposition to our larger agenda on common problems while we build a broad coalition of nations working to maintain an open, free, global, interoperable, reliable, and secure Internet.

On its face, there is nothing wrong with this goal either, and it, too, may be a necessary one to effectively deal with what are generally global cybersecurity threats. But the EU is already moving ahead to empower bureaucratic agencies to decide how software should be written, yet without a First Amendment or equivalent understanding of the expressive interests such regulation might impact. Nor does there seem to be any meaningful understanding about how any such regulation will affect the entire software ecosystem, including open source, where authorship emerges from a community, rather than a private entity theoretically capable of accountability and compliance.

In fact, while the United States hasn’t yet actually specified design practices that software developers must comply with, the EU is already barreling down the path of prescriptive regulation over software, proposing a law that would task an agency with dictating the criteria software must meet. (See this post by Bert Hubert for a helpful summary of its draft terms.) Like the White House, the EU confuses its stated goal of helping the software market work more efficiently with an attempt to control what can be in that market. For all the reasons that such an attempt by the US stands to be counterproductive, so would the EU’s efforts be, especially born as they are from a jurisdiction without a First Amendment or equivalent understanding of the expressive interests at stake. It may thus turn out to be European bureaucrats who first attempt to dictate the rules of the road for how software can be coded, but that makes it America’s job to try to prevent that damage, not to double down on it.

It is of course true that not everything software developers currently do is a good idea or even defensible. Some practices are dreadful and damaging. It isn’t wrong to be concerned about the collateral effects of ill-considered or sloppy coding practices, or for the government to want to do something about them. But how regulators respond to these poor practices is just as important, if not more so, than that they respond, if they are going to make our digital environment better and more secure rather than worse and less so. There are a lot of good ideas in the strategy document for how to achieve this end, but imposing software design liability is not one of them.


Comments on “Move Over, Software Developers – In The Name Of Cybersecurity, The Government Wants To Drive”

107 Comments
Anonymous Coward says:

Despite some equivocating language, at its essence it is no small thing that the White House proposes: legislation instructing people on how to code their software and requiring adherence to those instructions.

Prediction: this will create bureaucratic procedure that can only be properly followed by large software houses, and effectively outlaw individual development of internet-connected software. Also, it will not make the software any more secure, just more expensive to develop, with the safe harbor ensuring that if all the boxes are ticked, the company escapes liability for security failures.


Greg says:

Nah.

Jail the stupid software developers.

For years it’s been warned that software developers need to self-regulate, otherwise they will be regulated.

The industry and programmers themselves have ignored the need. Just look at software problems with cars.

I’m honestly surprised after the Boeing mess and car software failures that it’s taken this long.

I also disagree with the First Amendment issue. Yes, things can be done differently. You can also build a plane differently, a house differently, etc., and what you are arguing is that, say, building a plane that would kill anyone who attempted to fly it is a matter of speech and thus the government shouldn’t be able to impose safety regulations. Just because code is text doesn’t make it speech any more than slapping together wood is speech.

Not to defend this as the right or correct answer. But the tech world has been given the option to self regulate like any other professional trade and they have refused.

Anonymous Coward says:

Re:

Jail the stupid software developers.

Hahahahaha!

Hahahaha!

Heh heh…

Sorry. You misunderstand the guidelines the article is talking about.

1) Someone writes software
2) later, someone finds a flaw in that software
3) Pols and the mob immediately descend upon the author – “liability” lawsuits, civil/criminal penalties.

Note that there are no explicit times in that sequence. It could be months or years or decades between the software being written and someone’s life being ruined because of stupid laws.

Note also that the software may have been written with all of the then-current standards, methods, libraries, etc.

Nobody in the history of software development has ever written software with the intent of it getting hacked. (Note: Not saying that backdoors have not been written…)

What Ms Gellis is saying is that this is an excuse to scapegoat software developers.

Which is exactly what you are rooting for.

PS:

Just because code is text doesn’t make it speech any more than slapping together wood is speech.

Ever listen to marimba music?

Greg says:

Re: Re:

Some idiot builds a building. 20 years later it falls and kills everyone.

Some idiot writes software for the building and 20 years later it bugs out and kills everyone.

What you seem to think is “but software” where that somehow magically changes things.

Again though I don’t think this outcome is a good one or the right one. But the tech world and software “engineers” wanted to play the wild west.

This has been forecast by major founders in the programming world for years now.

Anonymous Coward says:

Re: Re: Re:

Some idiot builds a building. 20 years later it falls and kills everyone.

Some idiot writes software for the building and 20 years later it bugs out and kills everyone.

What you seem to think is “but software” where that somehow magically changes things.

Well, yes. In building a building, there are specific tests you can perform on the elements, certain ways you can prevent failure… you can certify that a specific part will not fail under specific loads.

In software, there are no specific loads. No specific strengths. You can’t calculate how your construction will react to physics, because it’s not physics… it’s someone with too much time on their hands and lots of curiosity.

Building architects aren’t made liable if some random person on the other side of the world figures out that one, solitary, very specific set of steps will bypass all of their safeguards and cause the building to fail in one specific way that might or might not cause harm.

If software “bugs out” and has the capacity of “killing everyone”, then surely those designing the physical hardware (without which, the software could have killed no one) have at least some responsibility.

Mamba (profile) says:

Re: Re: Re:2

You’re missing the point. NOBODY tests complete buildings. It’s just as unfeasible as completely testing software, maybe more so. Codes occasionally require things to be designed for events that have never occurred before. And guess what: no inspector, no jury, and hopefully no owner, would accept a product without some warranty of suitability. Because they never work perfectly, either. And failures do happen, especially with new materials or processes. There’s a new move towards larger and bigger wood structures. That seems to be working well with buildings, but not with bridges. Several wood bridges have failed early… and guess who’s getting sued?

The way safety is achieved is by carefully building it up from the bottom, from the very smallest component. Assemble them in known, documented ways. There are ASTM standards for nuts that are pages long. UL has testing standards for gypsum board. Safety factors are built in.

Sometimes, at immense expense, scale models are built for things like shake-table testing. Or you implement 100% inspection and testing, along with chain of custody. I’ve seen a million-dollar SS weldment get scrapped because the vendor couldn’t locate the documentation associated with 2 feet of welds. No rework was allowed.

And you’re absolutely wrong about the liability of engineers, architects, and designers. If one product, in one place, has a known failure, then certainly other customers will hold them responsible for fixing it if they are exposed the same way. For example, recalls occurred long before software was even a thing. And a lot of the time it’s just to inspect to see if the product is affected. Food recalls are another.

Product bulletins are a usual vehicle in the construction world. And it really sucks to figure out that the anchors you used on a product have a known failure point… but are embedded in feet of concrete, never to be seen again. You can’t inspect them. You can’t test them. So you get to install post-tension anchors at huge cost and hope your insurance manages to get some damages out of the manufacturer.

Anonymous Coward says:

Re: Re: Re:3

You’re missing the point. NOBODY tests complete buildings.
The way safety is achieved is by carefully building it up from the bottom, from the very smallest component. […] Safety factors are built in.

No, you are missing the point. Buildings only have to contend with the unchanging laws of physics, and there’s no such thing for software.

There’s no such thing as “safety factors” for computer code. You can’t destructively test an if statement to see at what load it will fail – and even if you could, that would go out of the window as soon as it was combined with other code. At most, code can be tested against specified cases – but none of that testing matters if an adversary thinks up something new that wasn’t specified.

And you’re absolutely wrong about the liability of engineers, architects, and designers. If one product, in one place, has a known failure, then certainly other customers will hold them responsible for fixing it if they are exposed the same way.

Your analogy is just bad. Software doesn’t fail the way buildings do, and buildings aren’t constantly being subjected to new and previously unknown forces and under attack by intelligent actors.

Rocky says:

Re: Re: Re:4

No, you are missing the point. Buildings only have to contend with the unchanging laws of physics, and there’s no such thing for software.

The laws of physics don’t change, but the environment certainly does – more on this later.

There’s no such thing as “safety factors” for computer code. You can’t destructively test an if statement to see at what load it will fail – and even if you could, that would go out of the window as soon as it was combined with other code.

Oh? I do it on a regular basis, but not on well understood operations that have an explicitly verified and defined behavior. Your argument is kind of disingenuous for the simple reason that not everything that goes into a building is destructively tested. Or do you actually believe that builders destructively test every 2×4? Of course they don’t, because it is clearly understood what load and environment a 2×4 can handle.

At most, code can be tested against specified cases – but none of that testing matters if an adversary thinks up something new that wasn’t specified.

Just like how a building is evaluated and its separate components are tested, and how an adversary can think up something new to do nefarious deeds to a building (the most common just so happens to be arson).

Your analogy is just bad. Software doesn’t fail the way buildings do, and buildings aren’t constantly being subjected to new and previously unknown forces and under attack by intelligent actors.

Burglars, saboteurs, terrorists and run of the mill criminals (plus the odd tenant doing stupid stuff) come to mind. Plus, rot and bad upkeep, undetected soil conditions, sinkholes, unprecedented weather-conditions for the region and wrong usage may well destroy a building.

Or do you actually believe that buildings never experience previously “unknown forces”? On average 8 buildings collapse per year, resulting in several hundred deaths. If it has escaped you, everything is exposed to an environment that has detrimental elements. That the nature of a digital environment is different from a physical one isn’t actually relevant; what matters is how you deal with them.

Anonymous Coward says:

Re: Re: Re:5

Burglars, saboteurs, terrorists and run of the mill criminals (plus the odd tenant doing stupid stuff) come to mind. Plus, rot and bad upkeep, undetected soil conditions, sinkholes, unprecedented weather-conditions for the region and wrong usage may well destroy a building.

And how many of those factors are the building’s architects held liable for?

Was every architect required, at their own expense, to make buildings safe against airplane strikes after 9/11?

Anonymous Coward says:

Re: Re: Re:6

Was every architect required, at their own expense, to make buildings safe against airplane strikes after 9/11?

Considering that a B-25 crashed into the Empire State Building and the BUILDING SURVIVED…

I think they already do factor in certain black swan events.

It’s extremely rare that something like 9/11 happens, and even then, it was also revealed that some corners were cut as well…

Anonymous Coward says:

Re: Re: Re:7

Considering that a B-25 crashed into the Empire State Building and the BUILDING SURVIVED…

I think they already do factor in certain black swan events.

Oh, well, it’s nice that you think that since one building under differing circumstances happened to survive, all the buildings except the ones that failed must be able to survive, and only buildings that “cut corners” would ever fail.

But I’m not entirely sure how that very silly assumption has any bearing on whether the architects would be liable for making sure all the buildings can survive every circumstance, rare or not.

Rocky says:

Re: Re: Re:6

And how many of those factors are the building’s architects held liable for?

You are conflating architect with a builder.

Was every architect required, at their own expense, to make buildings safe against airplane strikes after 9/11?

Just like how every software architect, at their own expense, has to make software safe against known CVEs.

Anonymous Coward says:

Re: Re: Re:7

And how many of those factors are the building’s architects held liable for?

You are conflating architect with a builder.

Since programmers can be analogous to both, I’m sure we’re perfectly ok with you answering how many of those factors either one is held liable for. You didn’t just bring up the distinction to distract from the actual point, right?

Just like how every […]

I’ll stop you right there, because you didn’t answer the question. How many architects (or builders, or whatever) were held liable for making sure every existing building can survive every possibility of the kind of failure that happened on 9/11?

Rocky says:

Re: Re: Re:8

Since programmers can be analogous to both, I’m sure we’re perfectly ok with you answering how many of those factors either one is held liable for. You didn’t just bring up the distinction to distract from the actual point, right?

No, a programmer isn’t analogous to both, but a person can have both roles, just like how a builder can also be an architect. If you want to conflate different roles, go ahead, but if you want to use that to attack me and my argument, it only tells me you have no real counter argument.

I’ll stop you right there, because you didn’t answer the question. How many architects (or builders, or whatever) were held liable for making sure every existing building can survive every possibility of the kind of failure that happened on 9/11?

I don’t need to answer it, because the question is actually utterly irrelevant. If you think it is relevant you better explain why it matters that a 3rd party with no involvement in a terrorist act should be held liable. You know, a legal theory that actually makes sense for this context.

Anonymous Coward says:

Re: Re: Re:9

you better explain why it matters that a 3rd party with no involvement in a terrorist act

You’re the one who brought terrorists into it in the first place! (and burglars, saboteurs, tenants, dry rot…) Why are you demanding that I explain your own argument to you?

Oh, right. That’s why. We’re still waiting for you to answer how the conditions that YOU brought up have any relevance.

To quote someone, recently:

you have no real counter argument.

Rocky says:

Re: Re: Re:10

You’re the one who brought terrorists into it in the first place! (and burglars, saboteurs, tenants, dry rot…) Why are you demanding that I explain your own argument to you?

I asked you because I wanted confirmation that you understood the context, but it seems you are totally incapable of that. Let me actually rewind to what you said:

Your analogy is just bad. Software doesn’t fail the way buildings do, and buildings aren’t constantly being subjected to new and previously unknown forces and under attack by intelligent actors.

To which I answered:

Burglars, saboteurs, terrorists and run of the mill criminals (plus the odd tenant doing stupid stuff) come to mind. Plus, rot and bad upkeep, undetected soil conditions, sinkholes, unprecedented weather-conditions for the region and wrong usage may well destroy a building.

I.e., buildings are subjected to new and previously unknown forces, and are under attack by intelligent actors too. Buildings aren’t exempt from the reality of human imperfection.

Oh, right. That’s why. We’re still waiting for you to answer how the conditions that YOU brought up have any relevance.

See above, and do learn how context works.

Rocky says:

Re: Re: Re:12

So, you brought up a bunch of things that, by your own admission, are not relevant in the context of builders being held liable for failures in buildings.

No, you asked a dumb question about liability that has no relevance to your prior argument that buildings aren’t “being subjected to new and previously unknown forces and under attack by intelligent actors,” for which I listed things that can actually affect a building.

If you can’t keep track of what’s being discussed then, as I suggested earlier, go do something else.

Anonymous Coward says:

Re: Re: Re:7

Just like how every software architect, at their own expense, has to make software safe against known CVEs.

Interesting. Finally, a concrete statement. However, it is only of limited value.

CVEs are filed against software that is in use, not software that is in development. Not all software is maintained. And no, you can’t force the authors to maintain it. If they want to continue selling/licensing their product, sure, they’ll fix it. It might take some public outcry, or activist hackers creating mods, or whatever. But if you find a flaw in 20-year-old software, the author could be dead by the time you start looking to blame them. Good luck with your Ouija board.

Ultimately, if a piece of software has reported CVEs against it, and the author of the software does not fix it, the entity responsible for any failures is … the one using it. Surprise! Don’t sit on a tack and whine about the pain.

Anonymous Coward says:

Re: Re: Re:6

Was every architect required, at their own expense, to make buildings safe against airplane strikes after 9/11?

Status of NIST’s Recommendations Following the Federal Building and Fire Investigation of the World Trade Center Disaster; August 8, 2011:

Recommendation 1. NIST recommends that: (1) progressive collapse be prevented in buildings through the development and nationwide adoption of consensus standards and code provisions, along with the tools and guidelines needed for their use in practice; and (2) a standard methodology be developed—supported by analytical design tools and practical design.
Affected Standards: ASCE-7, AISC Specifications, and ACI 318. These standards and other relevant committees should draw on expertise from ASCE/SFPE 29 for issues concerning progressive collapse under fire conditions. Model Building Codes: The consensus standards should be adopted in model building codes (i.e., the International Building Code and NFPA 5000) by mandatory reference to, or incorporation of, the latest edition of the standard. State and local jurisdictions should adopt and enforce the improved model building codes and national standards based on all 30 WTC recommendations. The codes and standards may vary from the WTC recommendations, but satisfy their intent.
Research Outcome: (NIST) Best Practices Guideline published February 2007.
Code Outcomes: (IBC) Provides minimum structural integrity for framed and bearing wall structures through continuity and tie-force requirements for buildings over 75 ft. in height that represent a substantial hazard to human life in the event of failure (e.g., buildings with occupant loads exceeding 5,000) and essential facilities, such as hospitals.) This code change is intended to enhance overall structural integrity but is not intended to prevent progressive collapse in structures. (IBC) Clarifies the definition of secondary structural members by including roof construction that does not have direct connections to the building columns.

So, in summary: kind of. New large buildings need to be designed (by architects and engineers, presumably at the buyer’s expense) to be safer against airplane strikes. Smaller-scale accidents of that type were already concerns when the WTC was built; quoting NCSTAR 1:

5.3.2 Aircraft Impact
The accidental 1945 collision of a B-25 aircraft with the Empire State Building sensitized designers of high-rise buildings to the potential hazards of such an event. However, building codes did not then, and do not currently, require that a building withstand the impact of a fuel-laden commercial jetliner. A Port Authority document indicated that the impact of a Boeing 707 aircraft flying at 600 mph was analyzed during the design stage of the WTC towers. However, the investigators were unable to locate any documentation of the criteria and method used in the impact analysis and were thus unable to verify the assertion that “…such collision would result in only local damage which could not cause collapse or substantial damage to the building and would not endanger the lives and safety of occupants not in the immediate area of impact.” Since the ability for rigorous simulation of the aircraft impact and of the ensuing fires are recent developments and since the approach to structural modeling was developed for this Investigation, the technical capability available to The Port Authority and its consultants and contractors to perform such an analysis in the 1960s would have been quite limited.

Anonymous Coward says:

Re: Re: Re:5

Or do you actually believe that builders destructively test every 2×4? Of course they don’t, because it is clearly understood what load and environment a 2×4 can handle.

And that information came from destructive testing of lots of 2x4s. That is, destructive testing can be carried out by increasing the load on physical objects, while for software the nearest equivalent, trying all possible inputs, is not usually feasible. Also, buildings are over-specified as to loads, which is not possible for software interfaces. Also, an update to a library can result in previously safe input becoming a security risk.

Rocky says:

Re: Re: Re:6

And that information came from destructive testing of lots of 2x4s

And you don’t think specific operators have an extremely well understood behavior?

That is, destructive testing can be carried out by increasing the load on physical objects, while for software the nearest equivalent, trying all possible inputs, is not usually feasible.

So why aren’t you proposing that we do destructive testing on whole buildings? Because that’s the argument you are making about software. The law of unintended consequences affects buildings as much as software.

“Oh, this circuit was overloaded so the building caught fire and burnt down killing everyone it it, why didn’t they destructively test the electrical system?” – that’s your argument when translated to buildings.

Also, buildings are over-specified as to loads, which is not possible for software interfaces.

And that statement would be relevant if every building was perfectly constructed using perfect materials and techniques – but that’s not the reality, is it now? Buildings fail in various ways even though most of the components going into them were well specified and understood, just like a piece of software.

Also, an update to a library can result in previously safe input becoming a security risk.

Just like how a replacement of a rooftop HVAC unit can lead to leaks in the roof, electrical fires, or the roof collapsing due to being overloaded.

The simple fact is that humans make errors, regardless of whether it’s making software or building a house, a plane or a car. Yet you persist in thinking that building physical things is an entirely different affair from building software. The failure modes are different, but the failures are mostly due to human errors somewhere in the process.

Anonymous Coward says:

Re: Re: Re:5

Or do you actually believe that builders destructively test every 2×4? Of course they don’t, because it is clearly understood what load and environment a 2×4 can handle.

How on earth do you think it came to be clearly understood what load and environment a 2×4 can (and cannot) handle?

And why would you assume that talking about destructively testing an if statement would be analogous to destructively testing 2x4s in active use, rather than testing other 2x4s to learn something about the ones you will actively use?

I’ll tell you why: because the analogy is bullshit, and you’re struggling to try to make it work when it just doesn’t. You can’t test one if statement in order to learn something about how a different if statement might fail. Software is not buildings.

Rocky says:

Re: Re: Re:6

I’ll tell you why: because the analogy is bullshit, and you’re struggling to try to make it work when it just doesn’t.

Please be specific in what way my analogy is bullshit.

And why would you assume that talking about destructively testing an if statement would be analogous to destructively testing 2x4s in active use, rather than testing other 2x4s to learn something about the ones you will actively use?

The basic premise was that an input to a program can make an if-statement “fail”, which is analogous to a 2×4 failing inside a structure. Unless you can test all loads and conditions, even unexpected ones, on a 2×4 inside a building, you can’t demand that an if-statement inside an application be tested in such a manner. If you do, you are setting up unfair conditions to bolster your argument.

Anonymous Coward says:

Re: Re: Re:7

Unless you can test all loads and conditions, even unexpected ones, on a 2×4 inside a building

But, wait, you said:

it is clearly understood what load and environment a 2×4 can handle

There is a reason for that, and there is a reason you cannot say the same in general about if statements. You wouldn’t be ignoring those reasons just to set up unfair conditions, would you?

Please be specific in what way my analogy is bullshit.

It’s right there in the parts you decided not to quote. Ignore it if you like, but that won’t make the analogy of software to buildings any less bullshit.

Rocky says:

Re: Re: Re:8

But, wait, you said:

You don’t really understand context, do you?

it is clearly understood what load and environment a 2×4 can handle
There is a reason for that, and there is a reason you cannot say the same in general about if statements. You wouldn’t be ignoring those reasons just to set up unfair conditions, would you?

Are you suggesting that you don’t understand how an if statement behaves under clearly defined parameters? Perhaps you should go do something other than discuss things you don’t understand, then.

It’s right there in the parts you decided not to quote. Ignore it if you like, but that won’t make the analogy of software to buildings any less bullshit.

You could have had an easy win here by actually explaining why my analogy is bullshit. Instead you doubled down on vagueness to yet again proclaim it’s bullshit. It’s kind of telling.

Rocky says:

Re: Re: Re:10

With human provided input there are no clearly defined parameters

So no specification what the input should be then?

Just try writing a clear and unambiguous definition of what a person’s name can be.

That’s in the specification, along with how user input errors should be handled.

You could of course also ask if everything a human does, including constructing a building, has clearly defined parameters that guarantee a known outcome.

Anonymous Coward says:

Re: Re: Re:11

So no specification what the input should be then?

More a case that the specification can change, and not all impacted programs are found. I.e., a report generator will be written for the current name-field length of the database, and that can later be changed and either cause a crash or a termination with an invalid-data message.

Anonymous Coward says:

Re: Re: Re:12

Software should never crash because its input data didn’t meet the specification. That’s just lazy. You reject the data with an error, and the specification needs to allow for that. And, if it were being developed in an “engineering” sort of way, the developers would know the interdependencies, and would know to adjust the consumer(s) when a relevant Engineering Change Order was applied.
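To make that concrete, here is a minimal sketch of the reject-don’t-crash idea in C (the 64-byte field width and the function name are hypothetical, just for illustration):

#include <stdio.h>
#include <string.h>

#define NAME_FIELD_MAX 64  /* hypothetical field width from a database spec */

/* Validate a record's name field before use: reject out-of-spec input
 * with a diagnostic instead of copying it blindly and crashing (or
 * corrupting memory) downstream. */
static int load_name(const char *input, char out[NAME_FIELD_MAX]) {
    size_t len = strlen(input);
    if (len == 0 || len >= NAME_FIELD_MAX) {
        fprintf(stderr, "rejected name field: length %zu out of range [1, %d)\n",
                len, NAME_FIELD_MAX);
        return -1;  /* caller must handle the rejection, per the spec */
    }
    memcpy(out, input, len + 1);  /* includes the terminating NUL */
    return 0;
}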

Anonymous Coward says:

Re: Re: Re:14

Does it make any difference if a program dies through a crash or with an error message that is only meaningful to programmers?

Yes, a big one: an attacker can often exploit a crash to run arbitrary code, whereas an explicit error-exit has well-defined behavior. For example, as of 1988, the BSD “finger” daemon simply assumed its input would fit into a fixed-size buffer, which was how the Morris worm spread. Also see the more recent Heartbleed; the error message, had one been triggered, would’ve only been of interest to programmers anyway: no legitimate user would’ve triggered that code path.

Do you really want to add validation code to every little report generating program?

Want? That doesn’t really matter. It kind of needs to be done. But it benefits the programmer more than you might think. I’ve seen several “this will never happen” cases get triggered in my career.

Often, it’s not even that hard to do. For example, the finger utility’s author could’ve written some helper like ngets(char *s, int len) to replace gets(). Call read(), return NULL if it fails or the buffer lacks a newline; callers of gets() have to handle NULL already, because that’s returned when the connection is dropped.
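For what it’s worth, a rough sketch of such a helper in C (hypothetical, not the actual historical fix, and using fgets() rather than raw read() for brevity):

#include <stdio.h>
#include <string.h>

/* Hypothetical bounded replacement for gets(): reads at most len-1 bytes
 * and requires the whole line, newline included, to fit in the buffer.
 * Returns NULL on EOF, error, or an over-long line – the same NULL that
 * gets() callers already handle when a connection drops. */
static char *ngets(char *s, int len) {
    if (fgets(s, len, stdin) == NULL)
        return NULL;                 /* EOF or read error */
    char *nl = strchr(s, '\n');
    if (nl == NULL)
        return NULL;                 /* line too long for the buffer */
    *nl = '\0';                      /* strip the newline, as gets() did */
    return s;
}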

Anonymous Coward says:

Re: Re: Re:9

Are you suggesting that you don’t understand how an if statement behaves under clearly defined parameters?

I have a 2×4 in my house. Barring any damage particular to it, it’s going to behave like any other 2×4. As you, yourself, said, “it is clearly understood what load and environment a 2×4 can handle”. Without so much as looking at them, an engineer could give ranges of load that should be safe, tell different ways they could fail under different kinds of stress, and lots of other information… based on destructive testing of other 2x4s, which has pretty well explored the space of inputs a 2×4 is at all capable of receiving.

I have 100 if statements on my notepad. I dare anyone to provide even the barest whisper of a guess about what any one of them can handle, how any of them can fail, or make any positive predictions about them as a group, without any other information.

It’s kind of telling.

That you couldn’t respond to the part I pointed out that you didn’t quote, and tried to pretend didn’t exist? Yes, it’s very telling.

Rocky says:

Re: Re: Re:10

I have a 2×4 in my house. Barring any damage particular to it, it’s going to behave like any other 2×4. As you, yourself, said, “it is clearly understood what load and environment a 2×4 can handle”. Without so much as looking at them, an engineer could give ranges of load that should be safe, tell different ways they could fail under different kinds of stress, and lots of other information… based on destructive testing of other 2x4s, which has pretty well explored the space of inputs a 2×4 is at all capable of receiving.

So, we have clearly defined parameters under which they will operate. So why aren’t you extending that reasoning to an if-statement?

I have 100 if statements on my notepad. I dare anyone to provide even the barest whisper of a guess about what any one of them can handle, how any of them can fail, or make any positive predictions about them as a group, without any other information.

I dare you to tell me how 100 2×4’s would behave in a building without any further information.

Rocky says:

Re: Re: Re:12

So, are you wrong now, or were you wrong then?

Neither, it’s you who just don’t get it.

You put up a scenario about 100 if-statements, then claim no one can know the result of them without any other information. I suggested you do the same for 100 2×4s in a building and how they will behave, without any other information.

Mamba (profile) says:

Re: Re: Re:6

Ugh. You can’t just assume every 2×4-shaped piece of wood is going to perform the same. There are standards that define them, and every product run will have a certain amount tested to meet the specification. If new fixation methods are developed, then someone (standards body, manufacturer, etc.) needs to go back and test the old materials with the new.

On high-risk projects, every batch of concrete might be slump tested before pouring. Then coupons are poured for strength testing at 30 days.

Anonymous Coward says:

Re: Re:

Code can be speech, but it’s usually not. A function that takes a number as an input and determines whether it is prime is not “speech” in any meaningful way. The main reason it’s considered as speech is so it can be shoehorned into copyright protection.

Nobody buys Microsoft Office because they like the creativity of the source code. Users don’t even have access to it, it’s probably been run through an obfuscator, and the TOS probably explicitly disallows trying to decompile the program to even try to look at it.

Anonymous Coward says:

Re:

We could just as well say that unbreakable bridges don’t exist. But, even though there’s no theoretical way they could ever exist, they’re not collapsing left and right (hey, nobody could’ve predicted a striped yellow-and-silver van would’ve driven across!). Meanwhile, software, which could in principle be written and proven to be secure, is so bad that people sometimes find security holes by accident, like by holding the “enter” key too long.

I think it’s a stretch to say that software liability is against the First Amendment. People can be sued for erroneous financial reports, for example, and even go to prison for it. And, arguably, if your dehumidifier burns your house down, the actual flaw might’ve been in the “speech”—the plans for that device—but we don’t give blanket immunity to the engineers who draw those plans. Once money’s attached, we no longer treat things as “just” speech. (Which is a little strange given the Citizens United ruling; I’m surprised people haven’t been using that to overturn other market regulations.)

Anonymous Coward says:

Re: Re: Re:2

Indeed, when I wrote “in principle” I was well aware that phrase was doing a hell of a lot of work. For a software example, see What is Proved and What is Assumed (about seL4):

Formal proofs can be tricky. They prove exactly what you have stated, not necessarily what you mean or what you want.
Our proof statement in high-level natural language is the following:
The binary code of the ARM and RISCV64 versions of the seL4 microkernel correctly implement the behaviour described in its abstract specification and nothing more. Furthermore, the specification and the seL4 binary satisfy the classic security properties called integrity and confidentiality.

The linked page documents certain assumptions the proof relies on. Notably, about 1500 lines of code are unverified and the hardware has to work correctly. Nevertheless, that’s much, much better than most of the code we rely on. In most cases, nobody’s even tried to prove software correct. We just kind of hope it won’t fail too badly, like the “good old days” when we built bridges by nailing some pieces of wood together till it looked like it’d hold.
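As a toy illustration of “they prove exactly what you have stated” (my own made-up example in Lean 4, nothing to do with seL4 itself): if your specification for a sort routine only says “the output is sorted” and forgets to say “it’s a permutation of the input,” then a function that throws all the data away verifies perfectly.

-- A sortedness predicate over lists of naturals.
def Sorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

-- A useless "sort" that satisfies that spec by returning nothing.
def badSort (_ : List Nat) : List Nat := []

-- The proof goes through: the empty list is vacuously sorted.
theorem badSort_sorted (xs : List Nat) : Sorted (badSort xs) :=
  trivial

The proof is airtight; it’s the specification that was wrong, which is exactly the failure mode the seL4 page is warning about.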

Anonymous Coward says:

Re: Re:

the plans for that device—but we don’t give blanket immunity to the engineers who draw those plans

Generally speaking, we do actually do that. If you find a plan lying around and manufacture the object on the plan, and it fails, then the person who drew the plan has no liability to anyone involved. The act of drawing a plan does not result in an inherent warranty of suitability for any purpose. Liability applies to the manufacturer of a product for sale if that product is not suitable for its purpose.

Which is why, when manufacturers actually decide on plans, they always write up a contract with the drawer of said plans (and the suppliers of raw and processed materials, and any secondary manufacturers they make use of, and pretty much everyone else involved).

The “liability” is only a contractual obligation with the one producing the product for sale, not an independent tort. They have a contractual obligation to defend the manufacturer should any legal liability arise to the manufacturer due to failures in the provided design.

Anonymous Coward says:

Re: Re: Re:

Generally speaking, we do actually do that. If you find a plan lying around and manufacture the object on the plan, and it fails, then the person who drew the plan has no liability to anyone involved.

If you want to get technical about it, it’s the act of an engineer or architect putting their professional seal on the plans that establishes liability. That’s them saying they take responsibility for it. If anyone were dumb enough to build from a plan whose author refused to do that, they’d almost certainly be found liable for negligence if it harmed anyone.

Meanwhile, governments and private entities regularly buy software from companies who require the buyers to agree to absolve the companies from any and all liability. It’s not considered negligent of them to use software obtained in that way.

So, that’s another possible way forward. A flaw in Windows causes your computer to leak private data? Well, you had a responsibility to protect the data, and since you agreed that Microsoft had no responsibilities whatsoever, the liability falls to you. Maybe you should’ve bought something certified. We could start by saying governments will no longer buy uncertified software.

TKnarr (profile) says:

TBH as a software engineer I’m not too worried about regulation by liability. On the commercial/professional side, requirements for due care as a matter of law come with a fair amount of protection for me. The biggest problem hasn’t ever been the engineers, it’s the managers and executives who want corners cut. Having a requirement to not cut corners enshrined in law gives me more leverage both in doing it right in the first place and in avoiding retaliation for balking at their requests. HR and Legal don’t care about delivery deadlines and they’re rightly allergic to the idea of ordering an employee to break the law when there’ll absolutely be a record of it and of terminating an employee shortly after having balked at that when there’ll absolutely be a record of that too.

On the open-source side, liability law already has provisions in place for how to sell something as-is and disclaim liability, and open-source software already meets all of the criteria. Even more so given that open-source software isn’t actually sold, so if it’s not fit for purpose the customer hasn’t suffered any loss there.

TKnarr (profile) says:

Re: Re:

That’s because in most fields of engineering there’s not only liability imposed for faults in the product, the laws go so far as to say that only an engineer can make certain decisions and that it’s a felony to try to overrule them. I’d love to see software development in an environment where a manager even suggesting that the developer won’t get a raise if they don’t become “more responsive to business’s needs” instead of taking the time to do it right or implementing proper security instead of making convenience the only priority would get that manager thrown in jail.

Mamba (profile) says:

“Nobody in the history of software development has ever written software with the intent of it getting hacked.”

While that may be true, given the amount of software that is created with absolutely no effort made to prevent it, the distinction is academic.

Also, the rest of your complaints are a bit weak sauce… considering that’s what the table stakes are for PEs. Negligent homicide is a real possibility if I deliver a design without consulting standards, adhering to codes, or using best practices, and someone dies. I also want to point out that the National Electrical Code is managed by firemen, because they got tired of whole cities burning down.

And licensing and/or third-party testing is an easy end run around any First Amendment issues. Just like, well, everything else, it would be easy to require that any custom software be stamped by a PE, and any productized software have a third-party (UL-like) certification.

Also, the government already has its hands in cybersecurity in many places. NERC CIP, for example. And yes, they can be a pain in the ass. Prior to their existence, utilities were pretty notorious for doing things like putting protective relays directly on the internet, with no firewall or NAT… while leaving the password set to default (80% of NA relays, that’s otter/tail, so not even hard to remember).

So now there are rules around it.

All of this is to say, if the industry doesn’t improve itself…others will do it for them. Which is never fun.

Anonymous Coward says:

Re:

“Nobody in the history of software development has ever written software with the intent of it getting hacked.”

While that may be true, given the amount of software that is created with absolutely no effort made to prevent it, the distinction is academic.

The distinction is not academic, and it shows how little you know about how software has been made for the (checks notes) last 60+ years. Yes, going back to at least the punchcard era.

Like hardware, software is not built out of whole cloth, brand new, right out of the box. Even projects that developers term “green field,” ones that are “new,” are not new. They’re built from third-party libraries, language standard libraries, even language features that have been developed for decades. Nothing in any program you use is written whole cloth; everything is an abstraction on the level below it. Everything from techniques to algorithms to the software itself has been built up from the early days of computing.

There exists no software that is completely bug free; to require this is to greatly underestimate the complexity of code and to ignore the history of software. There are millions of lines of code in your operating system of choice, built on top of third-party libraries with their own unknown exploits, built on top of languages which may have their own vulnerabilities, running on hardware with its own vulnerabilities. Even if you could freeze time and go through those countless lines of code, adding one new feature, or one line, or even fixing a vulnerability, can have unseen consequences for the software, due to the way that complex software interacts. Let me repeat that: fixing a vulnerability can itself lead to more vulnerabilities.

You cannot guarantee software with any level of complexity will be bug free. To claim so is to be ignorant of the entire history of software development.

TKnarr (profile) says:

Re: Re:

There exists no software that is completely bug free; to require this is to greatly underestimate the complexity of code and to ignore the history of software.

As noted earlier, we can’t build bridges that are guaranteed to never ever collapse either. That doesn’t mean we have bridges regularly collapsing just because someone drove a model of car that didn’t exist when the bridge was built across them. You don’t need to guarantee bug-free software to impose rational standards for writing it in a way that prevents the all-too-obvious bugs we see every day (see for example the LastPass compromise, or this week’s total crash of Britain’s air traffic control system caused by someone entering an invalid value in a flight plan).

I wonder what’d be involved in making a baseball bat with a titanium outer shell over a tungsten carbide core?

Anonymous Coward says:

Re: Re: Re:2

In a nutshell, writing secure software is equivalent to solving the halting problem.

That ain’t no result I ever heard of. Where’s the proof they’re equivalent? And how do we square that with software that’s been proven correct, such as seL4 (under certain assumptions)?

Even where the halting problem is actually relevant, it can usually be worked around. As with Berkeley Packet Filter, one can simply reject programs for which the halting problem is not easily decidable, in addition to those known to not halt. The class of programs that can be proven to halt is large enough to be useful.
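For the skeptical, here’s a toy sketch of that idea in Python. This is not the real BPF verifier, which checks far more than this; classic BPF guarantees termination by allowing only forward jumps, so every accepted program’s control flow is a DAG and must reach the end. Some perfectly fine programs get rejected, and that’s the point: reject anything you can’t cheaply prove.

```python
# Toy BPF-style check: instead of solving the halting problem in
# general, accept only programs whose jumps all go strictly forward.
# Such programs trivially terminate; everything else is rejected,
# including some programs that would in fact halt.

def terminates(program):
    """program: list of (opcode, jump_target_or_None) tuples."""
    for pc, (opcode, target) in enumerate(program):
        if opcode == "jmp":
            if target is None or target <= pc:
                return False  # backward or malformed jump: reject
    return True

# Accepted: straight-line code with a forward skip.
ok = [("load", None), ("jmp", 3), ("add", None), ("ret", None)]
# Rejected: a backward jump that could loop forever.
bad = [("load", None), ("jmp", 0)]

assert terminates(ok)
assert not terminates(bad)
```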

Anonymous Coward says:

Re: Re: Re:4

The class of programs that can be proven to halt does not include operating systems.

As noted elsewhere, seL4 has been formally verified to “correctly implement the behaviour described in its abstract specification and nothing more”. The “nothing more” rules out any unspecified halting or infinite loops. Granted, that’s only one component of an “operating system”, but there’s no reason to think the work couldn’t be extended. It’s just a shitload of work.

The halting problem is largely irrelevant to security because we rarely need to prove the security of arbitrary and pathological programs supplied by untrusted parties. When that is necessary, we either force non-Turing-completeness (as in Berkeley Packet Filter) or use simple time/cycle bounds (as in WebAssembly and Ethereum). Otherwise, if we happen to write a non-provable program, we rewrite it to make the prover’s job easier.
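A minimal sketch of the metering approach, in Python rather than any real VM. Ethereum’s gas accounting and WASM engines’ fuel mechanisms are far more elaborate, but the principle is just this: termination is guaranteed by a budget, not by analysis.

```python
# Run untrusted code under a step budget and kill it when the
# budget runs out. No attempt to decide halting is made at all.

class OutOfGas(Exception):
    pass

def run(program, gas):
    """Interpret a list of instructions, charging one unit of gas
    each. Each instruction is a callable taking the current pc and
    returning the next pc."""
    pc = 0
    while pc < len(program):
        if gas <= 0:
            raise OutOfGas("budget exhausted; program rejected")
        gas -= 1
        pc = program[pc](pc)
    return "halted normally"

print(run([lambda pc: pc + 1] * 3, gas=10))  # halts normally
loop_forever = [lambda pc: 0]                # jumps back to itself
try:
    run(loop_forever, gas=1000)
except OutOfGas as e:
    print(e)                                 # infinite loop caught by metering
```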

Anonymous Coward says:

Re: Re: Re:3

Even where the halting problem is actually relevant, it can usually be worked around.

No it can’t, because until a proof of a proposition is found, there is no guarantee that a proof generator, or any other search of the infinite space of proofs, will ever find one. Mathematicians are familiar with this form of the halting problem: that is why open problems exist; they do not know whether a proof is possible to find.

Formal proof of program correctness for all real programs is where the halting problem is definitely in play.

Anonymous Coward says:

Re: Re: Re:4

Formal proof of program correctness for all real programs is where the halting problem is definitely in play.

As noted above, we’re not trying to verify an arbitrarily-selected set of existing programs. BPF, WASM, and Ethereum all have workarounds, as do real projects like seL4. It’s not the halting problem per se that’s being worked around; it’s the undecidable cases.

A sufficiently clever person will be able to write a program that sends a verifier off into “infinite space”. We can simply reject such programs. If the verifier takes too long or uses too much memory, reject the program, even if it hasn’t been proven harmful. The goal, after all, is to write useful programs that can be proven non-harmful, and the halting problem doesn’t preclude that.
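As a sketch of that policy (the prover here is a hypothetical stub, not any real tool): search for a proof under a budget, and treat “ran out of budget” exactly the same as “unsafe”.

```python
# Bound the *verifier* rather than the program: if a safety proof
# isn't found within the budget, reject the program even though it
# might in fact be safe. candidate_proofs() and proof_is_valid()
# are purely hypothetical stand-ins for a real prover.

import itertools, time

def candidate_proofs(program):
    # Stand-in for a prover's search space: an endless stream of
    # candidate proof objects.
    return itertools.count()

def proof_is_valid(program, proof):
    # Stand-in for proof checking; always fails here, so verify()
    # demonstrates the timeout path.
    return False

def verify(program, deadline_seconds=0.1):
    """Accept only programs whose safety proof is found in time."""
    start = time.monotonic()
    for proof in candidate_proofs(program):
        if time.monotonic() - start > deadline_seconds:
            return False          # budget blown: reject, proven or not
        if proof_is_valid(program, proof):
            return True
    return False

print(verify("while True: pass"))  # False: rejected without a disproof
```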

Anonymous Coward says:

Re: Re: Re:5

Oh dear, you have just rejected the majority of useful programs: operating systems, word processors, photo editors, video editors, 3D modelling programs, etc. You’re also overlooking that you cannot prove a program correct, only that the analysed aspects of the program are correct. The same applies to software testing: fail to ask the right question, and a flaw remains hidden from analysis and/or testing.

Interestingly, failure to ask the right questions at design time allowed the Tacoma Narrows bridge and the Lockheed Electra to suffer failures that could have been avoided, or that were fixed by minor changes of design detail. (The P-3 Orion was the military version of the Electra and went on to have a long and reliable service history.)

Anonymous Coward says:

Re: Re: Re:

That doesn’t mean we have bridges regularly collapsing just because someone drove a model of car that didn’t exist when the bridge was built across them.

caused by someone entering an invalid value in a flight plan

If people regularly drove cars that were the equivalent to an “invalid value” over bridges, you’d see a lot more of them collapsing. And I bet the architects wouldn’t have much liability for it, either.

TKnarr (profile) says:

Re: Re: Re:2

Oh, people do drive vehicles that are equivalent to an invalid value across bridges all the time. All that means is that someone drove a vehicle too heavy for the rated max load for the bridge across it. But they do so having ignored the signs on the approach to the bridge stating what the max safe load is and prohibiting overweight vehicles, and if the bridge collapses it’ll be the driver of the overweight vehicle held responsible.

That doesn’t happen very often because the people who build bridges have to follow standards for what the minimum max safe load needs to be based on predicted traffic (e.g. if it’s on a road rated for tractor-trailer rigs, then the max safe load needs to accommodate vehicles up to at least 80,000 lbs GVW, and as many as will fit on the length of the bridge nose-to-tail in all lanes). If you try to spec your bridge for less than that, the civil engineer who has to sign off on the plans will refuse to sign off on them, and you don’t get to build your bridge.

The equivalent would be what I do routinely in software, and what the writers of the UK ATC software didn’t do: include a last-ditch error handler that catches all errors the code didn’t prevent or handle and handles them in a way that keeps the system from simply crashing. There’s an analogous rule for input to keep invalid values from getting in in the first place, which the UK ATC software also didn’t follow. If the software engineers knew they’d be held liable for failing to follow well-known rules like those, they’d’ve taken the time to do it right and the UK air traffic system wouldn’t’ve gone dead.
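Concretely, the two rules look roughly like this. A minimal Python sketch, with invented flight-plan fields; this is not the actual UK ATC code, just the shape of the technique:

```python
# Rule 1: validate input at the boundary, so bad values are refused
# before they spread. Rule 2: install a last-ditch handler so that
# any error the code didn't anticipate degrades the service instead
# of killing it. Field names here are invented for the example.

import logging

def parse_flight_plan(raw: dict) -> dict:
    if not raw.get("callsign"):
        raise ValueError("missing callsign")
    if not raw.get("waypoints"):
        raise ValueError("empty waypoint list")
    return raw

def process(queue):
    for raw in queue:
        try:
            plan = parse_flight_plan(raw)
            # ... normal handling of the validated plan ...
        except Exception:
            # Last-ditch handler: log and quarantine the one bad
            # plan; keep the system running for everyone else.
            logging.exception("rejecting malformed plan: %r", raw)

process([{"callsign": "BA123", "waypoints": ["A", "B"]},
         {"callsign": ""}])   # second plan is rejected, no crash
```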

Anonymous Coward says:

Re: Re: Re:3

include a last-ditch error handler that catches all errors

That assumes the error causes an exception, and in most languages an unhandled exception simply halts the program. It also does not prevent exploitation of the error to break into a system, as attacks rely on errors that do not cause an immediate exception.

From the reports that I have seen, the problem was caused by a flight plan including a waypoint far off the route they actually intended to fly. That is, individually valid items of data, but an invalid combination.
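A sketch of what catching that class of error could look like: every value below is a legal coordinate, yet one is absurdly far off the rest of the route, and only a cross-field check notices. The threshold, coordinates, and distance formula are invented for illustration; real flight-plan validation is far more involved.

```python
# "Individually valid but invalid in combination": per-field checks
# pass, so sanity-check the *relationship* between fields instead.

import math

def leg_km(a, b):
    # Crude equirectangular distance; fine for a plausibility check.
    dlat = math.radians(b[0] - a[0])
    dlon = math.radians(b[1] - a[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
    return 6371 * math.hypot(dlat, dlon)

def check_route(waypoints, max_leg_km=2000):
    for a, b in zip(waypoints, waypoints[1:]):
        if leg_km(a, b) > max_leg_km:
            raise ValueError(f"implausible leg {a} -> {b}")

route = [(51.5, -0.1), (50.0, 8.6), (41.0, 28.9), (-33.9, 151.2)]
try:
    check_route(route)
except ValueError as e:
    print(e)   # the last hop: every value legal, the combination absurd
```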

Anonymous Coward says:

Re: Re: Re:3

Oh, people do drive vehicles that are equivalent to an invalid value across bridges all the time. All that means is that someone drove a vehicle too heavy for the rated max load for the bridge across it.

That’s not “all that means”. It’s just a single way the vehicle can be too much for the bridge, and rather conveniently it’s one that you can claim the bridge builders already take into account.

Both valid software users and attackers are WAY more creative than that. If you’re going to try to make the analogy work, you’re going to need to up your creativity also.

TKnarr (profile) says:

Re: Re: Re:4

I suggest you look up the Tacoma Narrows bridge collapse. Software users and attackers are no more creative than users and attackers of anything else. Software is merely one of the very few fields where the engineers aren’t required to work to any particular standards. The Tacoma Narrows bridge is a classic example: nobody anticipated that particular resonance problem, but the bridge still didn’t collapse immediately. It stood up to the effects more than long enough to completely evacuate the span because the engineers who designed it were required to build not just to the minimum spec but to include a safety margin specifically to address unexpected problems and loads. And that failure resulted in new standards being issued that required bridges to be designed and built to a) minimize the effects of wind on the span and b) take the worst-case resonance effects into account when determining the loads on the bridge.

Anonymous Coward says:

Re: Re: Re:5

How do you build safety margins into software, when it has no equivalent of over-specifying components for strength? That is, in mechanical engineering the loads on a component can be calculated, and the component designed to carry 25, 50 or 100% more load, depending on the field of engineering. A software routine either works as expected or does something that damages software on the machine. Forget a bolt and the bridge still stays up; miss a validation check and the software has a vulnerability which causes damage should invalid data be used.

Rocky says:

Re: Re: Re:6

How do you build safety margins into software, when it has no equivalent of over-specifying components for strength? That is, in mechanical engineering the loads on a component can be calculated, and the component designed to carry 25, 50 or 100% more load, depending on the field of engineering. A software routine either works as expected or does something that damages software on the machine.

You add validation for all inputs and robust exception handling. If it’s really critical, you use multiple computers running software from different vendors written to the same specification, and a majority voting system to determine which computer is faulty when they disagree.
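A toy sketch of the voting part in Python, with three functions standing in for independently developed computers; real triple-redundant avionics obviously do this in hardware across separate machines, but the logic is the same:

```python
# N-version majority voting: run independently written
# implementations of the same spec, trust the majority, and flag
# the dissenter as faulty. The vendor functions are invented.

from collections import Counter

def vote(implementations, *args):
    results = [f(*args) for f in implementations]
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: fail safe")
    faulty = [i for i, r in enumerate(results) if r != winner]
    return winner, faulty   # faulty lists the outvoted "computers"

vendor_a = lambda x: x * x
vendor_b = lambda x: x ** 2
vendor_c = lambda x: x * x + 1   # buggy reimplementation of the spec

value, faulty = vote([vendor_a, vendor_b, vendor_c], 12)
print(value, "faulty units:", faulty)   # 144 faulty units: [2]
```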

Plus, you have some very strange notions about software and how it works. Normally, a software routine doesn’t damage software on a machine if it doesn’t work as expected; it’s like you don’t know anything about modern architecture, which has things like privilege levels, process separation, ACLs, memory protection and hypervisors – and all those things usually mean software experiencing a fault will be terminated. Are there edge cases where that isn’t true? Sure – just like there are edge cases where bridges fail when they should keep standing.

Forget a bolt and the bridge still stays up; miss a validation check and the software has a vulnerability which causes damage should invalid data be used.

Use the wrong bolts or mount them incorrectly, and the bridge may stay up or it may not. There are numerous examples of bridges failing because of improperly installed bolts or other elements; just because an engineer has designed in safety margins doesn’t mean he did it correctly. If you want to peruse a very long list of bridges that have failed for reasons related to their construction and design, you can head over to Wikipedia: https://en.wikipedia.org/wiki/List_of_bridge_failures

This whole discussion is silly and seems rooted in a severe misunderstanding of how software works. Properly designed software can handle wrong inputs and errors, just as a properly designed and built bridge can handle what its specification says it should. If either of these systems isn’t properly designed and built, it will fail under a circumstance the specification says it should handle. When it comes to bridges, such a failure tends to lead to loss of life; for software, it tends to lead to inconvenience and swearing while the computer or application is restarted.

Rocky says:

Re: Re: Re:

There’s nothing simple about designing software for a car – they are chock-full of computer systems and electronics; it’s a fucking mess. Old technology, proprietary systems, a harsh electrical environment, byzantine security features and very little commonality in hardware between models.

That cars work as well as they do these days is a fucking miracle.

Anonymous Coward says:

Re: Re: Re:2

And how many recalls are for mechanical issues? Also, car software lives in a relatively stable environment as far as its inputs are concerned, and outside of self-driving and entertainment systems it is relatively trivial, running on simple, well-tested OSes and networks. Track testing also gives very good coverage of engine management and control systems software.

There is a significant difference between developing software of a few hundred to a couple of thousand lines of code, including the OS parts used, and developing a few thousand lines of code on a base of millions of lines of imported code.

Rocky says:

Re: Re: Re:3

And how many recalls are for mechanical issues?

A fair amount, actually. What you don’t see, if you don’t go looking for it, is all the service bulletins that get taken care of when you take your car in for service.

Also, car software lives in a relatively stable environment as far as its inputs are concerned, and outside of self-driving and entertainment systems it is relatively trivial, running on simple, well-tested OSes and networks. Track testing also gives very good coverage of engine management and control systems software.

And ordinary software doesn’t?

There is a significant difference between developing software of a few hundred to a couple of thousand lines of code, including the OS parts used, and developing a few thousand lines of code on a base of millions of lines of imported code.

The software in a modern car consists of 50-100 million lines of code. Once upon a time, when cars first got computerized, they perhaps had a code base in the hundreds or thousands of lines.

Anonymous Coward says:

Re: Re: Re:4

Engine management software (note I excluded self-driving and entertainment system software) does not reach the millions-of-lines mark, as the microcontrollers cannot support large code bases. All inputs will be range checked, as that is how sensor failure is detected. Also, it is not subject to updated libraries and operating systems without a recompile and a rerun of the tests, nor does it have to co-exist with other applications.
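A sketch of what that range checking amounts to, in Python for readability (real engine controllers do this in C on the microcontroller, and the sensor names, ranges, and defaults here are all invented):

```python
# Range checking doubles as failure detection: a reading outside
# the sensor's physical range is treated as a sensor fault, and a
# limp-home default is used rather than feeding garbage into the
# control loop.

SENSOR_RANGES = {
    "coolant_temp_c": (-40.0, 150.0, 90.0),   # (min, max, limp-home default)
    "throttle_pct":   (0.0, 100.0, 10.0),
}

def read_sensor(name, raw_value, faults):
    lo, hi, default = SENSOR_RANGES[name]
    if not (lo <= raw_value <= hi):
        faults.add(name)          # latch a diagnostic trouble code
        return default            # degrade predictably, don't crash
    return raw_value

faults = set()
print(read_sensor("coolant_temp_c", 92.0, faults))    # 92.0
print(read_sensor("coolant_temp_c", -300.0, faults))  # 90.0, fault latched
print("faults:", faults)
```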

Satnav, self-driving or driver-assist, and infotainment often run on a Linux computer, but they can all be turned off without breaking the car. They obviously have a high line count and are more likely to contain serious flaws. They are also written with vaguely defined inputs, such as a destination address, where there is no definite definition of length, number of text lines, or the contents of those lines.

Rocky says:

Re: Re: Re:5

Engine management software (note I excluded self-driving and entertainment system software) does not reach the millions-of-lines mark, as the microcontrollers cannot support large code bases.

If we ignore infotainment and such, do you really believe the software for the engine, gear box, ABS, instrument cluster, passenger safety and possibly the suspension is counted in a few thousand lines of code? Depending on the engine and manufacturer, the image size is 50kB to 250kB these days, for one module. Most of the code is either C or assembler (and apparently some Rust these days). I’ll admit it’s a bit stupid to talk about numbers of source code lines, but for general comparison it’s somewhat relevant in judging software complexity, since we don’t have a handy metric for that.

And in regards to microcontrollers and storage: even 10 years ago they had megabytes of flash storage (lead time for introducing new electronics into cars is ~5-6 years, so a 10-year cutoff seemed reasonable).

Also, it is not subject to updated libraries and operating systems without a recompile and a rerun of the tests, nor does it have to co-exist with other applications.

The automotive industry has somewhat relied on proprietary systems and security through obscurity, but that’s coming to an end. Just watch how people are stealing Kias and Hyundais willy-nilly by jacking into the CAN bus through the headlight HIDs. And that means if those systems aren’t subject to “updated libraries and operating systems”, it’s a bitch to fix.

For those in the know, there have always been hacked control modules that you could jack into cars to steal them – largely because manufacturers couldn’t actually update the system, short of swapping out all the control modules for new ones, to mitigate the security problems.

Anonymous Coward says:

Re: Re: Re:2

guess what car buyers have protecting them? The lemon law

The lemon law is, in my view, a minor thing. Even without that, car producers have legal liability. They can’t waive it by making the buyer click “OK” on some legalese before the car will turn on.

Yeah, “Fight Club” gave a rather cynical view of automaker liability and recalls. The system doesn’t always work well; but, unlike with software, a system at least exists. Even Tesla, who seem to love living dangerously (and passing that danger on to consumers), are occasionally forced by regulators to fix stuff when enough people die.

Anonymous Coward says:

Not that leak...

Auditor: Your system has a hard-coded backdoor that leaks data
Dev: Oh that, MI6 insisted we put that in
Auditor: And this one?
Dev: Government mandated intercept.
Auditor: Which one?
Dev: I’m legally prohibited from telling you.
Auditor: This one?
Dev: Needed to facilitate client-side CSAM filter
Auditor: And…
Dev: Let me save you some time. We leak data all over the place because we’re legally compelled to.

Anonymous Coward says:

Consumer software should have the same rules as physical consumer products. If your extra-small baby monitor is a choking hazard you’re liable; if your baby monitoring software allows strangers to view your customers’ houses you should also be liable. You can’t simply put a disclaimer on the physical product that says you aren’t liable, and I see no reason why software should be different.

Mamba (profile) says:

I get what you’re saying, I just don’t care for the excuses. In the aftermath of the Hyatt Regency walkway collapse, the engineers tried to claim they were not responsible for killing 114 people when they approved a shit design from the contractor without evaluating it… because that’s what the industry did. It didn’t work out well for them.

I’m also well aware of software supply chain vulnerabilities. It’s why the last company I was at licensed, audited, and trimmed a realtime OS for their new embedded platform down to 2,000 lines. Previously they had licensed a protocol stack, and it sucked, so they had to clean-room it from scratch and offer free updates. A proof of concept for a new federal regulation is causing the industry to start hunting down the source of every single licensed, purchased, or open-source bit of code used in Critical Energy Infrastructure products.

I think you are just incredibly uninformed about the depth and breadth of software out there. It’s not just OSes, DBs, and word processors. I’m talking shit like industrial control systems with no access control. At all. On the network? Complete access.

Something like 80% of all utility applications have bad SSL/TLS configurations.
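For concreteness, “bad configuration” usually means things like still accepting ancient protocol versions or unverified certificates. Here’s a minimal client-side probe using Python’s standard library; the hostname is a placeholder, and a real assessment tool checks much more (cipher suites, certificate chains, renegotiation, and so on):

```python
# Probe a server's TLS setup: verify the certificate (the default
# context does this) and refuse anything older than TLS 1.2.

import socket, ssl

def probe(host, port=443):
    ctx = ssl.create_default_context()            # verifies certs by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # Raises ssl.SSLError before reaching here if the
            # handshake or certificate validation fails.
            return tls.version(), tls.getpeercert()["subject"]

print(probe("example.com"))   # e.g. ('TLSv1.3', ...) on a sane server
```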

Just because shit attention to security has been accepted for decades doesn’t mean it will be accepted going forward.

ECA (profile) says:

Can’t happen

Expecting the corps to protect THEIR DATA??
AFTER THE FACT? WHY?
They want to have fool-proof ID, and the only way to do that is to PROVE that LOST data will ruin the system they have.
If they have THEIR way, they want everyone to have PROVEN ID to show WHO they are. We would all be facially scanned, DNA-sampled, CHIPPED and tattooed.

To think that the credit card agencies have NOT reported being hacked? HOW and WHY NOT? Everyone else has been. How many banks have been hacked in the past 20 years? Once or twice?

MOST of us know how complicated, and how EASY, BASIC protection IS.
#1. NO DIRECT internet connection to the main system. You can go to the dummy system and ask for data, but you are monitored, and have no access to the MAIN DATA.
#2. IF the STATE thinks your grade-school internet site is OK, THEY ARE WRONG.
There is lots more to deal with, but that should help lots.

Anonymous Coward says:

While with this strategy document itself the government is not yet prescribing a specific way to code software, it contemplates that the government someday could. And it does so apparently without recognizing how significantly shaping it is for the government to have the ability to make such demands – and not necessarily for the better.

The Federal Gov’t can, and does, make demands of companies wishing to do business with said government. From that usually springs an industry ethos that says “why should we develop to one standard for the government, and another for everyone else? That’s too expensive. Since the government is where we want to make money, it’s cheaper to just do the same for everyone else.” One example of this is the POSIX standard, which allows many of us to use almost any computer in a capable fashion, regardless of manufacturer.

… dubious constitutionality of it being able to make such demands.

I’m sure you’ve taken note that each and every State has incorporated the U.C.C. into their laws, despite the mandate of Art. I, Sec. 10, Cl. 1 (No State shall pass any law… impairing the Obligation of Contracts). The Feds “encouraged” States to adopt the U.C.C. as a way to exercise police power over unfair contracts, and the rest is history, as they say.

Essentially, the term “public safety and health” (another guise of policing powers) empowers governments at nearly every level to get around that constitutional clause. The same thinking will work here as well to get past the Constitution, if for no other reason than the Safety and Security of The Public and The Nation.

ke9tv (profile) says:

Re:

This concept has made me wonder: what if I give my code away, or someone steals it? The language appears to address developers, not vendors. Do I need to work for a huge government contractor, with a QA department, a legal department, and about a zillion compliance officers verifying this and that, before I can write a line of code? It appears that under this regime, yes, if someone could possibly misappropriate that line and cause harm with it.

Alex says:

Until the permanent government decides to retake our sovereignty from the rest of the world, software developers have no reason to actually give a shit.

All CPUs are backdoored by Israel’s intelligence services.
All internet services are backdoored by running on AWS/Azure/GCP, all of which cooperate with American and Five Eyes agencies (all of whom share intelligence with Israel’s security apparatus).

The internet is unsafe and insecure by design.

What the US is actually saying is that its stranglehold on the planet is slipping and it needs help holding on for a few more years, until the parasites running things figure out which country will be its next host, now that Russia and China are proving to be “problematic”.

Eli Array Minkoff says:

Re: What "parasites"

You mentioned “the parasites running things”, and presented Israeli intelligence services as being at the center of your concerns.

News flash: every powerful country tries to further its own power and control, and even then, Israel is more akin to an American proxy than a global power in its own right. Its intelligence services are not pulling the strings.

One can and should criticize Israel for many things, but the Israeli government is not secretly pulling the strings. While there are privacy and security concerns about things like the Intel Management Engine, there is no reason to believe there are backdoors specifically placed by Israeli intelligence services. If anything, such a backdoor would more likely have been placed by the NSA, but frankly, even that is unlikely to have gone undetected given the amount of scrutiny the Management Engine has (rightly) gotten and continues to receive.

So tell me, who are the “parasites running things”?

Rich (profile) says:

I read this differently

This reads more like the government wants to pass some liability laws to make it possible to do what this site is always talking about: holding some of these data collection whores liable for their shitty security practices, or their total disregard for any type of security. The twist being that if you actually adhere to the proposed set of guidelines the government believes are best practices for secure data handling, then you would be shielded from liability.

In other words, they want you to follow their programming standards, and if you do, you can be protected from liability if your software gets hacked. That actually seems pretty reasonable, if it is the correct interpretation.
