The Big Question: When Did The NSA Know About Heartbleed?

from the inquiring-minds... dept

It's not too surprising that one of the first questions many people have been asking about the Heartbleed vulnerability in OpenSSL is whether or not it was a backdoor placed there by intelligence agencies (or other malicious parties). And, even if that wasn't the case, a separate question is whether or not intelligence agencies found the bug earlier and have been exploiting it. So far, the evidence is inconclusive at best -- and part of the problem is that, in many cases, it would be impossible to go back and figure it out. The guy who introduced the flaw, Robin Seggelmann, seems rather embarrassed about the whole thing but insists it was an honest mistake:
Mr Seggelmann, of Munster in Germany, said the bug which introduced the flaw was "unfortunately" missed by him and a reviewer when it was introduced into the open source OpenSSL encryption protocol over two years ago.

"I was working on improving OpenSSL and submitted numerous bug fixes and added new features," he said.

"In one of the new features, unfortunately, I missed validating a variable containing a length."

After he submitted the code, a reviewer "apparently also didn’t notice the missing validation", Mr Seggelmann said, "so the error made its way from the development branch into the released version." Logs show that reviewer was Dr Stephen Henson.

Mr Seggelmann said the error he introduced was "quite trivial", but acknowledged that its impact was "severe".
Later in that same interview, he insists he has no association with intelligence agencies, and also notes that it is "entirely possible" that intelligence agencies had discovered the bug and had made use of it.

Another oddity in all of this is that, even though the flaw itself was introduced two years ago, two separate individuals appear to have discovered it on the exact same day. Vocativ, which has a great behind-the-scenes story on the discovery by Codenomicon, mentions the following in passing:
Unbeknownst to Chartier, a little-known security researcher at Google, Neel Mehta, had discovered and reported the OpenSSL bug on the same day. Considering the bug had actually existed since March 2012, the odds of the two research teams, working independently, finding and reporting the bug at the same time was highly surprising.
Highly surprising. But not necessarily indicative of anything. It could be a crazy coincidence. Kim Zetter, over at Wired, explores the "did the NSA know about Heartbleed" angle, and accurately points out that while the bug is catastrophic in many ways, what it's not good for is targeting specific accounts. The whole issue with Heartbleed is that it "bleeds" chunks of memory that are on the server. It's effectively a giant crapshoot as to what you get when you exploit it. Yes, it bleeds all sorts of things, including usernames, passwords, private keys, credit card numbers and the like -- but you never quite know what you'll get, which makes it potentially less useful for intelligence agencies. As that Wired article notes, at best, using the Heartbleed exploit would be "very inefficient" for the NSA.

But that doesn't mean there aren't reasons to be fairly concerned. Peter Eckersley, over at EFF, has tracked down at least one potentially scary example that may very well be someone exploiting Heartbleed back in November of last year. It's not definitive, but it is worth exploring further.

The second log seems much more troubling. We have spoken to Ars Technica's second source, Terrence Koeman, who reports finding some inbound packets, immediately following the setup and termination of a normal handshake, containing another Client Hello message followed by the TCP payload bytes 18 03 02 00 03 01 40 00 in ingress packet logs from November 2013. These bytes are a TLS Heartbeat with contradictory length fields, and are the same as those in the widely circulated proof-of-concept exploit.

Koeman's logs had been stored on magnetic tape in a vault. The source IP addresses for the attack were 193.104.110.12 and 193.104.110.20. Interestingly, those two IP addresses appear to be part of a larger botnet that has been systematically attempting to record most or all of the conversations on Freenode and a number of other IRC networks. This is an activity that makes a little more sense for intelligence agencies than for commercial or lifestyle malware developers.

EFF is asking people to try to replicate Koeman's findings, while also looking for any other possible evidence of Heartbleed exploits being used in the wild. As it stands now, there doesn't seem to be any conclusive evidence that it was used -- but that doesn't mean it wasn't being used. After all, it's been known that the NSA has a specific program designed to subvert SSL, so there's a decent chance that someone in the NSA could have discovered this bug earlier, and rather than doing its job and helping to protect the security of the internet, chose to use it to its own advantage first.

Reader Comments

  • sehlat (profile), Apr 10th, 2014 @ 4:17pm

    Heartbleed approximates Box of Chocolates

    It's effectively a giant crapshoot as to what you get when you exploit it.


    Not as tasty, though.

     

  • Anonymous Coward, Apr 10th, 2014 @ 4:21pm

    Poor guy

I bet Robin Seggelmann is not feeling too great the last few days.

    Hopefully he's not directly subjected to some of the ridiculous vitriol I've seen on some sites regarding this bug.

The fact is, FOSS is a community effort. The concept isn't that every individual developer writes bug-free code, but that enough people are reviewing and constantly scrutinizing it that bugs are found and eradicated quicker than they are in closed-source software.

    Therefore, everyone who develops and/or uses OpenSSL is partially to blame here. Security-related software is only as good as the weakest link, and it's the job of all involved to make sure that those links are located and strengthened.

    I've read some pretty damning stuff about OpenSSL's development practices in the last couple days - and hopefully they've taken some of this to heart, and will be reflecting on how this occurred, and how it could have been prevented. This is how you turn mistakes into opportunities - opportunities to prevent such things from happening again.

    So, I just wanted to let Seggelmann know (I'm sure he'll never read this comment) - I feel ya bud, this is the shits. I've been there, and seen my "handiwork" destroy data, or cause failures. It's a shitty feeling, but it's part of life, and it will pass. Hang in there buddy.

    On the other hand, if you did this knowingly, burn in hell ;)

     

  • Mason Wheeler (profile), Apr 10th, 2014 @ 4:27pm

    A better question: when did the programming community know about the problem?

    The answer? Over a quarter-century ago. In 1988, the Morris Worm brought the Internet to its knees, taking down about 10% of all existing servers at the time. It got in through a buffer exploit in a piece of system software written in C.

    That should have put the programming community on notice. The C language should have been dead by 1990, because this class of security hole (buffer exploits) is inherent in the design of the language and can't be fixed. Some people say "you just have to be careful and get it right," but to err is human, and it's an easy mistake to make. This means that the language is at odds with reality itself. Something has to give, and it's not going to be human nature.

    They say those who don't learn from history are doomed to repeat it. Well, here we have it again, a major buffer exploit in a piece of software written in C, affecting between 10% (there's that figure again) and 66% of all servers on the Internet, depending on which estimate you listen to.

    We know better than this. We have known better than this since before the Morris Worm ever happened, and indeed for longer than most people reading this post have been alive. I quote from Tony Hoare, one of the great pioneers in computer science, talking in 1980 about work he did in 1960:

    A consequence of this principle [designing a language with checks against buffer overruns built in] is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interest of efficiency on production runs. Unanimously, they urged us not to—they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.


    Maybe now that it's happened again we'll finally wise up and give this toxic language its long-overdue funeral?

     

    • Wesha (profile), Apr 10th, 2014 @ 4:36pm

      Re:

Actually, there *is* a way to solve this issue that does not require rewriting any existing code (only recompiling), and I proposed it about ten years ago: make compilers use SEPARATE stacks for data and return addresses.

       

      • Anonymous Coward, Apr 10th, 2014 @ 5:18pm

        Re: Re:

Wesha, while your proposed fix would deter a lot of code-injection attacks, it would do nothing for Heartbleed. The Heartbleed bug is a pure "using untrustworthy data" attack. It's a close cousin of a SQL injection attack, wherein a piece of user-supplied data is used directly to control some functional aspect of the code. In Heartbleed, it's how much memory to return in the response. In a SQL injection, it's the SQL to be executed on the DB.

         

        • Anonymous Coward, Apr 11th, 2014 @ 12:33am

          Re: Re: Re:

          Could you not simply write a method to effectively 'zero out' the memory stack before allocation to eradicate Heartbleed?

           

          • Anonymous Coward, Apr 11th, 2014 @ 4:39am

            Re: Re: Re: Re:

            The bug is caused by no bounds checking on the length parameter that is sent in the heartbeat packet.

It causes the code to read memory areas that it should not. Zeroing the requested area would cause a seg-fault, so I guess that would eradicate Heartbleed, but at the expense of server stability.

             

            • Anonymous Coward, Apr 11th, 2014 @ 5:52am

              Re: Re: Re: Re: Re:

              Getting a SIGSEGV instead of leaking information is far more preferable in my eyes. It signals a problem immediately which can be investigated. It's unfortunate that this problem was so silent.

               

              • Mason Wheeler (profile), Apr 11th, 2014 @ 11:22am

                Re: Re: Re: Re: Re: Re:

                Exactly. It's still a C buffer overrun exploit; it's just that this involves buffer reading rather than buffer writing.

                 

    • Anonymous Coward, Apr 11th, 2014 @ 2:35am

      Re:

      Meh. C has served us well for decades, while other languages that were going to take over the world have languished and died. (I still remember having argument after argument in the mid-80's with a clueless moron who insisted that the entire world would be running Ada by 1990.)

      The problem is not the language. The problem is programming practices. I've seen the same mistake made in Python and Java and C++ and Ruby and Perl and Javascript and Fortran. I've also seen other well-known mistakes made across all those languages, sometimes because I was the one making them.

Switching languages is not the cure-all for programming problems, although advocates of the flavor-of-the-month often claim it is. Careful coding and peer review -- LOTS of peer review -- is the best we can do. This, of course, is why open source is inherently superior to closed source, which cannot be independently peer reviewed. But that only works if people actually do it, which, in this case, not enough people did.

      Given the criticality of OpenSSL to so many operations, this would be a good time for a lot of the big players to pony up $50K or a developer's time for six months in a collaborative effort to audit all the code and identify the other bugs that are no doubt lurking. (Note that this is probably less than they've spent this week dealing with the fallout.)

       

      • Anonymous Coward, Apr 11th, 2014 @ 4:40am

        Re: Re:

C is very powerful, and you know what they say: "With great power comes great responsibility."

        As you have said no amount of 'programming language change' can stop human errors.

         

        • Mason Wheeler (profile), Apr 11th, 2014 @ 11:24am

          Re: Re: Re:

          As you have said no amount of 'programming language change' can stop human errors.


          Yes, but it can mitigate the damage they do. Tony Hoare knew how to make this sort of thing impossible waaaay back in 1960: design the language so that if someone tries to go outside the bounds of an array, the program crashes instead.

           

      • Anonymous Coward, Apr 11th, 2014 @ 7:07am

        Re: Re:

        ...and pay for software quality assurance

         

    • Anonymous Coward, Apr 11th, 2014 @ 6:02am

      Re:

      Personally I don't see C as being the main culprit. That's not to say that C isn't a part of the landscape, but I think mono-culture is the larger culprit. I read some stat that Heartbleed affected roughly 17% of sites on the internet. That's a pretty significant figure for a single bug. And it's all because of a single SSL implementation.

       

  • Anonymous Coward, Apr 10th, 2014 @ 4:33pm

The NSA is now the scapegoat anytime a flaw or bug is found. The eventual blowback from all this unchecked spying will come back to haunt them again and again. It is far from over.

    They have no one to blame but themselves.

     

  • Wesha (profile), Apr 10th, 2014 @ 4:34pm

    This is complete BS.

    If you are a C programmer, you learn, about five years into your career, to never, /never/, NEVER forget to check the bounds.

    strcmp burn in hell; strncmp rules and the like.

    I place my bets on malice. He was made an offer he could not refuse.

P.S. The guy is now toast. He will have a really, REALLY hard time finding a new job now.

     

    • ChurchHatesTucker (profile), Apr 10th, 2014 @ 6:02pm

      Re:

Meh. If it was malice, why all the drama about Lavabit's SSL keys?

       

    • Anonymous Coward, Apr 10th, 2014 @ 6:24pm

      Re:

      Buffer overflows are ridiculously common. The NSA's reach would have to be unimaginable to be behind all of them.

       

    • John Fenderson (profile), Apr 10th, 2014 @ 9:51pm

      Re:

      "If you are a C programmer, you learn, about five years into your career, to never, /never/, NEVER forget to check the bounds."

      True. However, it's also a very common stupid mistake. I've seen a LOT of both commercial and open source code over the decades, including mainstream, trusted commercial software from major companies. I've seen this problem in almost every source set somewhere. Some is worse than others.

      Given that, malice would be the last thing that I suspect. Carelessness would be the first.

       

  • Anonymous Coward, Apr 10th, 2014 @ 4:42pm

    The denial means nothing

    Either the bug was added on purpose, or it was added by mistake.

    If it was added by mistake, the author will deny doing it on purpose.

    If it was added on purpose, the author will deny doing it on purpose.

    The author of the code denying adding it on purpose gives us zero information. The denial would be exactly the same in either case.

     

    • Socrates, Apr 10th, 2014 @ 7:59pm

Compromised development environments?

      And if it was maliciously added by someone else, the author will deny doing it on purpose.

      Assuming someone had gained access to your development system, would you detect if a bug were truly your own or injected by someone that closely mimics your style?

      With the vast effort to compromise the foundation of security this question is more relevant now than ever.

       

      • Anonymous Coward, Apr 11th, 2014 @ 4:22am

Re: Compromised development environments?

        No, that would lead to a different answer.

        If it was maliciously added by someone else, the author will say "that's not my code!"

        So I guess I was wrong, the denial does give us some new information. It confirms the code's authorship.

         

    • John Hinsdale, Apr 11th, 2014 @ 9:27am

      Re: The denial means nothing

No, you left out the possibility of "no comment".

      If it was added by mistake, the author will deny doing it on purpose

      If it was added on purpose, the author will either say "no comment" or deny doing it on purpose.

If the author denies doing it on purpose, it makes it slightly more probable that the author did not do it on purpose.

       

  • william (profile), Apr 10th, 2014 @ 4:44pm

Some of you may know that the Canada Revenue Agency (basically the Canadian IRS) has shut down all online submission and query services for filing 2014 tax returns.

    What you may not know is that during an interview today, the spokesperson, with a bit of hesitation, said that they "had to shut down because they cannot be sure if any illegal organization (slight pause) and intelligence agency are able to get to the private information of Canadian citizens..."

    I am just unsure whether to laugh or cry when I hear that sentence.

     

  • Anonymous Anonymous Coward, Apr 10th, 2014 @ 5:18pm

    Reading Code

Robert Glass makes a salient point (see Facts and Fallacies of Software Engineering) about how software languages are learned. All other languages are taught by having the learner read them first, and then speak and/or write them. With software languages, the reading part is rarely taught. Reading something one never learned to read can be really difficult. From Glass's research (the book is from 2002), reading software finds bugs. Bugs congregate: people make the same mistakes often. Look for a mistake a developer often makes, then search near that mistake in the code.

    I do not know how much of the above might apply to this situation, but it might.

     

    • Anonymous Coward, Apr 11th, 2014 @ 6:13am

      Re: Reading Code

      As a developer I'm not sure I agree. Regardless of the language, programming represents state logic. You could argue that more time should be spent on logic and especially state transformation logic, but I don't consider that "reading". In this case, the error was due to a missing length validation. That has less to do with the specific language and more to do with a failure to realize the consequences of a particular action (applying untrusted input without prior validation). And this is a common error that affects those both new and veteran to the profession. The fix is to change the reasoning process of the developer so that secure practices are like muscle memory. This is where we are lacking, in my mind.

       

      • John Fenderson (profile), Apr 11th, 2014 @ 8:26am

        Re: Re: Reading Code

Yes, this. Programming languages are not like human languages in most ways. They are more like mathematical languages. Don't let the word "language" confuse things.

        "The fix is to change the reasoning process of the developer so that secure practices are like muscle memory."

        Spot on. In the old days, programmers used to speak of using C "idioms" -- common constructs that were memorized to perform common tasks. Using idioms allowed good programming practices to become so habitual that they felt instinctual.

        I've noticed a trend in the newer generations of programmers. The ones who write in C or C++ tend to be more careless about their use of the language. I believe that it's because they cut their teeth on languages that hold the programmer's hand more (Java, etc.) and never developed the basic, good, paranoid practices that are essential when using the more powerful languages that let you shoot yourself in the foot, such as C/C++.

         

  • John Hinsdale, Apr 10th, 2014 @ 5:21pm

    Seggelmann is the author of the heartbeat RFC

Seggelmann is the author of the heartbeat RFC(!) He is hardly an obscure guy. The idea that he is working for the NSA or some agency is ludicrous. See:
    http://tools.ietf.org/html/rfc6520

     

  • jimb (profile), Apr 10th, 2014 @ 8:25pm

I think the NSA knew about this flaw from its inception, as any creator would know. The NSA, of course, denies this. There is no proof, and probably no way to prove it, as would be the case with any other security backdoor found, or "an unknown unknown". I believe the NSA, just as I believe them about everything else they have said in response to the Snowden leaked documents. They wouldn't lie to the American public, would they? So all these accusations, they're lies, all lies, except what Clapper and Alexander tell us.

     

  • Alana (profile), Apr 10th, 2014 @ 8:52pm

    KNOW about it? Do you not think they were the ones who PUT it there? They did admit to fuckin' around with encryption and installing backdoors, what if Heartbleed was one of them?

     

    • Mike Masnick (profile), Apr 10th, 2014 @ 11:45pm

      Re:

      KNOW about it? Do you not think they were the ones who PUT it there?

      We discuss that possibility in the post. It seems unlikely.

       

    • Anonymous Coward, Apr 11th, 2014 @ 12:38am

      Re:

See, this is one of the things the Snowden revelations have done: they have forced the aware to mistrust every technological error as possibly malicious, rather than accept the more plausible explanation of a mistake.

      I'm almost positive that the NSA knew about it, but what's far more interesting is that the coder in this case is saying all the right things; that he screwed up, and that, whilst the fix was trivial, the consequences were not.

       

      • Anonymous Coward, Apr 11th, 2014 @ 6:18am

        Re: Re:

        We can never know, but here are a few things we do know:

        1) OpenSSL is an open source project with a public commit history.

        2) The NSA employs people that have a skill set that may allow them to monitor certain important development projects
        looking for potential vulnerabilities.

        3) The NSA is not interested in disclosing vulnerabilities.

         

  • FM Hilton, Apr 10th, 2014 @ 9:32pm

    Furthermore

If the NSA is capable of 'compelling' software companies to open up backdoors in their programming, it is not a far cry for them to use this 'bug' against the rest of us.

Sure, they say they had nothing to do with it -- we can't prove that they did, but it sure is interesting that the NSA is assumed to be associated with it, and that we expect them to be.

    After all, there isn't anything they aren't capable of-that much we all know.

    It's just not provable yet.

On the other hand, how come it took so damned long to find it, anyway? From what I've read, it's been around for 2 freaking years -- you'd think that someone would have caught this long before now and corrected it.

    Which makes it a cascading mistake with enormous consequences. I don't feel sorry for the programmer. He is supposed to be able to do his job correctly, and check the code before it's released along with the others he's working with. Nobody caught it and now we're paying the price for one 'mistake'.

    Apologies don't cut it.

     

  • Anonymous Coward, Apr 11th, 2014 @ 12:29am

    Even just knowing about it would be as bad as having put it in there themselves.

    Remember, you are paying these people to protect you.

    If they knew about a security hole as bad as this one, and decided to make use of it instead of warning you all, it's criminal negligence at the very least, possibly bordering on direct treason.

     

    • John Fenderson (profile), Apr 11th, 2014 @ 8:28am

      Re:

      "Remember, you are paying these people to protect you."

      At this point, I think it's very clear that we are not, in fact, paying them to protect us. We're paying them to spy on us (and everybody else).

       

  • My Name Here, Apr 11th, 2014 @ 12:55am

    Clearly

Clearly the NSA, TSA, and other letter agencies are responsible for every bad thing on the planet. Get a flat tire? The NSA did it so they can get their covert garage guys to install tracking devices on your car. Stub your toe? It's that tracking band-aid that will get you every time.

    I can't wait to see the new Techdirt logo, the one all covered in tin foil. It's getting weird in here!

     

    • Anonymous Coward, Apr 11th, 2014 @ 3:25am

      Re: Clearly

      Given the revelations from the Snowden docs, can you really blame people for thinking the worst? Because everything that has been revealed has shown that the NSA has directly engaged in Acts of War under Geneva - and yet, Snowden and everyone else are either:

      a) traitorous scum; or
      b) swivel-eyed loons who know not what they are talking about.

       

    • John Fenderson (profile), Apr 11th, 2014 @ 8:29am

      Re: Clearly

      This is a hilarious comment, given that the post itself concludes (correctly, in my opinion) that the NSA is likely not behind this.

       

  • Anonymous Coward, Apr 11th, 2014 @ 1:24am

    while the bug is catastrophic in many ways, what it's not good for is targeting specific accounts
    Wait, isn't the main problem with the NSA that they aren't targeting specific people, and are in fact scooping up as much data as possible on as many people as they can? Breaking into random Yahoo accounts sounds right up their alley.

     

    • John Fenderson (profile), Apr 11th, 2014 @ 8:30am

      Re:

      I think that sentence was poorly worded. What I think it meant to say was "what it's not good for is targeting specific information."

       

  • Weird coincidence, Apr 11th, 2014 @ 1:52am

    A fact

    The NSA and the NATO cyber spies in Estonia are customers of Codenomicon.

     

  • Anonymous Coward, Apr 11th, 2014 @ 7:30pm

    Remember how we found out that the NSA was holding onto "suspicious data" indefinitely, and that encrypted data was inherently suspicious? With this bug, the NSA can get the private keys to the encrypted data they've stored for years. Now all they have to do is decrypt that data. Regardless of whether or not they knew about this bug before this week, they'd already ensured they could make great use of it.
    https://www.schneier.com/blog/archives/2014/04/heartbleed.html

     

  • Anonymous Coward, Apr 13th, 2014 @ 10:24pm

977linux... A Ph.D. computer science major from Deutschland, who has coded bug fixes for OpenSSL in the past, drafted the RFC 6520 heartbeat and cleverly arranged for the heartbeat payload to sit near the SSL private key, rendering it visible via a classic buffer overrun of the kind ubiquitous among junior coders. Given all that, one has to ask: how can the coding for RFC 6520 in OpenSSL be treated like a first-semester programming exercise of adding two numbers and printing the sum? The answer is simple: it was designed to behave precisely as it manifested, a sneaky attempt to backdoor all SSL traffic and make it visible to the Five Eyes monster. The next question: when was Seggelmann ordered by the Bundesnachrichtendienst, which was ordered by the NSA, to introduce RFC 6520 and the code implementing it -- which is useless anyway?

     
