Senate Given The Go-Ahead To Use Encrypted Messaging App Signal

from the feinstein,-burr-will-continue-to-use-AOL-chatrooms dept

Certain senators have repeatedly pushed for encryption bans or encryption backdoors, sacrificing personal security for national security in a move that will ultimately result in less of both. Former FBI Director James Comey’s incessant beating of his “Going Dark” drum didn’t help. Several legislators managed to get sucked in by his narrative of thousands of unsearched phones presumably being tied to thousands of unsolved crimes and free-roaming criminals.

It will be interesting to see whether the anti-encryption narratives advanced by Sens. Feinstein and Burr in particular (though others are equally sympathetic) continue now that senators can officially begin using an encrypted messaging system for their own communications.

Without any fanfare, the Senate Sergeant at Arms recently told Senate staffers that Signal, widely considered by security researchers and experts to be the most secure encrypted messaging app, has been approved for use.

The news was revealed in a letter Tuesday by Sen. Ron Wyden (D-OR), a staunch privacy and encryption advocate, who recognized the effort to allow the encrypted messaging app as one of many “important defensive cybersecurity” measures introduced in the chamber.

ZDNet has learned the policy change went into effect in March.

If this isn’t the end of CryptoWar 2.0, then it’s at least a significant ceasefire. Senators are going to find it very hard to argue against encrypted communications when they’re allowed to use encrypted messaging apps. It’s not that legislators are above hypocrisy. It’s just that they usually allow a certain amount of time to pass before they commence openly-hypocritical activity.

This doesn’t mean the rest of the government is allowed to use encrypted chat apps for official communications. Federal agencies fall under a different set of rules — ones that provide for more comprehensive retention of communications under FOIA law. Congressional communications, however, generally can’t be FOIA’ed. It usually takes a backdoor search at federal agencies to cut these loose. So, members of Congress using an encrypted chat app with self-destructing messages may seem like the perfect way to avoid transparency, but it’s the law itself that provides most of the opacity.

If encryption’s good for the Senate, it’s good for the public. There’s no other way to spin this. Even Trump’s pro-law enforcement enthusiasm is unlikely to be enough to sell Congress on encryption backdoors. With this power in the palm of their hands, they’re more apt to see the benefits of leaving encryption un-fucked with.

Comments on “Senate Given The Go-Ahead To Use Encrypted Messaging App Signal”

Anonymous Coward says:

Interesting article. Having done a little work in the area of encryption, I have always been suspicious of the government’s role, even in things like DES and AES. After DES was “broken”, the group that promoted it basically said “yeah we knew that”, but much after the fact. That is, they knew it was insecure, but promoted it as secure. Why do people believe that AES is secure? The argument is that IF all the details of an encryption scheme (symbol size, Galois Field primitive polynomial definition, encoding group size, substitution table non-linear values, etc.) are publicly disclosed, AND no one publicly shows how to break it, THEN it is secure. I am skeptical. They said the same about DES, then basically laughed about it. Personally, I think it likely that the government ALREADY HAS the backdoors for ALL public encryption standards. Just my opinion, but it is backed up by pretty solid history in this area.

Anonymous Coward says:

Re: Re: Re: Why do people believe that AES is secure?

Well, I would just point you to the history of DES. Take a look. In my recollection, the government influenced the design to be weaker than it had to be. Then, years later, it was proven to be too weak. How did that happen and for what purpose? I got the impression they knew exactly what they were doing all along, years before anyone else did. Could it happen again? That’s my point.

Anonymous Coward says:

Re: Re: Re:4 Why do people believe that AES is secure?

Well to be fair, maybe I could make my point more clearly.

The DES standard was proposed by IBM with a large key length, but it was shortened by the government (at least that’s the rumor). IBM had (secretly, I heard) already developed both linear and differential cryptanalysis, which were still unknown to the public. Doesn’t it seem pretty clear that the government weakened the key in order to be able to secretly crack DES, and that was easier with a shorter key? It seems to follow in my way of thinking.

Do you think the government really wants a public standard that foreign governments can use and they can’t break? That seems unlikely. It seems more likely that both DES and AES were carefully selected and positioned to be acceptable to the “masses” while already cracked by the NSA. It happened before, and I would expect that it will happen again and again.

Anonymous Coward says:

Re: Re: Re: Why do people believe that AES is secure?

uh oh, kidult alert…
the gummint doesnt have to have the smarts to do anything, all they have to do is extort, bribe, or otherwise threaten the companies or individuals involved to put in the backdoors or else that kiddie pron The They planted on their computer is found…
kampers, you naive propaganda victims need to get this straight : the puppetmasters are eee-vil, they DO SHIT like that ALL THE TIME, it shapes your world, but you cant believe it… the psychopaths count on your ignorance…

Anonymous Coward says:

Re: Re: Re:2 Why do people believe that AES is secure?

AES is open source, and has an international community behind it. So if, as with the elliptic curves, they can find some subtle weakness to exploit, they may have partial success in weakening the system.
A deliberate backdoor, on the other hand, is going to be almost impossible to get in, as they cannot find out who all the expert watchers of the development are, never mind subvert them. Besides, good luck with subverting someone like Moxie Marlinspike.

Anonymous Coward says:

Re: Re: Re:3 Why do people believe that AES is secure?

But you would agree, wouldn’t you, that in the area of cryptography, very surprising things can happen. And people who are truly gifted in this area are few. So the real question about any open source encryption standard is when will it be broken, not if. I’ll bet your friend Maxie would vouch for that. As another example, the Quantum stuff is interesting in this area, according to the NSA. How do you assess that risk to open source encryption? I would still say “naked” open source encryption is a risky proposition at best, and may well lead to automated recovery of your private data in the future. Private encryption is not subject to automated recovery; you need a Maxie. Not many of them. 🙂

Anonymous Coward says:

Re: Re: Re:4 Why do people believe that AES is secure?

How to totally miss the point. Moxie Marlinspike is a real cryptographer and security expert, and is the man behind the Signal protocol and Open Whisper Systems, which are open source.

As to quantum computers, all the experts know that they are a theoretical threat, but, other than the one time pad, they are also required to build systems that will withstand their use to attack systems.

Anonymous Coward says:

Re: Re: Re:5 Why do people believe that AES is secure?

Perhaps I did not make my point clearly. My point was that “open source” encryption systems (especially widely used ones) can be broken once, and then there is an automated way to uncover the messages of everyone who uses them. Open Source AES, for example, one break, and everyone is compromised. Closed source encryption systems, especially UNIQUE closed source systems (closed to the attacker, not the user) do not suffer this vulnerability. Imagine, for example, an automated encryption service that produces a private encryption system. Pay some small fee, and bango, you get a UNIQUE encryption JUST FOR YOU. This is not hard, and can easily be layered ON TOP OF (not INSTEAD OF) an existing system, like AES. So, you get all the benefits of the public review, but none of the weakness of a system used by ANYONE else. I think even Maxie would say this is totally feasible, maybe you could ask him for me. How about it, Maxie? Custom Encryption, layered ON TOP OF open source/public standards. Easy peasy, right?

Anonymous Coward says:

Re: Re: Re:6 Why do people believe that AES is secure?

The thing that really eludes me is how this false narrative of Open Source Encryption being MORE secure was ever accepted by anybody outside of the Open Source community. I mean, I understand that you guys reinforce each other with your (nearly) religious belief about the goodness of Open Source. There are some good things about Open Source. But SECURITY is NOT one of them!

Fact: IBM and the US Government have been decades ahead of ANYTHING ever done by the Open Source community in the area of encryption. How do we know that? Documented history. DES. Walt Tuchman. IBM. Common sense.

Fact: The US Government is serving the interests of the US Government in endorsing (and shaping) encryption standards. How do we know that? History and common sense.

Fact: There is only so much work ANYONE is willing to do for FREE. Yes, religion, including Open Source religion, does inspire some people to work for free (at least a little). Sometimes even incredibly smart people (Linus is a Saint). But EVEN THEY are either (a) independently wealthy or (b) actually using their time to make money. There really are no other choices. So, where does more work get done, in the religious community or the commercial community? The Commercial Community. How do we know that? History and common sense.

How the argument that Open Source encryption is BETTER protection for your data survives is another one of the great FAKE NEWS stories of our time. Both IBM and the Government have a long documented history of LEADING in this area with SECRET SOLUTIONS. Leading by a lot. Because they pay for it. How do we know? History.

Free/Open Source Encryption is Weaker than EVERY Commercial Alternative, by public demonstration, and, think about it, common sense. Is it really IMPOSSIBLE to pay for a better scheme? Of course not. They’re all better (provably), or they wouldn’t sell (billions of $).

Anonymous Coward says:

Re: Re: Re:7 Why do people believe that AES is secure?

Here is the advice of a widely known Simpleton in the field (me). Be suspicious of constants that are baked into your encryption algorithm. Think S-Box. Be more suspicious of constants that are repeatedly stamped on intermediate results. Be even more suspicious when the stamping increases as the key size increases, it looks bad. Repeatedly applying fixed constants is the definition of how to weaken a cipher. You want the key to configure as many parameters as possible, and avoid the use of fixed constants. Cryptology 101.

Anonymous Coward says:

Re: Re: Re:7 Why do people believe that AES is secure?

The thing that really eludes me is how this false narrative of Open Source Encryption being MORE secure was ever accepted

There is a very good reason why the cryptology community has gone for standard, open source encryption: there is a long history of individuals and small groups designing encryption systems that were all too easily broken.

As the old adage goes, it is easy to design an encryption system that you cannot break, but much harder to design a system that others cannot break.

The Wanderer (profile) says:

Re: Re: Re:6 Why do people believe that AES is secure?

My point was that "open source" encryption systems (especially widely used ones) can be broken once, and then there is an automated way to uncover the messages of everyone who uses them. Open Source AES, for example, one break, and everyone is compromised. Closed source encryption systems, especially UNIQUE closed source systems (closed to the attacker, not the user) do not suffer this vulnerability.

Eh? That doesn’t make sense.

Whether the system is open or closed, once a way to break in through it has been found, anyone using the now-broken system is vulnerable.

Assuming the fact of the vulnerability isn’t disclosed somehow, the odds of its being found by the people who have ability and access to fix it would presumably correspond roughly to the number of such people who exist – which probably would mean that the edge would go to the open system.

Once the vulnerability is known, the odds of a fix actually being created depend on how many people with the ability and access to fix it actually care to do so. There are different factors affecting that in open and closed contexts, so this one could be argued case-by-case, and may be a wash.

But once a fix has been created, it has to be gotten out to the users.

With open software, the users can (generally speaking) get the fix for free, the same way they (generally speaking) got the original software. That means there’s little obstacle to their getting it.

With closed software, the users may very well need to pay to get the fix – especially if "being paid" is one of the reasons the providers of the closed software bothered to create a fix in the first place. That means there is an obstacle in the way, which makes users less likely to actually get the fixed version.

Even if the providers of the closed software make the fix available for free to anyone who already has the unfixed software (including people who pirated it?), there may still be other obstacles; consider the number of people who turn off Windows Update because they don’t trust Microsoft not to break things they like, much less the number of organizations which turn it off because they know updating will break things. The same consideration does apply with open software to some extent, but IMO less so, since in the worst case the users can still avoid any undesired changes by forking.

Imagine, for example, an automated encryption service that produces a private encryption system. Pay some small fee, and bango, you get a UNIQUE encryption JUST FOR YOU. This is not hard, and can easily be layered ON TOP OF (not INSTEAD OF) an existing system, like AES. So, you get all the benefits of the public review, but none of the weakness of a system used by ANYONE else.

This does not necessarily hold. Although I do not fully understand the details or recall my source for this just offhand, I am given to understand that in some cases, adding additional mathematical manipulation to the math which constitutes a given form of encryption can actually make it easier to reverse the process and extract the original cleartext from the ciphertext.

(Using the same data twice in the process is one thing which can have this result; for example, while using the cleartext itself as the seed for your RNG to produce an encryption key might seem like a good idea, it means that the number which the cleartext represents has been used twice in producing the ciphertext, and that in turn may make the net mathematical transformation less complex.)

Anonymous Coward says:

Re: Re: Re:7 Why do people believe that AES is secure?

“Security by obscurity is not very good security at all, it might stop pimple faced kids in mommies basement but it will not stop knowledgeable and motivated personnel.” (from another post in this article)

Obscurity being “a thing that is unclear or difficult to understand”.

So, translating the statement: making your encoded message difficult to understand is a bad thing. This could only be promoted and accepted by the Open Source community. I think even a casual observer would see this presumption as obviously ridiculous. The comment about “pimple faced kids” just makes it more ridiculous.

Not that you said this, I am just venting about the logic often (and publicly) displayed by the open source community about encryption.

You can think of every component of your encryption machine being an attack surface. The more you expose, the more opportunity you give the attacker. EVERY nuance of your encryption machine is exposed in open source, you have the largest possible attack surface. This is precisely why people serious about encryption do not expose the details of their encryption machines, and do not publish their source. By serious, I mean highly paid professionals, not open source saints like Linus (and he is a legitimate saint).

I can agree with you about the economy of open source. It is, after all, free, pretty economical. I could also agree that someone without cryptographic training would probably produce a solution weaker than AES. So, I could agree that open source solutions are better than most amateur solutions. But assuming that the choice is either public open source or amateur hour is a false choice.

“You get what you pay for” is usually more true than false. Obscuring your encryption technique, by hiding it in closed source, is more secure, not less secure. You might have to pay to analyze it and verify it, but money will work. This is demonstrated publicly and repeatedly. Just ask the pros, not the priests.

The Wanderer (profile) says:

Re: Re: Re:8 Why do people believe that AES is secure?

"Security by obscurity is not very good security at all, it might stop pimple faced kids in mommies basement but it will not stop knowledgeable and motivated personnel." (from another post in this article)

Obscurity being "a thing that is unclear or difficult to understand".

No – in this context, "obscurity" means "being little-known". I.e., if your security relies on not many people knowing about you, you’re not really very secure.

It’s the difference between "everyone knows there’s a combination lock here, but not many people know the combination, and it’s hard to figure out" and "the combination to this lock is easy to figure out, but not very many people know that this combination lock exists in the first place". The latter is "security by obscurity"; the former is not.

In simple analogy, an encryption algorithm is like a lock, and an encryption key is like the combination to that lock. Keeping the combination secret is not security by obscurity; keeping the algorithm secret is.

Both can increase security, technically (just as having a hidden combination lock with a hard-to-figure-out combination is technically more secure than a non-hidden lock with the same combination) – but keeping the algorithm secret is short-term security at best (just as the hidden combination lock will eventually be discovered), and because of all the ways a privately-devised encryption algorithm could have unknown weaknesses, is more likely to reduce net security (vs. using a known and well-studied one) than increase it.

You can think of every component of your encryption machine being an attack surface. The more you expose, the more opportunity you give the attacker.

That depends on what you mean by "expose".

If you mean "put in a place which is accessible to be attacked", then sure; that’s true of any software. However, if there’s a hole somewhere else in the software, you may unexpectedly find that an interface which you thought was internal-only may suddenly be reachable by an external attacker – and is therefore exposed, for this purpose.

If you mean "make known to the attacker", then no – because you cannot guarantee that the attacker will never know a given detail; even in the absolute best-case scenario, much less a real-world plausible scenario, binary disassembly and decompilation are things which exist.

Anonymous Coward says:

Re: Re:

Perhaps it’s the case that there could exist an algorithm that can efficiently solve a math problem but that algorithm itself is difficult to find. Once that algorithm is found, however, then solving related math problems becomes easy. Maybe there is a way to efficiently factor the product of two large prime numbers, for instance; we just don’t know how to do it yet.

But the chances that the government knows how to do it but the public doesn’t are pretty low.

One of the things about cryptography is that no encryption algorithm should be created and used in house without public scrutiny. All algorithms should go through a long period of public scrutiny before being approved for use. Standard algorithms, not non-standard in-house algorithms, are considered safer exactly because they went through a much more thorough testing process that involves a whole lot more very intelligent people before they got approved. It’s why the government, IIRC, now uses encryption standards as opposed to stuff that they made in house. Exactly because their in-house ciphers later turn out to be garbage.

Given the fact that the public can much more thoroughly scrutinize a cipher than the small group of people working for the government (and, remember, it’s not like the government is composed of the most intelligent, meritorious people. They’re the government, a classic example of lazy people that take your money and don’t have the merit to make their own money by actually working. It’s the private sector of individuals that are much more intelligent), all it takes is for one person to find a flaw in the cipher, publicly present it, and everyone will know its weakness. Then new ciphers will be worked on. And that’s exactly how cryptography advances. Older ciphers become obsolete and get replaced by newer, better ciphers that don’t have the same weaknesses as the older ones. One day AES may also get replaced as weaknesses are found but, in the meantime, it’s unlikely that there is a secret esoteric weakness that only our dumb government knows about but the many very smart people that scrutinize these ciphers can’t yet figure out.

Anonymous Coward says:

Re: Re: Re:

Personally, I don’t think it wise to depend upon the poor intelligence of the government. What they lack in intelligence they more than make up for in resources (that they got from us). One of my mentors as a young man was Dr. Walter Tuchman of IBM fame, and although quite brilliant himself, he had a LOT of respect for the government based on first hand experience. I would never discount their abilities, and it is usually healthy to question their motives. Abide by the law, no doubt, but question their motives and not their ability. They’re smarter than they look. That’s part of the “intelligence” show in that sector.

Anonymous Coward says:

Re: Re: Re: Re:

Mathematics, which is what we are talking about when it comes to cryptology, is not dependent on resources other than human minds that are dedicated to understanding it. Also, it is not a field where you can demand results, but rather one where you have to allow the practitioners to follow their own hunches.

For computational problems where a loosely coupled system is useful, anybody who can build a community of supporters can gain use of more computing power than the largest supercomputer. In the Internet age, inspirational leadership and a willingness to work in a very open fashion are the key to obtaining massive computational and even human resources.

Anonymous Coward says:

Re: Re: Re: Re:

“What they lack in intelligence they more than make up for in resources (that they got from us).”

If you are referring to something like Stuxnet it should be noted that there is a huge difference between being able to find a specific software vulnerability and exploit it to install a worm and being able to crack what underlies a huge percentage of all computer security.

That specific vulnerabilities exist here and there is no surprise. Big deal. There isn’t a huge widespread effort to crack every little vulnerability that every computer system may have and expose it and computer systems are constantly evolving as software changes and so new vulnerabilities are always being introduced.

On the other hand being able to crack AES or RSA would be huge as it would render so many of our security systems vulnerable. So there is a much larger widespread effort trying to crack these and to ensure that they are secure.

Anonymous Coward says:

Re: Re: Re:2 Re:

Right, I agree with that. For example, you remember Turing and Enigma, right? My point is that for the same reason Turing did not reveal his findings to the Germans, the government (or pretty much anyone else outside academia) WOULD NOT reveal their cracking of (pretty much) any public encryption standard. Why would they? They use private (secret) encryption for their valuable stuff anyway.

Anonymous Coward says:

Re: Re: Re:3 Re:

“They use private (secret) encryption for their valuable stuff anyway.”

As stated this is very discouraged. Private, secret, encryption ciphers have a high probability of being weak because they didn’t undergo the test of time and the test of widespread public scrutiny. If they want to use their own private encryption ciphers they can do so at their own risk but I would much rather stick with something true and tried. Trust me, the government would be wise to do the same and they very well know it. They learned from their mistake when it came to DES which is why now they adopt AES, a true and tried cipher.

Will weaknesses later appear? Possibly. But because it’s a widespread standard chances are when a weakness does appear someone will notice it and publicize it and it will be widely known that it’s time to upgrade to something new. The fact that there are many eyeballs scrutinizing it is what makes it ideal.

Anonymous Coward says:

Re: Re: Re:4 Re:

I thought that DES was intentionally weakened by the government, right? They recommended a 56 bit key when a 64 bit key (stronger) was much more natural. Also, the fact that they didn’t actually use it for THEIR most sensitive data (they used a private system) I think was also telling. My point is that they insisted on the weakness. Why? Lastly, who exactly is “discouraging” secret ciphers? The point is to keep secrets, right, not let them out? Why not just go ahead and use AES, then obscure it? You get all the benefits and avoid the problem that EVERYONE else will have when it’s cracked. Which is a “when” not “if” question, right?
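
For scale (a point of arithmetic not spelled out above): shortening a key from 64 bits to 56 bits shrinks the brute-force keyspace by a factor of 2^(64-56) = 2^8 = 256, since each bit removed from the key halves the number of possible keys an attacker has to try.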

Anonymous Coward says:

Remember the movie “The Imitation Game” about Alan Turing? Very interesting from many angles. Did he tell the world that he cracked the Germans’ code at the time he did it? Of course not. I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government. Does anyone else find that incredible?

Lawrence D’Oliveiro says:

Re: I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

No need to trust the US Government, or any other Government. Just look at what the encryption experts in the open-research community themselves are using: they recommend AES-128.

Anonymous Coward says:

Re: Re: I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

Right (sarcasm). They recommend disclosing a LOT of VERY USEFUL information to ANY ATTACKER, and then say TRUST ME IT’S REALLY UNBREAKABLE even if you HELP ME BREAK IT. If, for example, you made a simple modification of the symbol size (say 4 bits or 16 bits instead of 8 bits), you could not argue it was less secure; it could only be more secure. In fact, the more DIFFERENT your scheme is from the PUBLICLY KNOWN scheme (as long as the mods are mathematically sound), the more secure it is by definition, right?

Anonymous Coward says:

Re: Re: Re: I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

Actually, sorry for the sarcastic tone, that’s a bad habit of mine. My point is that people that don’t use AES will NEVER tell you they are not using AES. If there IS a back door to AES, they will NEVER tell you there is. My secondary point is that using the “open-research” community, which conjures up images of well meaning professors with academic interests and funny glasses on the end of their nose, to validate and certify the best funded, most aggressive and successful espionage services in the world (all of them) seems risky, at least.

ThaumaTechnician (profile) says:

Re: Re: Re:2 I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

No need to apologize for your sarcastic tone, I was reading your comments and adding a lot of sarcastic comments of my own.

If you knew enough about cryptography to be able to make comments worth paying attention to, you wouldn’t be making the comments that you are making.

Anonymous Coward says:

Re: Re: Re:3 I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

Well, you’ve done a pretty good job of hiding your meaning, which is…what? I’m an idiot? I would just say, compared to who? You? Einstein? Turing? Do you actually have anything to say? Spit it out.

OldMugwump (profile) says:

Re: Re: Re:4 I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

A pretty classic crypto mistake is to invent one’s own algorithm, thinking that by using a secret algorithm instead of a published one, you’re more secure.

Unless you’re a world-class crypto expert (and maybe even then), you can’t possibly come up with a scheme that’s more secure than one that has been vetted by dozens of true crypto experts (many of whom do not work for your adversary).

There are lots of techniques for cracking crypto, which, unless you’re an expert, you’ve never heard of.

Anonymous Coward says:

Re: Re: Re:5 I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

Yes, I understand your points. My point is that when the government (or anyone else) says “trust me”, I immediately become suspicious. Their certification MUST be motivated by self interest, right? Or would you propose that they REALLY want people to hide information? That seems unlikely to me. I am not saying “invent your own algorithm”. I am saying “even small changes may be big improvements”, if indeed AES (or other schemes) already have been secretly broken.

ThaumaTechnician (profile) says:

Re: Re: Re:6 I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

“My point is that when the government (or anyone else) says ‘trust me’, I immediately become suspicious”

There’s no reason to have to ‘trust’ anyone. The AES encryption specification is completely and fully public, it was chosen among many in an open competition some years ago whose goal was to choose the next encryption standard.

There’s nothing stopping you from learning how AES works – there are tons of resources on the ‘Net, nothing stopping you from learning the math, from learning how 8-, 16-, 32-, and 64-bit CPUs generally work and how they differ, and why this was one of the reasons AES was chosen.

And there’s nothing stopping you from learning and (hopefully) understanding why pretty much all cryptographers (even those who are criminals and anarchists) think that AES is a good scheme. Cryptographers have been pounding on, digging into, scratching, and trying to break AES for years.

Not using AES because people who wear suits and ties use it is a really poor basis for decision-making about, well, anything.

Anonymous Coward says:

Re: Re: Re:7 I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

I did study a little about how AES works. I also studied “NSA Suite A Cryptography”. NSA Suite A Cryptography is NSA cryptography which “contains classified algorithms that will not be released.” “Suite A will be used for the protection of some categories of especially sensitive information (a small percentage of the overall national security-related information assurance market).”

Seegras (profile) says:

Re: Re: I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

That very same government defined AES to protect itself from adversaries. Because not only the spooks need encryption, but also every other government body, the military, hospitals, police, power grids, power stations, and so on.

Do you really think they could secure all “their” infrastructure with something “more secure” without the whole world knowing?

Here’s how it’s done:
https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle
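
In code terms, Kerckhoffs’s principle boils down to this: everything about the system can be public except the key. A minimal sketch, assuming Python’s third-party "cryptography" package (my choice of illustration, not anything from the linked article):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()           # the only secret; the algorithm itself is public
    token = Fernet(key).encrypt(b"the plans")
    assert Fernet(key).decrypt(token) == b"the plans"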

Anonymous Coward says:

Re: Re: Re: I just find it an incredible argument to go to your adversary (the government) to define the encryption scheme (AES) to protect yourself from that same government.

Then what is this? NSA Suite A Cryptography is NSA cryptography which “contains classified algorithms that will not be released.” “Suite A will be used for the protection of some categories of especially sensitive information (a small percentage of the overall national security-related information assurance market).”

Anonymous Coward says:

Re: Re:

For sure, a show, and the nature of the show is to not show the show. Welcome.

For what it’s worth, here is my vision of a secure world:

Pretty much every processor now has a SIMD unit, even tiny little processors on cheap phones and such.

These SIMD units can encrypt and protect data INSIDE the CPU (before it travels anywhere) and only write ENCRYPTED DATA and ECC to memory. Then, this encrypted and protected data chunk can travel wherever it likes. It can be used, abused, corrupted, whatever. However, in the future, when you need it again, you retrieve whatever you get, decrypt it, validate it, and use it, knowing it is correct data with a verifiable measure of certainty.

Encryption for everyone, everywhere, all the time, for almost no cost. Well programmed, these SIMD units, inside the CPU, burn almost no resources, because they are so inherently parallel and optimized to do just this.

A protected world.

Amen. 🙂
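
In software terms, the “encrypt before it travels, verify when you read it back” flow described above corresponds roughly to authenticated encryption applied before data is written out. A minimal sketch, assuming Python’s third-party "cryptography" package rather than the SIMD/in-CPU implementation the comment envisions:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # stays inside the trusted boundary
    aead = AESGCM(key)

    def protect(record: bytes) -> bytes:
        # encrypt and authenticate before the data travels anywhere
        nonce = os.urandom(12)
        return nonce + aead.encrypt(nonce, record, None)

    def recover(blob: bytes) -> bytes:
        # decrypt and verify; corruption or tampering raises InvalidTag
        nonce, ciphertext = blob[:12], blob[12:]
        return aead.decrypt(nonce, ciphertext, None)

    stored = protect(b"sensitive record")       # safe to park in RAM, on disk, or on the wire
    assert recover(stored) == b"sensitive record"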

Anonymous Coward says:

Re: Re: Re:

Michael Masnick, would you consider offering some free advice to show the generous side of your nature? It’s not convenient yet for me to have lunch with you, but perhaps you could just give me a small part of your opinion regarding the following question.

Say, for example, that GWiz and I whipped up a kernel driver for Linux that essentially encrypted and protected both the DRAM memory system and the external storage, all the time, with no reasonable performance impact. That is, you would gain the benefits of ECC memory and Erasure Coded RAID using standard memory and standard storage on everything from cell phones to servers.

The question is: Do you think there is some type of hybrid Open Source + Pay for Something mode that could work in this market segment? For example, offering weaker encryption or protection for free systems, and stronger encryption and protection for pay for systems? Or something like that?

I really am interested in your opinion, and could well consider lunch with you in the future.

Anonymous Coward says:

Re: Re: Re: Re:

And to celebrate the fact that GWiz is not an “Insider” (and also generally well liked), and with respect to the group that does represent itself with a badge declaring they are a “TechDirt Insider”, I propose the name “Insider” for this product. It actually runs Inside the CPU, which is a key component of the protection it offers. It keeps information secret, as all Insiders do (I mean, by the definition of the word). And you can use it in everyday life easily – “Does this system have an Insider?” and everyone will know what you mean. I like it. I’m thinking both Linux and Windows versions (already written), that’s a huge market. Thanks to you guys, I think I have a great name. What do you think?

Anonymous Coward says:

Re: Re: Re:3 Why do you need one?

Choosing a key is a separate activity (right?) from encoding or decoding. Key selection is not what I am describing. I am describing the permutation (encryption/protection) and recovery of data, even partial data, at very high speed and with a known level of correctness. If you want to use a random number generator, that’s ok, but “real” random number generators are REALLY hard to come by.

Anonymous Coward says:

Re: Re: Re:4 Why do you need one?

And since you seem like a smart guy, maybe you could answer a question of mine. Every encryption scheme needs to map a number (in the end) to another number. For example, imagine a list of naturally ordered numbers from 0 to 2^128 – 1. For each number, there is a counterpart “encrypted” number. That is, there is EXACTLY ONE counterpart between the “plain text” and the “encrypted text” for a fixed field size, and you can think of encryption as moving from one ordering scheme to another. I think what that means is that if you start with the naturally ordered list (on the left), and move to the same position in the encrypted list (on the right), you cannot end up in the same place, right? Otherwise there would be no encryption. So, if you move back from the encrypted space (on the right) to the natural order space located at the encrypted value, you also cannot end up in the same place, right? Does this provide a reliable mechanism for a “pseudo” random number generator? That is, to basically “encrypt” the key, and use that in the place of a random number generator. Or does that technique carry some inherent weakness?

Anonymous Coward says:

Re: Re: Re:6 Why do you need one?

Here is another view of the same question – if you re-encrypt the encrypted text through the same encryption engine, I believe you will traverse the entire numeric space, visiting each value exactly once. So, if you want to disguise your key, you could follow this trail for some unknown number of entries. Wouldn’t that be provably as or more secure than any pseudo-random number generator applied to the plain-text key? And used in place of a standard, wouldn’t it only be as good or better, and never worse?

Joel says:

Re: Re: Re:7 Why do you need one?

I’m not the guy you asked, but I can tell you that you can indeed use an encryption scheme you know to be indistinguishable under a chosen-plaintext attack by a polynomially bounded attacker (IND-CPA) as a pseudorandom generator. Essentially, the IND-CPA property already tells you that it can’t be distinguished from real randomness, so it’s pretty much just as good for a pseudorandom generator.

Some people like this because it goes back to the proofs of the encryption scheme to prove the pseudorandomness. Others prefer to base their PRNGs on proofs specifically made for random number generators.

Don’t forget that you still need to seed any PRNG from a good source of initial randomness.
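
For what it’s worth, here is a minimal sketch of the construction being discussed, assuming Python’s third-party "cryptography" package: a block cipher in counter mode used as a pseudorandom generator, seeded once from the operating system’s entropy pool (the seeding requirement the comment above stresses):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    class CtrPrng:
        """Pseudorandom bytes from AES in counter mode, seeded from a real entropy source."""

        def __init__(self, seed=None):
            key = seed if seed is not None else os.urandom(32)   # the seed itself must be random
            self._keystream = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).encryptor()

        def random_bytes(self, n: int) -> bytes:
            # encrypting zeroes returns the raw CTR keystream, i.e. the pseudorandom output
            return self._keystream.update(b"\x00" * n)

    prng = CtrPrng()
    print(prng.random_bytes(16).hex())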

Anonymous Coward says:

Re: Re: Re:8 Why do you need one?

Thank you for that, you said it much better than I could have. Really random numbers are really hard to produce, in my memory of such things. And, I am a big fan of recursive definitions when it comes to managing the complexity, and proving the correctness, of systems like this. Big Fan. 🙂

AEIO_ (profile) says:

Re: Re: Re:8 Re7: Why do you need one?

“Don’t forget that you still need to seed any PRNG from a good source of initial randomness.”

Of course — exactly right! And here you go:

https://xkcd.com/221/
http://dilbert.com/strip/2001-10-25

And there’s also this gem:

Anyone who attempts to generate random numbers by deterministic means is, of course, living in a state of sin.
— John von Neumann

Anonymous Coward says:

Re: Re: Re:9 Re7: Why do you need one?

OK, I have to admit, your cartoons are good. But I am not sure I understand your point. If you have an encryption scheme that can transform the plain-text key into something indistinguishable from real randomness, why do you need another pseudo-random-number generator? Or are you saying you need “real” randomness, which is Really hard?

Anonymous Coward says:

Re: Re: Re:3 These SIMD units can encrypt and protect data INSIDE the CPU (before it travels anywhere) and only write ENCRYPTED DATA and ECC to memory.

In what way? Here’s another question, then; maybe you can help answer it. AES uses a combination of linear and non-linear transformations to achieve its result. But in the end, no matter how many “rounds” are employed, all you have really done is move a number from one numeric ordering to another numeric ordering with a 1-1 mapping. Wouldn’t it be simpler and faster to focus on this basic re-ordering, done in as few steps as possible, to achieve the result? The definition of AES (and DES) made it easy to describe and implement in hardware, but it’s a software world now. Shouldn’t we take advantage of the flexibility and obfuscation that software provides? For example, the use of different symbol sizes or primitive polynomials. Or do you think it’s always better to “trust” the US government to provide the encryption that you use to protect yourself from the US government? At a high level, that sounds kind of crazy to me.

Anonymous Coward says:

Re: Re: Re:4 These SIMD units can encrypt and protect data INSIDE the CPU (before it travels anywhere) and only write ENCRYPTED DATA and ECC to memory.

Lawrence D’Oliveiro:
“Sounds good. Do you have a trustworthy source of random numbers?”

To which you replied:
“Why do you need one?”

This is why I said you lack knowledge about how present day encryption is performed. One needs a random number to seed whatever algorithm is being used.

Anonymous Coward says:

Re: Re: Re:5 These SIMD units can encrypt and protect data INSIDE the CPU (before it travels anywhere) and only write ENCRYPTED DATA and ECC to memory.

It seems to me that using the same encryption technique that encodes the data to encode the key completely removes the requirement for “random” numbers. “Some people like this because it goes back to the proofs of the encryption scheme to prove the pseudorandomness.” Using the plain-text key as the seed gets you where you want to go, no?

Anonymous Coward says:

Re: Re: Re:6 These SIMD units can encrypt and protect data INSIDE the CPU (before it travels anywhere) and only write ENCRYPTED DATA and ECC to memory.

And in fairness, I would say that your intuition is correct. I consider myself a student of applied cryptography, not a theoretical expert. My interest is in the practical applications to the real world, like “encrypt everything all the time everywhere”. I think that would be good, overall, for people in the world. For example, what goes on in our heads is private, unless we show it. Why not make our computers operate the same way? Does this require the MOST secure encryption, or will a “simpler” scheme solve the practical problem in a way that produces a better outcome? That kind of thing.

Anonymous Coward says:

Re: But but but.....we are special

Either Signal is secure, or it has a backdoor in it. The problem is that those congress critters are arrogant enough to believe that their messages will not be collected along with everybody else’s; after all, they are not everybody but rather somebody.

Anonymous Coward says:

Open Source Security Question

One of the things I have always found hard to understand is why people believe that Open Source systems are as secure as Closed Source systems. Here is a simple example. In which case are you more secure: the case where the attacker has your source code that you use for encryption and decryption, or the case where he does not? It seems to me that at least the intuitive answer is the case where he does not, i.e., closed source. I understand that there are a lot of public testaments to how secure public and open source encryption standards are. There are a lot of talking points. But I just can’t quite see, in the end, that giving your attacker the actual source you use is good for you. Right?

Anonymous Coward says:

Re: Open Source Security Question

If what I say above is true, then I think I have the answer to the question I posed to Michael Masnick – that is, what would be the product definition and market segmentation for “The Insider – Real Time Data Encryption Engine”. One version would be for the Open Source market, GWiz and I could do it in our spare time. The closed source version would be for more money, but would also be more secure, as closed source is. Everybody wins. Free people win a little, pay for people win more. Does this fit? I think it follows the usual premise of a product spanning these two market segments, no?

Anonymous Coward says:

Re: Open Source Security Question

The history of DRM cracks gives the lie to the idea that closed source is a useful security feature.

The basic adage of encryption is to assume that the attacker knows all details of the system, and that only the key is secret. Unless you can ensure that the attacker cannot get access to a working system, and/or exfiltrate the source code, they will have details of how the system works.

In particular with respect to encryption, peer review is essential, and open source gets more peer review than closed source because the peers choose themselves. This usually means that there are people pounding on the code long before it gets widespread use, while with closed source, this pounding usually takes place after it gets into widespread use.

While, as ever, no approach is perfect, the open source approach increases the chances of flaws being found before there is widespread use. Further, when a live exploit is in use, finding the bug being exploited is the hard part, and within open source there are many more people available to go looking for it. This is part of the reason why open source reaction times to exploits are measured in hours, rather than months.

Anonymous Coward says:

Re: Re: Open Source Security Question

Again, I’m not trying to be argumentative, I’m trying to understand the argument about open source encryption. Say, for example, that I use open source encryption, but I lie about it, and say it’s not, and make it look like it’s not. For example, say that I use standard AES, but mask the result with a fixed constant, and hide that fixed constant inside my own (either purchased or developed) closed source. Since my attacker only knows that I am NOT using open source, haven’t I improved, in a guaranteed way, the encryption strength of the solution? That is, I have all the benefits of the open source solution (as you mention) but an additional tool to confound my attacker, right?
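
Concretely, the layering being described looks something like the sketch below (an illustration only, assuming Python’s third-party "cryptography" package; whether the extra mask adds anything meaningful is exactly what the rest of the thread disputes):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    SECRET_MASK = bytes.fromhex("5a5a5a5a5a5a5a5a")   # the "fixed constant" hidden in closed source

    def encrypt_with_mask(key: bytes, plaintext: bytes) -> bytes:
        # the public, well-reviewed layer: standard AES-256 in CTR mode
        nonce = os.urandom(16)
        ciphertext = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(plaintext)
        # the extra layer: XOR a fixed, secret mask over the ciphertext
        masked = bytes(b ^ SECRET_MASK[i % len(SECRET_MASK)] for i, b in enumerate(ciphertext))
        return nonce + masked

    blob = encrypt_with_mask(os.urandom(32), b"attack at dawn")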

Anonymous Coward says:

Re: Re: Re: Open Source Security Question

The security of an encryption system should rely only on the secrecy of the key (or, in the case of public key systems, the private key), the amount of time needed to break it, the physical and electronic security of the equipment, and the security of the operating system.

With modern open source encryption systems, you do not need to worry too much about the encryption, but rather much more about keeping software up to date, and managing your keys in a secure fashion. Currently the biggest threat is not a compromised encryption system, but rather a compromised operating system letting spyware in.

Anonymous Coward says:

Re: Re: Re:2 Open Source Security Question

Yes, I understand that point of view, but it is not the only point of view. For example, even though this article is a little dated, it does reinforce my point about what we know and what we don’t know about the government’s ability to crack AES:

Meanwhile, over in Building 5300, the NSA succeeded in building an even faster supercomputer. “They made a big breakthrough,” says another former senior intelligence official, who helped oversee the program. The NSA’s machine was likely similar to the unclassified Jaguar, but it was much faster out of the gate, modified specifically for cryptanalysis and targeted against one or more specific algorithms, like the AES. In other words, they were moving from the research and development phase to actually attacking extremely difficult encryption systems. The code-breaking effort was up and running.

The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

In addition to giving the NSA access to a tremendous amount of Americans’ personal data, such an advance would also open a window on a trove of foreign secrets. While today most sensitive communications use the strongest encryption, much of the older data stored by the NSA, including a great deal of what will be transferred to Bluffdale once the center is complete, is encrypted with more vulnerable ciphers. “Remember,” says the former intelligence official, “a lot of foreign government stuff we’ve never been able to break is 128 or less. Break all that and you’ll find out a lot more of what you didn’t know—stuff we’ve already stored—so there’s an enormous amount of information still in there.”

Anonymous Coward says:

Re: Re: Re:3 Open Source Security Question

The biggest protection that open source in general has against nasties being introduced into the code is that nobody knows how many copies of the repositories exist, or how they track the official development path. While the development model is no guarantee of perfectly clean code, at least the code is open to review, and as the code distribution is source code, there is no way to introduce a backdoor into every binary without putting it in the source code.
Could another key-weakening trick, like the promotion of selected elliptic curves, happen? Well, yes, of course it could, but specific suggestions like that will be viewed with more suspicion going forward. Elliptic curve cryptography is still used; it is now known that some curves make it easier to attack, but then all cryptography based on more complex maths may turn out to have such a weakness. Such attacks, however, are hard to find, and so only turn up rarely. Also, they tend to be of limited use, bringing the time to decode a message down to a level where it is useful for selected messages, but nowhere near fast enough for general surveillance.

Is open source encryption invulnerable to introduced weaknesses? No, but they will have to be subtle and hard to find, in the mathematical sense, and found by someone who will keep them secret rather than publishing for academic glory. Also, code bugs will occur, but here the open source community can usually respond with a patch to fix the issue within hours.

With a proprietary binary software model, even if you can examine the source under an NDA, there is no way to check that it is the code running on your system.

Anonymous Coward says:

Re: Re: Re:4 Open Source Security Question

Well, I understand your argument about some of the merits of open source, and I can agree that there are some merits to open source and to opening up code to third party review. From a practical perspective, there are also risks to opening the code to third parties, if the third party is your adversary, right? It just seems to me that building ON TOP of open source with a “secret” mod would improve security, right? It would cost more, either in dollars or effort (nothing is really free), but I think even you would agree it is POSSIBLE that it would produce a more secure solution to modify open source, right? My real ambition is to identify faster, more efficient techniques that provide the majority of the benefit while minimizing cost, so that everyone everywhere could enjoy the same privacy they have between their ears on their cell phones, computers, servers and such. Only show what you want to, and keep everything else private. That just sounds inherently good to me.

Anonymous Coward says:

Re: Re: Open Source Security Question

Well, access to the source code does not INCREASE the level of security, right? And it seems pretty clear that it MIGHT decrease the level of security, because you are supplying a LOT of information about the system to the attacker, right? Even a slight obfuscation of data on top of a known good system makes it stronger and not weaker, right?

Anonymous Coward says:

Re: Re: Re: Open Source Security Question

“access to the source code does not INCREASE the level of security”

Of course not, but that is a silly thing to say.

“Even a slight obfuscation of data on top of a known good system makes it stronger and not weaker, right?”

Wrong. There have been many papers written on this subject, you don’t need me telling you this.
Security by obscurity is not very good security at all, it might stop pimple faced kids in mommies basement but it will not stop knowledgeable and motivated personnel.

Anonymous Coward says:

Re: Re: Re:2 Open Source Security Question

Ok, well, with dozens of papers, then it must be easy to explain. System A has standard Open Source encryption, say, using AES. System B has closed system encryption, which is the same AES stamped with a fixed binary mask, but hidden from the attacker. The attacker doesn’t actually know what is done in System B. Tell me again why System B is easier to attack.

Anonymous Coward says:

Re: Re: Re:3 Open Source Security Question

“Tell me again why System B is easier to attack.”

It was not stated that a closed system would be “easier to attack”.

Stating that a closed system is not more “secure” than an open one does not imply that it is easier to attack. There are many vectors and tools with which code can be compromised; being able to peruse the source might be interesting, but it does not make cracking the encryption of modern systems any easier.

Anonymous Coward says:

Re: Re: Re:4 Open Source Security Question

Well, I do seem to remember reading a paper about source code analysis that was then tied to a side-channel attack consisting of measuring certain characteristics of the CPU (maybe cache contents or something like that, hard to remember). Without the source code provided to the attacker, the exploit would not have been possible. I think there are whole families of attacks based on source code analysis, no? Maybe I’m wrong, I can try to look it up if that helps.

Anonymous Coward says:

Re: Re: Re:5 Open Source Security Question

So now you have moved from discussion of encryption techniques and their vulnerabilities to source code and its vulnerabilities … because you think that compromising the platform will magically unlock the encryption? This approach may have limited success when deployed against simple systems; however, it will fail miserably when encountering more advanced systems.

Anonymous Coward says:

Re: Re: Re:10 Open Source Security Question

Maybe next time just say this: “I don’t like what you say and/or I don’t like you, but I am without the vocabulary and/or reasoning ability to express why. I just don’t like you. So there.”

I think this speaks more directly to what you meant, right?

Better to be direct when speaking with others in public and in writing, it saves everyone time.

🙂

Anonymous Coward says:

Re: Re: Re:2 Well, access to the source code does not INCREASE the level of security, right?

Call me a cynic, but I think everyone operates in their own self interest. The reason “experts recommend” open-source is that they benefit from it in some way. It also might be that it is the only thing they CAN recommend, because they cannot (by definition) review closed source.

Anonymous Coward says:

Re: Re: Re:3 Well, access to the source code does not INCREASE the level of security, right?

You sound very confused … or you might simply be promoting bullshit.

Closed source can contain exploits, both intentional and unintentional, in addition to the much-coveted backdoors – all without your knowledge. With open source, on the other hand, there is a community with various goals that routinely reviews the source code and makes updates, so the exploits and backdoors would soon be found and eliminated … or so the story goes. There have been attempts, some successful, to circumvent this, but they were soon found and stopped – hopefully. Anyway, closed systems do not have this feature, and in fact some profit-driven closed source systems have been found to contain intentional exploits, and those were not put there for your benefit.

Not sure how your so-called experts benefit from open source over closed; perhaps you could expound.

Anonymous Coward says:

Re: Re: Re:4 Well, access to the source code does not INCREASE the level of security, right?

Well, to begin with, closed source does not mean it must be closed to the actual end-user; maybe he gets the source. Maybe he is the only one who gets it. Regarding benefits, there are many things people crave – notoriety, money, group membership, who knows? In any case, in my experience, people tend to do more of what they see as good for them, and less of what they see as not good. Open source does not mean free for commercial applications, so it could well be for money, right? Notoriety can lead to teaching or speaking fees – also money. In some circles, open source approximates a religion, where open source is the right answer no matter what the question is. My overall point is that when it comes to YOUR OWN security, skepticism is usually healthy, blind faith in others less so. Especially when it comes to the government or fanatical “believers”.

Anonymous Coward says:

Re: Re: Re:5 Well, access to the source code does not INCREASE the level of security, right?

Religion – lol. This discussion was already way off track.

Like I said previously, there have been many papers on this topic already – I doubt I can add anything to them as it has been beaten to death already.

My only advice is to keep an open mind.
Many who offer software for your benefit do not actually give a shit about you, your well-being, or even whether said software provides a valuable service – all they care about is ripping you off, and that has become far too easy.

orbitalinsertion (profile) says:

Re: Open Source Security Question

Security through obscurity simply does not work. Much has been written on this. And given the nature of the testing software that has been available for ages, I would imagine attackers would be trying to break the code functionally rather than trying to run the source in their heads.

If you want to toss something completely new into the market, though, open source doesn’t make it magically more secure out of the gate, any more than big money closed source development does. Many eyes, especially the more qualified ones, over time, is what helps secure your source. Which also goes for your algorithms / novel theory.

Then you (or rather vendors using your system) have to make sure they don’t bork it in their implementation of your implementation. Which was the weak spot several times with quantum crypto tools.

If this is all happening ultralocally inside a processor or device, it is less likely to be cracked until the attacker has possession. And you had been mentioning governments…

Many eyes, good eyes, over time. That is the security point of open source. It is only theoretical unless that happens, though. But a truly secure system should be secure regardless of who has the source. With closed systems, you don’t know how well it was done in the first place, you certainly don’t know that many people checked it, and you don’t know who may have gotten hold of the source – and since closed source counts on staying closed for its security, that is a huge weakness. Whether the source is closed or not should be negligible for security purposes; it certainly should not be counted on as a security factor. (And some seem to depend on that as the main bit of security, sadly.)

Anonymous Coward says:

Re: Re: Open Source Security Question

Good arguments. I mention governments only because they are likely the most sophisticated attackers, and are also the promoters of the standard. I question their motives. Perhaps I would question the motives of a private provider, too. I’m not sure. What I (personally) would really like to see is “The Insider”, small enough and fast enough to encrypt everything, all the time, inside the CPU, while simultaneously protecting the data from device failure (either bits or blocks). That is, a combined error-correction and encryption feature that would add both privacy and protection from failure in a single computational step, and be easy to use on every electronic device everywhere. No more hardware ECC. No more depending on others (network adapters, storage adapters, storage systems) to verify data is correct. Instead, decrypt and verify it yourself, right in your application, or at least very close to your application in a driver. I think this could change the computing world, and remove the requirement for complex communication adapters and storage devices. Everything would be simpler, more secure, cheaper, and easier to verify.

Anonymous Coward says:

Re: Re: Re: Open Source Security Question

"I mention governments only because they are likely the most sophisticated attackers"

Indeed, they are so sophisticated that they have overplayed their hand and convinced everybody to use encryption for ordinary tasks, like reading the news. Also, they are so sophisticated that all the data they collect only provides them with the information to work out what happened after an attack has taken place.

So tell me again how sophisticated governments are, when they are trying to use the law to get private companies to invent and install backdoors for them.

Anonymous Coward says:

Re: Re: Re:3 Open Source Security Question

Using several encryption systems is a good idea, as it means that all your eggs are not in one basket. Remember that nobody is claiming AES will never be cracked, just that with current knowledge it cannot be cracked easily.

Also remember that an arrogant person can believe that they, or the organization they lead, know everything of relevance for assessing the strength of a cryptographic system. A sophisticated person knows that there are huge gaps in their knowledge, and that the way to deal with that is to open the encryption system to examination by anybody who cares to look at it.

Also, classified does not equal kept secret from those you most wish to keep it secret from, as spies exist. This is the second reason why obscurity does not do anything for security, other than instill a false sense of confidence.
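[For what it’s worth, a minimal sketch of the “several encryption systems” idea: two independently keyed stream ciphers layered, so a break of one still leaves the other intact. The cryptography package and the key/nonce choices are assumptions for illustration; a real cascade would also need authentication.]

```python
# Toy cascade sketch: AES-CTR wrapped in ChaCha20, with independent keys.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY_AES = os.urandom(32)
KEY_CHACHA = os.urandom(32)

def cascade_encrypt(plaintext: bytes) -> bytes:
    n1, n2 = os.urandom(16), os.urandom(16)   # fresh nonces per message
    inner = Cipher(algorithms.AES(KEY_AES), modes.CTR(n1)).encryptor()
    ct = inner.update(plaintext) + inner.finalize()
    outer = Cipher(algorithms.ChaCha20(KEY_CHACHA, n2), mode=None).encryptor()
    return n1 + n2 + outer.update(ct) + outer.finalize()

def cascade_decrypt(blob: bytes) -> bytes:
    n1, n2, ct = blob[:16], blob[16:32], blob[32:]
    outer = Cipher(algorithms.ChaCha20(KEY_CHACHA, n2), mode=None).decryptor()
    mid = outer.update(ct) + outer.finalize()
    inner = Cipher(algorithms.AES(KEY_AES), modes.CTR(n1)).decryptor()
    return inner.update(mid) + inner.finalize()
```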

Anonymous Coward says:

Re: Re: Re:4 Open Source Security Question

Nicely put. I agree with most of what you said. Here is a simple example I wonder if you could comment on. Say you have secret information on your cell phone, perhaps about money you hide from your wife and give to your girlfriends. And you really NEVER want your wife to find out. And your wife works at the NSA. Would YOU use naked AES, or would you use something that was not publicly known? Second question: How do you know your wife does NOT work at the NSA? They’re a secretive bunch. 🙂

Anonymous Coward says:

A question for you encryption theoreticians

Here is something I have wondered about; maybe one of you experts out there could help me. If you consider using a simple linear technique to encrypt data, like a traditional encoding matrix, I understand that with a few samples, someone can derive the encoding matrix and “break” your encryption using pretty standard linear algebra. But the assumption here is that the attacker knows the underlying Galois field properties employed. If the Galois field properties were hidden (symbol size, primitive polynomial, encoding block size, that kind of thing), then linear algebra cannot be employed, and the simple linear code is no longer simple with respect to the attacker, right? It looks non-linear, unless the underlying field properties are revealed, right?
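[To make the “hidden field properties” idea concrete, here is a toy Python sketch: the same fixed linear combination of data symbols yields a different parity symbol depending on which primitive polynomial defines GF(2^8). This only illustrates what the hidden knob changes; whether hiding it adds real security is what the rest of the thread argues about.]

```python
# Toy GF(2^8) arithmetic with a configurable primitive polynomial.
def gf_mul(a: int, b: int, prim_poly: int) -> int:
    # Carry-less multiply, reduced modulo prim_poly (mask includes the x^8 bit).
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= prim_poly
    return result

def parity(block: bytes, coeffs: bytes, prim_poly: int) -> int:
    # One parity symbol: a fixed linear combination of the data symbols.
    p = 0
    for c, d in zip(coeffs, block):
        p ^= gf_mul(c, d, prim_poly)
    return p

data = b"the same data"
coeffs = bytes(range(1, len(data) + 1))  # arbitrary fixed coefficients
print(hex(parity(data, coeffs, 0x11B)))  # AES field: x^8 + x^4 + x^3 + x + 1
print(hex(parity(data, coeffs, 0x11D)))  # another field: x^8 + x^4 + x^3 + x^2 + 1
# The two parity values generally differ, even though the "code" is the same.
```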

Anonymous Coward says:

Re: A question for you encryption theoreticians

The reason I ask is I imagine a different kind of computer architecture altogether. Right now, when data moves out of the CPU, it must pass through a LOT of translations. For example, PCIe data is protected by one form of CRC (if I remember correctly) on the way to the network adapter. The network adapter may use CRC or ECC (or something else) to protect it over the network wire. Then the storage adapter uses something else to protect it over the storage wire. Then the storage controller uses something else to protect it on the media. Then the whole thing is reversed, and you can only hope that nothing went wrong on the way back. Replace this with YOUR OWN error-correcting code, right in the CPU. Now, no matter who abused your data, or how much, you can both detect it and correct it, inside the CPU, right next to your application. This would allow things like “overclocking” everything, since you can now both detect and correct errors everywhere. It removes the requirement for protection everywhere else while allowing the user to choose his own level of protection and encryption appropriate for his application. It is a fundamentally different computing architecture (IMHO) and leverages current technology in the CPU to protect real application data while simultaneously simplifying everything else. The Chinese could manufacture disk drives, for example, because they don’t need a complex ECC design in the drive. They don’t need a complex manufacturing process to verify the media. Close (in terms of correctness) becomes good enough, since errors can be both detected and corrected by the application, not a whole chain of unknown devices. And encryption is free.

Anonymous Coward says:

Re: Re: A question for you encryption theoreticians

And just to finish my thought: If you accept that an encoding matrix can be used as a suitable encryption device, by hiding field properties, then there is no distinction between an error-correcting code and encryption. Maybe my premise is true and maybe not, but if it is true, it means that when you protect the data by producing ECC, you have simultaneously encrypted the data, using the same cycles to complete 2 tasks. Finally, this encrypted “tag” (the ECC symbol set) is the perfect de-duplication tag; it is actually even better than a hash. Now you have used the same cycles to perform 3 tasks, and it becomes computationally possible to de-duplicate encrypted main memory in modern CPUs.

Same cycles, 3 tasks that all contribute to the protection, encryption and compression of the data.

That’s the basic idea – what do you think?
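[A toy sketch of the de-duplication piece of that idea, with a trivial XOR fold standing in for the ECC symbol set described above (a real design would use something like Reed-Solomon parity over GF(2^8)). Note that a short linear tag collides far more easily than a cryptographic hash, so here it only nominates candidate duplicates and a byte-for-byte compare confirms.]

```python
# Toy de-duplication keyed on a short deterministic "tag" per block.
from typing import Dict, List

def toy_tag(block: bytes, width: int = 8) -> bytes:
    # XOR-fold the block into `width` bytes: a stand-in for real ECC parity.
    acc = bytearray(width)
    for i, b in enumerate(block):
        acc[i % width] ^= b
    return bytes(acc)

def dedup(blocks: List[bytes]) -> List[bytes]:
    store: Dict[bytes, List[bytes]] = {}   # tag -> distinct blocks seen
    unique = []
    for blk in blocks:
        candidates = store.setdefault(toy_tag(blk), [])
        if blk not in candidates:          # confirm with a full compare
            candidates.append(blk)
            unique.append(blk)
    return unique

print(len(dedup([b"aaaa", b"bbbb", b"aaaa"])))  # -> 2 unique blocks
```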

Anonymous Coward says:

Re: Re: Re: A question for you encryption theoreticians

OK, one more detail to try to draw out you anonymous but brilliant cowards for comments: This is an architecture, not a software solution. The way to think about it is that with software, you can trade latency to the DRAM in return for de-duplication, encryption, and protection from memory failures. Or, you can do exactly the same thing in hardware, and eliminate that latency. Mathematically, you are doing exactly the same thing, but in one case you are exploiting the SIMD processor and in the other you are exploiting the fundamentally parallel nature of encoding matrices implemented with gates. Either way, you could (for example) implement DRAM systems for virtual computing environments that really could take advantage of the duplicate copies of things. That could be a big multiplier. Simultaneously, you could use non-ECC DRAM, because the ECC produced by this architecture is stronger. Correctly configured, you could hot-service the DRAM without rebooting the system. Finally, you could measure with certainty the reliability of every storage and network connection by checking data as it arrived. Strong, configurable, application-level ECC belongs either in or near the processor, but today it is implemented (literally) everywhere but.

Anonymous Coward says:

Re: Re: Re:2 A question for you encryption theoreticians

Here is another view of the same thing. Today, the strong error correction codes are as far as possible from the application. I don’t think anyone would dispute that near the storage media itself is where the majority of error detection and correction take place. Then, data is passed hand to hand (cable to cable) until it gets back to the application with a totally unknown level of protection.

My suggestion is that the opposite needs to happen. If the application has the ECC encoder and decoder, nothing else actually needs one. The result is simpler, more configurable, more measurable, and, as open source, more verifiable (hard drive and flash vendors guard their ECC techniques). All good, right?
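[A minimal sketch of what application-owned, end-to-end checking could look like: the sender attaches its own check value and the receiver verifies it, no matter what every adapter and cable in between did. CRC32 here only detects corruption; the correction the comment is after would need a real ECC (Reed-Solomon or similar) in its place.]

```python
# Toy end-to-end integrity check owned by the application itself.
import struct
import zlib

def wrap(payload: bytes) -> bytes:
    # Append a 4-byte CRC32 computed by the application.
    return payload + struct.pack(">I", zlib.crc32(payload) & 0xFFFFFFFF)

def unwrap(frame: bytes) -> bytes:
    payload, (crc,) = frame[:-4], struct.unpack(">I", frame[-4:])
    if zlib.crc32(payload) & 0xFFFFFFFF != crc:
        raise ValueError("end-to-end check failed: data was altered in transit")
    return payload

frame = wrap(b"application data")
assert unwrap(frame) == b"application data"
```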

Anonymous Coward says:

NSA Recommends against adopting AES (Suite B)

In a nutshell, the guidance advises using the same regimen of algorithms and key sizes that have been recommended for years. Those include 256-bit keys with the Advanced Encryption Standard, Curve P-384 with Elliptic Curve Diffie-Hellman key exchange and Elliptic Curve Digital Signature Algorithm, and 3072-bit keys with RSA encryption. But for those who have not yet incorporated one of the NSA’s publicly recommended cryptographic algorithms—known as Suite B in NSA parlance—last week’s advisory recommends holding off while officials plot a new move to crypto algorithms that will survive a post-quantum world.

“Until this new suite is developed and products are available implementing the quantum resistant suite, we will rely on current algorithms,” officials wrote. “For those partners and vendors that have not yet made the transition to Suite B algorithms, we recommend not making a significant expenditure to do so at this point but instead to prepare for the upcoming quantum resistant algorithm transition.”

https://arstechnica.com/security/2015/08/nsa-preps-quantum-resistant-algorithms-to-head-off-crypto-apocolypse/

Anonymous Coward says:

Re: NSA Recommends against adopting AES (Suite B)

From Wikipedia.org: Bullrun (decryption program)

“Out of all the programs that have been leaked by Snowden, the Bullrun Decryption Program is by far the most expensive. Snowden claims that since 2011, expenses devoted to Bullrun amount to $800 million. The leaked documents reveal that Bullrun seeks to “defeat the encryption used in specific network communication technologies”.[6]”

So, the government does not use or even recommend publicly disclosed algorithms for very sensitive information (like their own). And they are spending (likely) billions to break publicly known standards.

Doesn’t this argue pretty firmly that open standards are weaker than closed ones, even according to the NSA? I know there are “many papers” that multiple posters have referred to, but how about considering the facts of the matter?

Closed source encryption is better, right? This is a “by demonstration” example by the US Government. The real question is “can you afford it”, which they obviously can.

Anonymous Coward says:

Re: Re: NSA Recommends against adopting AES (Suite B)

I would just add to this that the “academic community” is not what it used to be. Big picture, almost all their money is now coming from the federal government in the form of student loans. On many campuses, students still can not voice any opinions counter to the PREVIOUS federal administration. Wow. Their own police on their own campuses will not defend free speech. Why do you think you are hearing an honest story if it comes from “academia”? You might be thinking of how it was years ago, when everyone was free to speak, especially on college campuses. It is not that way anymore. Having the “academic community” certify your encryption scheme might not be the endorsement it used to be.

Anonymous Coward says:

Is this a well known argument against public encryption standards?

Again, from a practical and not theoretical perspective, what about this: If you use a public encryption standard based on open source, and there IS (ever) a way to crack the encryption, then any data that you have ever moved over the public network can be examined in an automated way, that is, by a machine and not a human. If you do not use a public standard, there is no automated way to attack your encrypted data, it requires a human with at least some education in cryptography. So, private (closed source) encryption inoculates you from any realistic non-human attack. Does that follow, or is there an obvious argument against it? (cryptography is tricky stuff)

Anonymous Coward says:

Re: Is this a well known argument against public encryption standards?

Any algorithm in use to encrypt data on the public network is known. It does not matter whether it is proprietary or not, as both executable files and integrated circuits can be reverse engineered. The only way to keep the details of an encryption scheme secret is to maintain perfect physical security over all hardware used in the system.

Making obscurity part of a cryptographic system’s security is a fool’s errand, as sooner or later attackers will get hold of the details of how the system works. Just look at how quickly DRM is broken, and it relies on obscurity as part of its security measures.

Anonymous Coward says:

Wait – not every algorithm is known, right? For example, my algorithm (I just invented) is NOT known, and it may well include varying the Galois Field definitions as part of the key, as I outline above (or not). And, it can be used on the public network, anytime. Maybe it COULD be known IFF a HUMAN took the time to analyze it. That was the distinction I was trying to make. A hidden encryption scheme (even a simple one, like using AES and inverting the bits, or half the bits, for example – easy to do and easy to undo) requires a HUMAN to analyze it. Humans with encryption knowledge are (actually) a scarce resource, and a key weakness of all attackers (even rich ones). Giving them the full definition of the encryption, and withholding only the key, strikes me as a HUGE vulnerability, since ENORMOUS resources are going to go into cracking public standards. Private encryption = harder to crack, more variables to consider, cannot be automated (realistically), needs a scarce resource (educated human). That’s my point.

Anonymous Coward says:

Re: Re:

Another way to put it is you get what you pay for.

Actual value of free (open) encryption software < paid (closed) encryption software

This is demonstrated publicly by the US Government (NSA Suite A cryptography vs. NSA Suite B cryptography). They keep the important one secret. I think your argument is that the government is a fool. I dunno, maybe not.

You get what you pay for. Usually more true than not. 🙂

Anonymous Coward says:

Re: Re: Re:

Maybe another angle would be more illustrative: There are two kinds of crypto systems – open standard, open source, where “one size fits all” in terms of cracks, and non-open standard, non-open source, each with a unique approach. Note that the latter is “non-open” only to the attacker; the user may well have the source, and be able to verify whatever he likes.

Type A crypto-system (osos) is subject to automated attacks. That is, if the underlying crypto is ever cracked, everyone who used it is exposed, and everything they ever transferred over the public network may be revealed.

Type B crypto-system (nosnos) requires a crypto-analyst to focus on this one code, which he would never do unless paid a lot of money. There is no public cred for breaking a private crypto-system. Money is the only way to do it.

Type A: free to use, big financing in place to break it.
Type B: pay to use, no financing in place to break it.

Pay now or pay later.

Anonymous Coward says:

Re: Re: Re: Re:

There’s another way to look at it:

Type A (OSS) – you know for a fact someone, somewhere is trying to break it.

Type B (Proprietary) – you have no idea whether someone is trying to break it.

Proprietary “crypto systems” are only useful for internal messages (think company e-mail). As soon as you use it for public-facing applications it’s like issuing hackers a challenge invitation.

Anonymous Coward says:

Re: Re: Re: Re:

I repeat:

1) An individual or small group of people can only know a limited amount of advanced mathematics, and so are unable to get close to saying there are no known attack vectors.

2) Closed source, or even chip implementations of crypto, can be reverse engineered, and so attackers know how the system works.

Also note that it is all too easy to design a crypto system where message analysis alone will enable it to be broken.

Obscurity is a bad approach to cryptography, as the assessment of the system is based on the limited knowledge of one or two people, and they just cannot know enough to carry out that assessment. Also, the way the system works cannot be kept secret when an implementation is made available to the public.

The failure of obscurity is made abundantly clear by the failure of any DRM system to hold up for more than a few months of attacks by amateurs.

Anonymous Coward says:

Re: Re: Re:2 Re:

And I repeat – a simple modification to a public system (AES, for example) is enough to keep all the benefits and avoid the major risk: if the underlying system is cracked (or a back door is discovered) it will take additional effort to decode your private data. That is, there is no automated attack on a private system (a human expert is required), but there is an automated attack on a public system (once cracked, no human required).

Open Source is Free Crypto, but for a Limited Time. There is little doubt it will be cracked; the real question is when, and whether that date is in the future or the past.

Anonymous Coward says:

It will be interesting if the anti-encryption narratives advanced by Sens. Feinstein and Burr (in particular — although others equally sympathetic) continue now that senators can officially begin using an encrypted messaging system for their own communications.

Of course they will. After all, the unwashed masses don’t deserve privacy. 🙂

Anonymous Coward says:

I’ve got a funny feeling they’ll wind up using some modified version, with a back-door installed (or at least say they are), to be followed by deliberate news stories like: “See, we’re using a system with a back-door in place, and it’s not doing us any harm… in fact, we’re doing just great, so now you have no excuse but to use something similar, and seeing as that is the case, here is the legislation mandating that you must – again, because we are doing just fine (plus, if we have to, **you** have to).”
