Discussion:
Prime example of a can of worms
Kurt Seifried
2015-10-19 04:06:13 UTC
So in light of:

https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf

and

https://www.eff.org/deeplinks/2015/10/how-to-protect-yourself-from-nsa-attacks-1024-bit-DH

I would suggest we minimally have a conversation about DH prime security
(e.g. using larger 2048-bit primes, and/or a better mix of primes to make
pre-computation attacks harder). Generating good primes is not easy, from
what I've seen of several discussions; my fear is that people will try to
fix this by finding new primes that turn out to be problematic.

Secondly, I would also suggest we seriously look at assigning a CVE to the
use of suspected-compromised DH primes. Despite the fact that we don't have
conclusive direct evidence (that I'm aware of; correct me if there is any)
I think in this case:

1) the attack is computationally feasible for an organization with
sufficient funding
2) the benefit of such an attack far, far, FAR outweighs the cost for
certain orgs, from the paper:

A small
number of fixed or standardized groups are used by millions
of servers; performing precomputation for a single 1024-bit
group would allow passive eavesdropping on 18% of popular
HTTPS sites, and a second group would allow decryption
of traffic to 66% of IPsec VPNs and 26% of SSH servers.


--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: ***@redhat.com
Alex Gaynor
2015-10-19 04:24:40 UTC
I think we can have a far simpler rule: use of DH at <= 1024 bits gets a
CVE, the same way 512-bit RSA or DES would.

Alex
Post by Kurt Seifried
https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf
and
https://www.eff.org/deeplinks/2015/10/how-to-protect-yourself-from-nsa-attacks-1024-bit-DH
I would suggest we minimally have a conversation about DH prime security
(e.g. using larger 2048-bit primes, and/or a better mix of primes to make
pre-computation attacks harder). Generating good primes is not easy, from
what I've seen of several discussions; my fear is that people will try to
fix this by finding new primes that turn out to be problematic.
Secondly I would also suggest we seriously look at assigning a CVE to the
use of suspected compromised DH primes. Despite the fact we don't have
conclusive direct evidence (that I'm aware of, correct me if there is any
1) the attack is computationally feasible for an organization with
sufficient funding
2) the benefit of such an attack far, far, FAR outweighs the cost for
A small
number of fixed or standardized groups are used by millions
of servers; performing precomputation for a single 1024-bit
group would allow passive eavesdropping on 18% of popular
HTTPS sites, and a second group would allow decryption
of traffic to 66% of IPsec VPNs and 26% of SSH servers.
--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
--
"I disapprove of what you say, but I will defend to the death your right to
say it." -- Evelyn Beatrice Hall (summarizing Voltaire)
"The people's good is the highest law." -- Cicero
GPG Key fingerprint: 125F 5C67 DFE9 4084
Matt U
2015-10-19 04:30:17 UTC
unsubscribe
Seth Arnold
2015-10-19 19:34:11 UTC
Post by Alex Gaynor
I think we can have a far simpler rule: use of DH at <= 1024 bits gets a
CVE, the same way 512-bit RSA, or DES would.
Should there be any middle ground for how much use a specific value gets?
Part of the weakdh gift is the recognition that a randomly generated 1024-bit
prime might be fine for one router or website to use, but is terrible when
used by millions and might repay the cost to crack it.

Do we allow 1024-bit dhparams when they are randomly generated? Or do we
also want to move these to e.g. 2048 out of abundance of caution?

(I don't share Kurt's pessimism on generating DH primes, though that does
come with the caveat that they should only be generated on systems that
have been running long enough to collect enough entropy for random number
generation to work well.)
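On Linux one can at least glance at the kernel's entropy estimate before generating long-lived parameters (a rough indicator only, and the path below is Linux-specific; on recent kernels what really matters is that the CSPRNG has been seeded at all):

```shell
# Rough indicator only: the kernel's current entropy estimate, in bits.
# A freshly booted embedded box may report a low number here; a
# long-running server should not.
cat /proc/sys/kernel/random/entropy_avail
```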

Thanks
Kurt Seifried
2015-10-19 20:49:55 UTC
Post by Seth Arnold
Should there be any middle ground for how much use a specific value gets?
Part of the weakdh gift is the recognition that a randomly generated 1024-bit
prime might be fine for one router or website to use, but is terrible when
used by millions and might repay the cost to crack it.
I would say we apply rules similar to those for encryption generally, e.g.
strong crypto for file encryption, while weaker crypto may be OK for e.g.
session data. Where those dials get set is currently anyone's guess (since
we have no real public data to support decisions strongly).
Post by Seth Arnold
Do we allow 1024-bit dhparams when they are randomly generated? Or do we
also want to move these to e.g. 2048 out of abundance of caution?
(I don't share Kurt's pessimism on generating DH primes, though that does
come with the caveat that they should only be generated on systems that
have been running long enough to collect enough entropy for random number
generation to work well.)
Thanks
It's not as easy as that. Assuming people follow
http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf and get it
correct (and optionally certified), then yes, it is "not hard", but one
thing continues to worry me:

We have AFAIK no good test suites to ensure random numbers/primes are
cryptographically secure.

If we did we wouldn't have issues like CVE-2008-0166.


--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: ***@redhat.com
Tim
2015-10-19 22:40:58 UTC
Post by Kurt Seifried
We have AFAIK no good test suites to ensure random numbers/primes are
cryptographically secure.
If we did we wouldn't have issues like CVE-2008-0166.
Actually, we might have this now. See:
http://www.cryptol.net/

These guys put on a very short training at BSidesPDX this last weekend
and it seems like it could be exactly what you're looking for. No,
not to solve all the DH trouble, but it can make sure an
implementation matches a specification. Of course you have to have a
specification. But once you do, it can verify binaries' behavior.

tim
Daniel Kahn Gillmor
2015-10-19 21:40:14 UTC
Post by Seth Arnold
Should there be any middle ground for how much use a specific value gets?
Part of the weakdh gift is the recognition that a randomly generated 1024-bit
prime might be fine for one router or website to use, but is terrible when
used by millions and might repay the cost to crack it.
Do we allow 1024-bit dhparams when they are randomly generated? Or do we
also want to move these to e.g. 2048 out of abundance of caution?
we don't just want 1024-bit primes; we want 1024-bit safe primes (p =
2q+1, where both p and q are prime), because their structure makes it
easy for both peers to avoid a small subgroup attack.

safe primes are expensive to generate (for the party who selects the
prime), and they are expensive to verify (on the side of the party who
accepts the prime). These are difficult risks for peers to assess.

For the peer who selects the group: how often should you re-generate a
new prime? Should you share a prime with other parties in the same
position as you, or should you choose your own? If you share a prime
with other parties, how many parties are OK? how much traffic is safe
to pass under the same group?

For the peer who is offered the group, should you verify that the
modulus you receive is a safe prime? Should you do a complete, rigorous
proof or would a pair of probabilistic miller-rabin tests be ok? You
could "cheat" and ignore the check and things will "just work" most of
the time, and save your users some battery life. How should you reject
an offered prime if you find it doesn't have the expected structure
(non-safe primes could still produce valid groups, even though safe
primes are easier to inspect for)? What if you think you've seen this
prime before from too many other peers? Should you reject it then?
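To make the "pair of probabilistic miller-rabin tests" option concrete, here is a sketch (my illustration, not code from any implementation discussed here) of what an accepting peer might run against an offered modulus:

```python
import random

def miller_rabin(n, rounds=2):
    """Probabilistic primality test; a composite survives one round
    with probability at most 1/4, so error <= 4**-rounds."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if n % small == 0:
            return n == small
    # write n - 1 as d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # definitely composite
    return True  # probably prime

def accept_group(p):
    """Structural check on an offered modulus: p and (p-1)/2 both prime,
    i.e. p is (probably) a safe prime."""
    return p % 2 == 1 and miller_rabin(p) and miller_rabin((p - 1) // 2)

print(accept_group(23))  # True: 23 = 2*11 + 1, both prime
print(accept_group(29))  # False: 29 is prime but (29-1)/2 = 14 is not
```

Two rounds each on p and (p-1)/2 keeps the check cheap; an implementation worried about adversarially constructed pseudoprimes would want more rounds, or deterministic witness sets.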

All of these questions are really fuzzy and hard to give good guidance,
and hard to know as an implementor that you're doing the right thing.

On the flip side, saying "use only strong (>=2048bit today in 2015?),
well-known, well-structured, publicly-vetted groups" is very simple
guidance: clear and easy to follow.

A move to well-known, large safe primes seems simpler/saner than trying
to work with an environment where peers are generating new primes which
may or may not be well-formed. (similarly, we're converging on a world
where there are a few trusted, well-vetted, well-optimized DH groups for
elliptic curve DH, because encouraging arbitrary ECDH groups ends up
being sketchier for everyone)

--dkg
Kurt Seifried
2015-10-20 04:16:19 UTC
Post by Daniel Kahn Gillmor
On the flip side, saying "use only strong (>=2048bit today in 2015?),
well-known, well-structured, publicly-vetted groups" is very simple
guidance: clear and easy to follow.
A move to well-known, large safe primes seems simpler/saner than trying
to work with an environment where peers are generating new primes which
may or may not be well-formed. (similarly, we're converging on a world
where there are a few trusted, well-vetted, well-optimized DH groups for
elliptic curve DH, because encouraging arbitrary ECDH groups ends up
being sketchier for everyone)
--dkg
So it occurs to me that we have no corpus of data on Diffie-Hellman primes.
With this in mind I would like to create one. The OpenSSL command line can
easily create them, using either the 2 (default) or 5 generator (explained
at
http://security.stackexchange.com/questions/54359/what-is-the-difference-between-diffie-hellman-generator-2-and-5
)

For example the following code:

#!/bin/bash
# Generate 100 separate 2048-bit DH parameter sets, one file per iteration.
# Note: openssl expects options before the bit size, i.e. "-text 2048".
for i in $(seq 1 100); do
    openssl dhparam -text 2048 >> "$i"
done

will generate 100 2048-bit primes. If you could commit the files to the
following github repo:

https://github.com/RedHatProductSecurity/Diffie-Hellman-Primes/

simply create a directory in the root with your name/whatever you want to
call it (nothing rude please), with a "2048" directory for the 2048-bit
primes and a "4096" directory for the 4096-bit primes, I would appreciate
it. If you use a tool other than the OpenSSL command line to generate the
primes, please make a note of it (especially any command-line options used)
in a .txt file in the root of your data directory. My goal is to collect a
few million primes of each size so we have some real data to work with.




--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: ***@redhat.com
Daniel Kahn Gillmor
2015-10-20 05:37:04 UTC
Post by Kurt Seifried
So it occurs to me that we have no corpus of data on Diffie-Hellman primes.
With this in mind I would like to create one. The OpenSSL command line can
easily create them, using either the 2 (default) or 5 generator (explained
at
http://security.stackexchange.com/questions/54359/what-is-the-difference-between-diffie-hellman-generator-2-and-5
)
#!/bin/bash
# Generate 100 separate 2048-bit DH parameter sets, one file per iteration.
for i in $(seq 1 100); do
    openssl dhparam -text 2048 >> "$i"
done
will generate 100 2048 bit primes. If you can ideally simply commit the
https://github.com/RedHatProductSecurity/Diffie-Hellman-Primes/
simply create a directory in the root with your name/whatever you want to
call it (nothing rude please) and have a "2048" directory for the 2048 bit
primes and a "4096" directory for the 4096 bit primes I would appreciate
it. If you use a tool other than OpenSSL command line to generate the
primes please make a note of it (especially any command line options used)
in a .txt file in the root of your data directory. My goal is to collect a
few million primes of each size so we have some real data to work with.
What's the goal of this proposed corpus? What sort of experiments are
you imagining running?

--dkg
Brad Knowles
2015-10-20 05:27:44 UTC
Post by Kurt Seifried
#!/bin/bash
# Generate 100 separate 2048-bit DH parameter sets, one file per iteration.
for i in $(seq 1 100); do
    openssl dhparam -text 2048 >> "$i"
done
will generate 100 2048 bit primes. If you can ideally simply commit the
https://github.com/RedHatProductSecurity/Diffie-Hellman-Primes/
PR filed to update code to generate 4096-bit primes as well.

I’m wondering if we might be able to take advantage of a larger-scale effort here, using something akin to the @Home distributed-computing model: maybe generating large numbers of primes with a custom public AMI and some CloudFormation scripts?

--
Brad Knowles <***@shub-internet.org>
LinkedIn Profile: <http://tinyurl.com/y8kpxu>
Kurt Seifried
2015-10-20 16:22:40 UTC
So some new questions arise:

1) In openssl, does the -2/-5 option matter with respect to security? I read
http://security.stackexchange.com/questions/54359/what-is-the-difference-between-diffie-hellman-generator-2-and-5
and some other things, and I have no idea whether there is a real impact on
security. I bet the other tools have similar switches that very few people
understand, in terms of what they actually do and whether they impact
security meaningfully.

2) OpenSSL/GnuTLS (and likely others) all apparently have slight variations
in how they generate/test primes. E.g.
http://nmav.gnutls.org/2011/12/generating-diffie-hellman-parameters.html
This worries me; diversity is good, but only if implemented correctly. Do
any best practices actually exist?

3) In testing for primality, how sure are we? Reading
https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test and so on,
these tests all report "probably prime", but I can't find any data showing,
e.g. for a given set of large candidates tested with the various traditional
primality methods and then exhaustively verified prime/not prime, what
percentage failed.
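For reference (numbers mine, not from the thread): for a composite input, a single Miller-Rabin round accepts with probability at most 1/4, so t independent rounds give a worst-case error of at most 4^-t; for randomly chosen candidates, as opposed to adversarially constructed ones, the average-case error is far smaller still (the Damgard-Landrock-Pomerance bounds cover that case):

```python
# Worst-case bound on Miller-Rabin accepting a composite after t
# independent random-base rounds: 4**-t.  (Average-case bounds over
# random candidates are much tighter than this.)
for t in (1, 5, 40, 64):
    print(t, 4.0 ** -t)
```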


--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: ***@redhat.com
g***@gremlin.ru
2015-10-20 17:26:55 UTC
Post by Kurt Seifried
1) in openssl does the -2/-5 option matter with respect to
security?
Actually, no: it's just a "generator", so it can be almost any small
prime number - say, 3 or 7 or whatever. It only needs to be co-prime
to the group modulus.

However, the value 2 is the default in OpenSSL, so there may be some
room for experiments with the birthday paradox... especially when the
modulus is small.
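A tiny worked example (mine, not gremlin's) of why the generator value barely matters in a safe-prime group: modulo the safe prime 23 = 2*11 + 1, every small candidate generator lands in a subgroup of order 11 or 22, never in a dangerously small one:

```python
def order(g, p):
    """Multiplicative order of g modulo p (brute force; fine for tiny p)."""
    x, k = g % p, 1
    while x != 1:
        x = (x * g) % p
        k += 1
    return k

p = 23  # safe prime: 23 = 2*11 + 1
for g in (2, 3, 5, 7):
    print(g, order(g, p))
# 2 and 3 (quadratic residues) have order 11; 5 and 7 have order 22.
```

For a safe prime p = 2q + 1 the only possible element orders are 1, 2, q, and 2q, so any generator other than 1 and p - 1 gives a large subgroup; the -2/-5 choice is a compatibility knob, not a security one.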
Post by Kurt Seifried
2) Openssl/gnutls (and likely others) all apparently have
slight variations on how they generate/test primes [...]
this worries me, diversity is good, but if not implemented
correctly. Do any best practices actually exist?
All implementations I know of simply use the randomized algorithms
with Miller-Rabin primality test.
Post by Kurt Seifried
3) in testing for primeness how sure are we? Reading
[wikipedia: "Miller-Rabin primality test"]
Post by Kurt Seifried
and so on these tests are all "probably prime" but I can't find
any data to show that e.g. given this set of large primes, tested
against the various traditional primality methods, and then brute
forced to confirm they are prime/not prime, what % failed?
There's the Agrawal-Kayal-Saxena primality test, but I'm unaware of
any attempts to use it for checking the prime candidates which passed
the Miller-Rabin primality test.
--
Alexey V. Vissarionov aka Gremlin from Kremlin <gremlin ПРИ gremlin ТЧК ru>
GPG: 8832FE9FA791F7968AC96E4E909DAC45EF3B1FA8 @ hkp://keys.gnupg.net
Matthias Weckbecker
2015-10-21 15:01:13 UTC
On Mon, 19 Oct 2015 17:40:14 -0400
Daniel Kahn Gillmor <***@fifthhorseman.net> wrote:
[...]
Post by Daniel Kahn Gillmor
On the flip side, saying "use only strong (>=2048bit today in 2015?),
well-known, well-structured, publicly-vetted groups" is very simple
guidance: clear and easy to follow.
Interestingly, I noticed OpenSSH bumped their 'DH_GRP_MIN' to 2048 bits
just a few days ago to account for precomputation attacks:

http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/dh.h.diff?r1=1.13&r2=1.14

RFC4419 seems to recommend 1024 bit minimum, but the document appears
to be from 2006.

[...]
Post by Daniel Kahn Gillmor
--dkg
Matthias
Kurt Seifried
2015-10-22 04:27:33 UTC
On Wed, Oct 21, 2015 at 9:01 AM, Matthias Weckbecker <
Post by Matthias Weckbecker
On Mon, 19 Oct 2015 17:40:14 -0400
[...]
Post by Daniel Kahn Gillmor
On the flip side, saying "use only strong (>=2048bit today in 2015?),
well-known, well-structured, publicly-vetted groups" is very simple
guidance: clear and easy to follow.
Interestingly I noticed OpenSSH bumped their 'DH_GRP_MIN' to 2048 bit
http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/dh.h.diff?r1=1.13&r2=1.14
RFC4419 seems to recommend 1024 bit minimum, but the document appears
to be from 2006.
[...]
Post by Daniel Kahn Gillmor
--dkg
Matthias
So one of my initial thoughts was "easy, just have systems generate some
primes; even if some systems have bad primes, we'd have 10 billion unique
primes in use - good luck brute-forcing that!", but then I realized I had
no idea how long generation takes. I generated 1000 2048-bit primes on a
~3 GHz AMD system (6 cores, one dedicated to running the openssl prime
search); they took anywhere from <1 second to just over 10 minutes:

50% within 58 seconds
77% within 2 minutes
89% within 3 minutes
94% within 4 minutes
97.3% within 5 minutes
and the last at 10 minutes 9 seconds

The overall average was 80.9 seconds, with a good 6% taking 4-10 minutes. I
can't even begin to think how slow this would be on hardware-limited
systems like $20 routers and whatnot (in theory you could have systems
taking tens of minutes), which would not be popular with consumers (turn
the unit on and wait anywhere from 0 seconds to an hour or so for the web
interface to come up!).
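A plausible explanation for that spread (my back-of-envelope, not measured from openssl internals): dhparam hunts for a safe prime by testing random candidates until one works, so the number of candidates, and hence the runtime, is roughly geometrically distributed, a distribution whose standard deviation is about as large as its mean and whose tail decays only exponentially:

```python
import math

# By the prime number theorem, a random 2048-bit odd number is prime
# with probability about 2/ln(2**2048); a *safe* prime needs p and
# (p-1)/2 prime simultaneously, so successes are rare Bernoulli events
# and the trial count is geometric.
ln_n = 2048 * math.log(2)
print(f"ln(2**2048) ~ {ln_n:.0f}")

# For a geometric distribution with mean m, P(T > k*m) ~ e**-k, so
# roughly 5% of runs exceed 3x the mean -- in the same ballpark as
# Kurt's "a good 6% taking 4-10 minutes" against an 80.9 s average.
for k in (1, 2, 3, 4):
    print(k, math.exp(-k))
```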

With this data in mind, I think we need to generally encourage everyone to
move to a minimum of 2048-bit primes (which should last a few more years,
assuming quantum computers don't suddenly make discrete logarithms easy)
and establish some safe methods of creating them. Much as with generating
CA keys, we need to ensure the systems/software in use are correct, that
the entropy is available (and not manipulated), and so on. Ideally we'd
like to see people using different primes (e.g. hardware manufacturers not
using the same primes as everyone else), and where possible, people needing
more security (e.g. a VPN hosting provider) should generate their own keys
securely.


--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: ***@redhat.com
Joshua Rogers
2015-10-22 04:45:03 UTC
Post by Kurt Seifried
Ideally we'd like
to see people using different primes (e.g. hardware manufacturers not using
the same primes as everyone else) and where possible people needing more
security (e.g. a VPN hosting provider) should generate their own keys
securely.
Could it be possible to generate a new prime in the background and, when
it has been generated, use that one on the next reboot? And if there has
not been enough time for the new prime to be generated, fall back to the
old one?
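A minimal sketch of that idea (mine; the directory layout and service arrangement are hypothetical): generate the new parameters off to the side and install them with an atomic rename, so an interrupted run leaves the previous, or vendor-default, file untouched:

```shell
# Sketch of background regeneration with fallback (hypothetical layout).
# Until regen_dhparams finishes, the service keeps using whatever
# dhparams.pem already holds (e.g. a vendor default shipped in firmware).
regen_dhparams() {
    bits="$1"; dir="$2"
    new="$dir/dhparams.pem.new"
    # Slow step: may take minutes on low-end hardware, so callers would
    # run it in the background, e.g. "regen_dhparams 2048 /etc/myservice &"
    openssl dhparam -out "$new" "$bits" 2>/dev/null || return 1
    # Atomic rename: readers never observe a half-written file.
    mv "$new" "$dir/dhparams.pem"
}
```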

I agree that manufacturers should be using a different prime per batch of
products, at least.


Thanks,
--
-- Joshua Rogers <https://internot.info/>
Kurt Seifried
2015-10-22 05:09:16 UTC
Post by Joshua Rogers
Post by Kurt Seifried
Ideally we'd like
to see people using different primes (e.g. hardware manufacturers not using
the same primes as everyone else) and where possible people needing more
security (e.g. a VPN hosting provider) should generate their own keys
securely.
Could it be possible to generate a new prime in the background, and when
it has been generated, on the next reboot use that one instead? And if
there is not enough time for the new prime to be generated, it falls
back to the old one?
I agree that manufacturers should be using a different prime per, at
least, batch of products.
My fear would be device makers getting it horribly wrong on the devices in
question. E.g.:

http://www.theregister.co.uk/2015/10/21/german_govt_mulls_security_tests_of_sohopeless_routers/

I suspect having a large pool of known-good primes would be easier for them
to use. Sadly we can't let the perfect be the enemy of the good, or in this
case of the "not completely terrible".
Post by Joshua Rogers
Thanks,
--
-- Joshua Rogers <https://internot.info/>
--
--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: ***@redhat.com
Florent Daigniere
2015-10-22 08:01:33 UTC
Post by Kurt Seifried
Post by Joshua Rogers
Post by Kurt Seifried
Ideally we'd like
to see people using different primes (e.g. hardware manufacturers not using
the same primes as everyone else) and where possible people needing more
security (e.g. a VPN hosting provider) should generate their own keys
securely.
Could it be possible to generate a new prime in the background, and when
it has been generated, on the next reboot use that one instead? And if
there is not enough time for the new prime to be generated, it falls
back to the old one?
I agree that manufacturers should be using a different prime per, at
least, batch of products.
My fear would be device makers getting it horribly wrong on the devices in
http://www.theregister.co.uk/2015/10/21/german_govt_mulls_security_tests_of_sohopeless_routers/
Having a large pool of known good primes would be easier for them to use I
suspect. Sadly we can't let perfect be the enemy of the good, or in this
case the "not completely terrible".
I still don't get why people are pushing for "non-standard" groups.
What you need is a good security margin...

No one should be using 1024-bit DH groups anymore, and 2048-bit groups
should have disappeared *before* ~2020

http://www.keylength.com/en/3/

If we want PFS to work in practice we need "auditable" deployments...
and that won't be possible with custom DH groups (verifying the
security/suitability of a group is not straightforward, as the rest of
the thread has pointed out).

Really, what are we after here? 
- Preventing pre-computation? Pick a larger group.
- Avoiding "massive" problems in case the standardized groups do turn
out to be unsuitable (sub-groups, ...)?
- Something else?


Florent
Daniel Kahn Gillmor
2015-10-22 22:55:06 UTC
Post by Kurt Seifried
I suspect having a large pool of known-good primes would be easier for them
to use. Sadly we can't let the perfect be the enemy of the good, or in this
case of the "not completely terrible".
a large pool of known-good primes doesn't help so much, particularly for
the embedded case -- peers that are offered a group need to be able to
easily verify that the group is strong. embedded devices simply aren't
going to carry around a large list of well-vetted primes of short
length, but we could *maybe* convince them to carry around a shorter
list of well-vetted strong primes.

I'd rather see us increase the security margin for a set of well-vetted
standard groups than ask people to make implementations that can't
determine whether they're in a reasonable group or not.

--dkg
Kurt Seifried
2015-10-22 23:37:49 UTC
Post by Daniel Kahn Gillmor
Post by Kurt Seifried
Having a large pool of known good primes would be easier for them to use I
suspect. Sadly we can't let perfect be the enemy of the good, or in this
case the "not completely terrible".
a large pool of known-good primes doesn't help so much, particularly for
the embedded case -- peers that are offered a group need to be able to
easily verify that the group is strong. embedded devices simply aren't
going to carry around a large list of well-vetted primes of short
length, but we could *maybe* convince them to carry around a shorter
list of well-vetted strong primes.
I'd rather see us increase the security margin for a set of well-vetted
standard groups than ask people to make implementations that can't
determine whether they're in a reasonable group or not.
--dkg
Sorry, when I said a "large" pool I meant more than the current 5 or so that
seem to be in popular use, but certainly not more than a few hundred.

Basically we're in agreement; I think nothing under 2048 bits should even be
considered, and we'll probably need to bump that up in a few years anyway.

I've also been going through source code to see how people use and treat DH
params, and I have some worrying results (basically what I expected, though:
everything is terrible, as usual).

I'm going to be writing this up as an article rather than a long email as I
have a few more sticky points to raise (security rabbit holes are so much
fun).

--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: ***@redhat.com
Daniel Kahn Gillmor
2015-10-23 03:41:39 UTC
Post by Kurt Seifried
Sorry, when I said a "large" pool I meant more than the current 5 or so that
seem to be in popular use, but certainly not more than a few hundred.
ok, that's a relief :) but, running the numbers, even a thousand
2048-bit groups comes out to a quarter MiB of RAM. (i figure 256 bytes
per prime, with a well-known, shared generator)

Larger groups (or more groups) inflate the size further. I know RAM is
cheap these days, but for embedded devices a quarter meg or more of RAM
is still not insignificant.
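dkg's arithmetic, spelled out (mine; it assumes only the raw moduli are stored and the generator is shared and well known):

```python
# Raw storage for finite-field DH moduli, one well-known generator.
per_prime = 2048 // 8                  # 256 bytes per 2048-bit modulus
print(per_prime * 1000)                # ~a quarter MiB for a thousand groups
print(per_prime * 100)                 # 25600 bytes for a hundred groups
```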
Post by Kurt Seifried
Basically we're in agreement, I think nothing under 2048 should even be
considered, and we probably need to bump that up in a few years anyways.
yep, agreed.
Post by Kurt Seifried
I've also been going through source code to see how people use dh
params/treat them, and I have some worrying results (basically what I
expected though, everything is terrible as usual)
:/
Post by Kurt Seifried
I'm going to be writing this up as an article rather than a long email as I
have a few more sticky points to raise (security rabbit holes are so much
fun).
I look forward to reading it.

--dkg
g***@gremlin.ru
2015-10-23 14:56:31 UTC
I can't even begin to think how slow this would be on hardware
limited systems like $20 routers and whatnot (in theory you could
have systems taking tens of minutes), which would not be popular
with consumers (turn the unit on and wait from 0 seconds to an
hour or so for the web interface to come up!).
Normally, all those $20 (or even $10) routers don't need to generate
keys at first start - they are configured via plain HTTP, given that the
user's PC is connected to a "LAN" port with a cable. Only when the user
activates outside access (via VPN or SSH) do the keys need to be
generated - possibly over several hours, like the 17 hours the
`openssl dhparam -5 8192` command took on my notebook :-)
With this data in mind I think we need to generally encourage
everyone to go to a minimum of 2048 bit primes
For my clients, I have enforced the use of 4096 bits for over 5 years.
(which should last a few more years assuming quantum computers
don't suddenly make factorization easy)
That wouldn't happen suddenly. At least I'm not going to worry until
they can factorize some number close to 2^160 - say,
266508845991748914569771929356540352347893240569. And yes, I know
one divisor: it is 4458192223320340849 :-)
and establish some safe methods of creating them, much like
generating CA encryption keys we need to ensure the systems/
software in use are correct, the entropy is available (and
not manipulated) and so on.
Here we come to trusted execution, trusted computation and so on.
Ideally we'd like to see people using different primes (e.g.
hardware manufacturers not using the same primes as everyone
else) and where possible people needing more security (e.g. a
VPN hosting provider) should generate their own keys securely.
Theory is fine. But in practice we see weakened algorithms with
(intentionally?) reduced key size.
--
Alexey V. Vissarionov aka Gremlin from Kremlin <gremlin ПРИ gremlin ТЧК ru>
GPG: 8832FE9FA791F7968AC96E4E909DAC45EF3B1FA8 @ hkp://keys.gnupg.net
Kurt Seifried
2016-01-20 15:45:07 UTC
I finally got the article written and published, it's at:

https://securityblog.redhat.com/2016/01/20/primes-parameters-and-moduli/

TL;DR: I found a lot of messy problems and no really good solutions. But
ultimately we need to start using bigger keys/primes or this is all just a
waste of compute time (might as well go back to clear text).


--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: ***@redhat.com
Daniel Kahn Gillmor
2016-01-20 17:20:09 UTC
Hi Kurt--
Post by Kurt Seifried
https://securityblog.redhat.com/2016/01/20/primes-parameters-and-moduli/
Thanks for this writeup!

the chart at
https://securityblog.redhat.com/wp-content/uploads/2015/12/DH-Param-Compromise-300x269.jpg
uses the terms "keys" in the axis labels, but i think you mean "primes"
or "moduli".
Post by Kurt Seifried
TL;DR: I found a lot of messy problems and no really good solutions. But
ultimately we need to start using bigger keys/primes or this is all just a
waste of compute time (might as well go back to clear text).
yes, larger primes are clearly needed.

The discussion gets a little ways into the issue of negotiating primes
between peers, but doesn't address some underlying issues.

For one, the writeup addresses probabilistic primality tests, but
doesn't describe proofs of primality, which are significantly more
expensive to generate (and still probably more expensive to verify than
a short Miller-Rabin test). But these proofs provide certainty in a way
that probabilistic tests might not. If we're talking about runtime
primality checking when communicating with a potential adversary, are
there proofs about the (im)possibility of generating a pseudoprime that
is more or less likely to pass a miller-rabin test?

Additionally, the fact that the modulus is prime is an insufficient test
-- it needs to be a prime of a certain structure, or else the remote
peer can force the user into a small subgroup, which can lead to
unknown-key-share attacks, key factorization, or other problems.

One approach is to require that moduli be safe primes (p = (q*2) + 1,
where q is also prime) and to verify that the peer's public share k is
in the range 1 < k < p-1 to avoid the small-subgroup attack of size 2.
This appears to be the best we know how to do with diffie hellman over
finite fields, but it limits the range of acceptable moduli even
further, and requires two primality tests for the peer seeing the primes
for the first time.
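The range check in that scheme is trivially cheap next to the two primality tests; as a sketch (mine, with Python integers standing in for a real bignum API):

```python
def check_public_share(k, p):
    """Reject the order-1 and order-2 elements of a safe-prime group.

    For p = 2q + 1 with q prime, every element other than 1 and p - 1
    has order q or 2q, so 1 < k < p - 1 rules out the only small
    subgroups a malicious peer could force us into.
    """
    return 1 < k < p - 1

p = 23  # toy safe prime, 23 = 2*11 + 1
print([k for k in (0, 1, 2, 22, 23) if not check_public_share(k, p)])
# -> [0, 1, 22, 23]
```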

It's also worth noting that we have a similar concern with elliptic
curve DH (ECDH) -- the structure of the curve itself (which is the
equivalent of the generator and the modulus for finite-field diffie
hellman) is relevant to the security of the key exchange.

In the ECDH space, there appears to be little argument about trying to
use a diversity of groups: while many specifications provide ways to use
custom (generically-specified) curves, pretty much no one uses them in
practice, and the custom-curve implementations are likely to be both
inefficient and leaky (to say nothing of the difficulty of verifying
that the offered curve is well-structured at runtime). Indeed, the bulk
of the discussion around ECDH is about picking a small handful of good
curves that we can publicly vet, and then using those specific curves
everywhere (see Curve25519 and Ed448-Goldilocks, the CFRG's upcoming
recommendations).

Encouraging peers to select a diversity of large custom groups for
finite-field DH seems likely to be slow (additional runtime checks, no
optimized implementations), buggy (missing or inadequate runtime checks,
side-channel leakage), and bandwidth-heavy (the moduli themselves must
be transmitted in addition to the public keys); and, as you say, the
diversity of groups doesn't win you as much as just switching to larger
groups in the first place.

I agree that we need machinery in place to be able to relatively easily
drop believed-weak, widely-shared groups, and to introduce new
widely-shared groups. But i'm not convinced that encouraging the use of
a diversity of groups is really the "Best Default/Operational" tradeoff,
as it is indicated in your chart, given the concerns above.

Thanks very much for your analysis.

Regards,

--dkg
Kurt Seifried
2016-01-20 17:25:42 UTC
Permalink
Post by Daniel Kahn Gillmor
Hi Kurt--
Post by Kurt Seifried
https://securityblog.redhat.com/2016/01/20/primes-parameters-and-moduli/
Thanks for this writeup!
the chart at
https://securityblog.redhat.com/wp-content/uploads/2015/12/DH-Param-Compromise-300x269.jpg
uses the terms "keys" in the axis labels, but i think you mean "primes"
or "moduli".
Sorry yes, although this also applies equally to keys/etc.
Post by Daniel Kahn Gillmor
Post by Kurt Seifried
TL;DR: I found a lot of messy problems and no really good solutions. But
ultimately we need to start using bigger keys/primes or this is all just a
waste of compute time (might as well go back to clear text).
yes, larger primes are clearly needed.
The discussion gets a little ways into the issue of negotiating primes
between peers, but doesn't address some underlying issues.
For one, the writeup addresses probabilistic primality tests, but
doesn't describe proofs of primality, which are significantly more
expensive to generate (and still probably more expensive to verify than
a short Miller-Rabin test). But these proofs provide certainty in a way
that probabilistic tests might not. If we're talking about runtime
primality checking when communicating with a potential adversary, are
there proofs about the (im)possibility of generating a pseudoprime that
is more or less likely to pass a miller-rabin test?
I looked at this a bit and quite honestly the computational time involved
is just too much to be useful, unless we're talking about generating a small
set of highly trusted primes. For normal people this just isn't feasible
(witness prime generation taking anywhere from less than a second to more
than 10 minutes; nobody wants to wait 10 minutes...).
Post by Daniel Kahn Gillmor
Additionally, the fact that the modulus is prime is an insufficient test
-- it needs to be a prime of a certain structure, or else the remote
peer can force the user into a small subgroup, which can lead to
unknown-key-share attacks, key factorization, or other problems.
One approach is to require that moduli be safe primes (p = (q*2) + 1,
where q is also prime) and to verify that the peer's public share k is
in the range 1 < k < p-1 to avoid the small-subgroup attack of size 2.
This appears to be the best we know how to do with diffie hellman over
finite fields, but it limits the range of acceptable moduli even
further, and requires two primality tests for the peer seeing the primes
for the first time.
It's also worth noting that we have a similar concern with elliptic
curve DH (ECDH) -- the structure of the curve itself (which is the
equivalent of the generator and the modulus for finite-field diffie
hellman) is relevant to the security of the key exchange.
Yup, that was a lesson learned.
Post by Daniel Kahn Gillmor
In the ECDH space, there appears to be little argument about trying to
use a diversity of groups: while many specifications provide ways to use
custom (generically-specified) curves, pretty much no one uses them in
practice, and the custom-curve implementations are likely to be both
inefficient and leaky (to say nothing of the difficulty of verifying
that the offered curve is well-structured at runtime). Indeed, the bulk
of the discussion around ECDH is about picking a small handful of good
curves that we can publicly vet, and then using those specific curves
everywhere (see curve 25519 and goldilocks 448, the CFRG's upcoming
recommendations).
Encouraging peers to select a diversity of large custom groups for
finite-field DH seems likely to be slow (additional runtime checks, no
optimized implementations), buggy (missing or inadequate runtime checks,
side-channel leakage), and bandwidth-heavy (the moduli themselves must
be transmitted in addition to the public keys), and as you say, the
diversity of groups doesn't win you as much as just switching to larger
groups in the first place.
I agree that we need machinery in place to be able to relatively easily
drop believed-weak, widely-shared groups, and to introduce new
widely-shared groups. But i'm not convinced that encouraging the use of
a diversity of groups is really the "Best Default/Operational" tradeoff,
as it is indicated in your chart, given the concerns above.
Agreed, I listed the diversity more as a stop-gap for the cases where
people have older hardware/software (e.g. Java) that will never support
larger primes/keys. At least then you don't get caught in dragnets for the
default/commonly-used primes.
Post by Daniel Kahn Gillmor
Thanks very much for your analysis.
Regards,
--dkg
--
--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: ***@redhat.com
Daniel Kahn Gillmor
2016-01-20 18:00:15 UTC
Permalink
Post by Kurt Seifried
Sorry yes, although this also applies equally to keys/etc.
sure, though i hope we're not in a "few keys" scenario, that would
definitely be bad :)
Post by Kurt Seifried
[dkg wrote:]
Post by Daniel Kahn Gillmor
For one, the writeup addresses probabilistic primality tests, but
doesn't describe proofs of primality, which are significantly more
expensive to generate (and still probably more expensive to verify than
a short Miller-Rabin test). But these proofs provide certainty in a way
that probabilistic tests might not. If we're talking about runtime
primality checking when communicating with a potential adversary, are
there proofs about the (im)possibility of generating a pseudoprime that
is more or less likely to pass a miller-rabin test?
I looked at this a bit and quite honestly the computational time involved
is just too much to be useful, unless we're talking about generating a small
set of highly trusted primes. For normal people this just isn't feasible
(witness prime generation taking anywhere from less than a second to more
than 10 minutes; nobody wants to wait 10 minutes...).
right, i'm not suggesting that proof generation be done at runtime, just
that it is an example of a stronger guarantee than we have for runtime
checks, and that it *only* applies to the "generating a small set of
highly-trusted primes" case.
Post by Kurt Seifried
Agreed, I listed the diversity more as a stop-gap for the cases where
people have older hard/software (e.g. Java) that will never support larger
primes/keys. At least then you don't get caught in dragnets for the
default/commonly used primes.
I agree with this analysis, but the chart in the middle of your paper
makes it look like the diversity is "best", while the "small set of
heavily-evaluated primes" (i'm assuming that's what's meant by the
"few keys" side of the X axis) is merely "good".

--dkg
Kurt Seifried
2016-01-20 18:07:19 UTC
Permalink
Post by Daniel Kahn Gillmor
Post by Kurt Seifried
Sorry yes, although this also applies equally to keys/etc.
sure, though i hope we're not in a "few keys" scenario, that would
definitely be bad :)
Yes it would be bad:

https://blog.shodan.io/duplicate-ssh-keys-everywhere/

There was another analysis with even more worrying numbers but I can't find
it.


--
Kurt Seifried -- Red Hat -- Product Security -- Cloud
PGP A90B F995 7350 148F 66BF 7554 160D 4553 5E26 7993
Red Hat Product Security contact: ***@redhat.com
Hanno Böck
2016-01-20 18:12:37 UTC
Permalink
On Wed, 20 Jan 2016 11:07:19 -0700
Post by Kurt Seifried
https://blog.shodan.io/duplicate-ssh-keys-everywhere/
There was another analysis with even more worrying numbers but I
can't find it.
Not sure if that's what you meant, but may be:
http://blog.sec-consult.com/2015/11/house-of-keys-industry-wide-https.html

The more worrying part of that one is that they have not only found
these in the wild, they also extracted the private keys from publicly
available firmware images (and afaik plan to publish them).
--
Hanno Böck
http://hboeck.de/

mail/jabber: ***@hboeck.de
GPG: BBB51E42
g***@gremlin.ru
2016-01-21 01:05:07 UTC
Permalink
Post by Kurt Seifried
https://securityblog.redhat.com/2016/01/20/primes-parameters-and-moduli/
I think the best plan for dealing with this in the short term
is deploying larger primes (2048 bits minimum, ideally 4096
bits) right now wherever possible.
4096 bit keys seem to be the absolute minimum, and personally I've
already moved to 8192 bit keys.

Here are some numbers:

`openssl dhparam -2 4096` took 1:53:29 to generate (HH:MM:SS);
`openssl dhparam -5 4096` took 1:43:44;
`openssl dhparam -2 8192` took 25:51:34;
`openssl dhparam -5 8192` took 16:51:47.
Post by Kurt Seifried
Why not huge primes?
Why not simply use really large primes? Because computation
is expensive, battery life matters more than ever and latency
will become problems that users will not tolerate.
Any and all cryptographic transforms must be expensive - that means
at least time and electric power. Every single bit requires at least
two transistors (physical areas on the chip) just to store it, and
many more to process it, and each of those transistors consumes at
least hundreds of pA, so the cryptoprocessors (which are already used
for brute-force attacks) would be much more power-consuming.

That said, the attackers would need to build yet another power
station to get more gigawatts for their key-breaking datacenters
and, as all this power would finally become heat, such a facility
should be built at least at the Taimyr or Melville peninsula - both
are continental (for laying cables) and just cold enough :-)

Also, there are elliptic-curve-based algorithms, but they come with
one strong caveat: although the computations are more complex, that
must not become a reason to reduce the key size.
Post by Kurt Seifried
Additionally the computation time and effort needed to find huge
primes (say 16k) is difficult at best for many users and not
possible for many (anyone using a system on a chip for example).
That would require a really good hardware RNG. For now, I have an
experimental USB device (based on an ATtiny85 and an LM393) for such
purposes, but most SoC systems lack one (even though adding one would
be simple and inexpensive: a dual op-amp and one GPIO pin).
--
Alexey V. Vissarionov aka Gremlin from Kremlin
GPG: 8832FE9FA791F7968AC96E4E909DAC45EF3B1FA8
Florent Daigniere
2016-01-21 10:43:45 UTC
Permalink
 > https://securityblog.redhat.com/2016/01/20/primes-parameters-and-moduli/
 > I think the best plan for dealing with this in the short term
 > is deploying larger primes (2048 bits minimum, ideally 4096
 > bits) right now wherever possible.
4096 bit keys seem to be the absolute minimum, and personally I've
already moved to 8192 bit keys.
I'd like to know where you guys picked those numbers from:
http://www.keylength.com/en/compare/ suggests that 2048 bits is okay
for everyone but the BSI (at least not past 2016). Surely a
recommendation today should have a higher standard than that.

On the other hand, 3072 bits seems to be enough for everyone for the
next decade or so.

I haven't found anyone suggesting that bigger groups are either
necessary or worth it. If you want QC-proof crypto you need groups of
~16k bits.

My favourite recommendation (ECRYPT II):
http://www.keylength.com/en/3/
where
1024 bits -> level 3 (<<2015)
2048 bits -> level 5 (2020)
3248 bits -> level 7 (2040)
for any of the modelled adversaries.
`openssl dhparam -2 4096` took 1:53:29 to generate (HH:MM:SS);
`openssl dhparam -5 4096` took 1:43:44;
`openssl dhparam -2 8192` took 25:51:34;
`openssl dhparam -5 8192` took 16:51:47.
 > Why not huge primes?
 > Why not simply use really large primes? Because computation
 > is expensive, battery life matters more than ever and latency
 > will become problems that users will not tolerate.
Any and all cryptographic transforms must be expensive - that means
at least time and electric power. 
There is a good reason why no one wants custom-groups in protocol
design. I haven't seen it mentioned much so far so I will spell it out
again:

Custom groups need to be transmitted for each handshake: that's
problematic on most networks (none of the group sizes suggested will
fit in a single MTU's worth of data) as it will involve fragmentation
and potentially retransmission.

If anything, TLS has proven that it won't work, both because:
- no one will use the feature, even if it's present (the status quo
with 1024-bit groups today)
- it's impractical to use anywhere the connectivity is anything less
than perfect (mobile networks, high-latency networks, ...)

K.I.S.S.!

Florent
Steve Grubb
2016-01-21 15:15:55 UTC
Permalink
Post by Kurt Seifried
Post by Kurt Seifried
https://securityblog.redhat.com/2016/01/20/primes-parameters-and-moduli/
Post by Kurt Seifried
I think the best plan for dealing with this in the short term
is deploying larger primes (2048 bits minimum, ideally 4096
bits) right now wherever possible.
4096 bit keys seem to be the absolute minimum, and personally I've
already moved to 8192 bit keys.
http://www.keylength.com/en/compare/ suggests that 2048 bits is okay
for everyone but the BSI (at least not past 2016). Surely a
recommendation today should have a higher standard than that.
On the other hand, 3072 bits seems to be enough for everyone for the
next decade or so.
I think that is assuming that quantum computers are not brought to market any
time soon. Over the summer the NSA's Suite B page kind of backpedaled on the
ECC requirements and refocused on RSA. I attended a speech this fall where
NIST talked about what quantum computers will do. The presentation is here but
does not have speaker's notes:

http://csrc.nist.gov/news_events/cif_2015/research/day1_research_200-250pt3.pdf

These are the notes I took while listening to the speech:

This panelist talked about quantum crypto. The issue is that quantum computers
could use Shor's algorithm and Grover's algorithm to kill PKI. In the future
key sizes could be around a million bits. This will mean changes to network
protocols. It's estimated that a key space of size N can be searched in
roughly square-root-of-N time. So, with current technology, if you need 128
bits of strength, you need to square the key space - that is, double the
symmetric key length.
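Grover's square-root speedup can be sanity-checked with ordinary integer arithmetic (a sketch that just restates the scaling argument; it is not, of course, an actual quantum attack):

```python
import math

# Grover's algorithm searches an unstructured key space of size N in roughly
# sqrt(N) quantum operations. A 256-bit key space (2**256) therefore costs a
# quantum attacker only ~2**128 operations: to keep 128 bits of post-quantum
# strength, the key space must be squared, i.e. the key length doubled.
classical_bits = 256
grover_ops = math.isqrt(2 ** classical_bits)      # exact integer square root
print(grover_ops == 2 ** (classical_bits // 2))   # True: 128 bits of work left
```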

Hallway discussions mentioned that ECC is dead due to trust issues and fuzzy
IP issues which slowed vendor uptake. There was a mention of RSA officially
being allowed to go to 16k key sizes.

-Steve
I haven't found anyone suggesting that bigger groups are either
necessary or worth it. If you want QC proof crypto you need groups of
~16k bits.
http://www.keylength.com/en/3/
where
1024 bits -> level 3 (<<2015)
2048 bits -> level 5 (2020)
3248 bits -> level 7 (2040)
for any of the modelled adversaries.
Post by Kurt Seifried
`openssl dhparam -2 4096` took 1:53:29 to generate (HH:MM:SS);
`openssl dhparam -5 4096` took 1:43:44;
`openssl dhparam -2 8192` took 25:51:34;
`openssl dhparam -5 8192` took 16:51:47.
Post by Kurt Seifried
Why not huge primes?
Why not simply use really large primes? Because computation
is expensive, battery life matters more than ever and latency
will become problems that users will not tolerate.
Any and all cryptographic transforms must be expensive - that means
at least time and electric power.
There is a good reason why no one wants custom-groups in protocol
design. I haven't seen it mentioned much so far so I will spell it out
Custom groups need to be transmitted for each handshake: that's
problematic on most networks (none of the group sizes suggested will
fit on a MTU worth of data) as it will involve fragmentation and
potentially retransmission.
If anything, TLS has proven that it won't work; both because
- no one will use the feature, even if it's present (status-quo with
1024 bits groups today)
- it's impractical for it to be used anywhere where the connectivity is
anything less than perfect (mobile networks, high-latency networks,
...)
K.I.S.S.!
Florent
Florent Daigniere
2016-01-21 15:37:00 UTC
Permalink
Post by Steve Grubb
 > https://securityblog.redhat.com/2016/01/20/primes-parameters-and-moduli/
 > I think the best plan for dealing with this in the short term
 > is deploying larger primes (2048 bits minimum, ideally 4096
 > bits) right now wherever possible.
4096 bit keys seem to be the absolute minimum, and personally I've
already moved to 8192 bit keys.
http://www.keylength.com/en/compare/ suggests that 2048 bits is okay
for everyone but the BSI (at least not past 2016). Surely a
recommendation today should have a higher standard than that.
On the other hand, 3072 bits seems to be enough for everyone for the
next decade or so.
I think that is assuming that quantum computers are not brought to
market any 
time soon. 
Indeed. It's also assuming no other major breakthrough happens (whether
it's in maths, Moore's law or anything else)...

but here we are talking about making recommendations towards replacing
legacy crypto we suspect^wknow to be broken, in practice, in the real
world, today.

I think that it's very important to keep the message simple: use bigger
(possibly standardized) groups, of at least X bits. The BSI thinks that
X should be greater than 2048 bits and so do I.

Florent
Brad Knowles
2015-10-19 20:06:28 UTC
Permalink
Post by Kurt Seifried
A small
number of fixed or standardized groups are used by millions
of servers; performing precomputation for a single 1024-bit
group would allow passive eavesdropping on 18% of popular
HTTPS sites, and a second group would allow decryption
of traffic to 66% of IPsec VPNs and 26% of SSH servers.
I think this may be a bit of a slippery slope here.

How many machines would have to be vulnerable for a given group to be considered big enough to be “weak” and therefore worthy of having a CVE issued? Would that number be 1%? 5%? 10%?

At what point is it more dangerous to generate your own DH groups on systems that do not have sufficient uptime, versus re-using an existing DH group that might be considered “weak”?


There was a time when 1024-bit DH groups were considered sufficiently safe, and 2048-bit was overkill. At what point does 2048-bit become “weak” in the same way that 1024-bit is today? How many years in advance are we going to build into the system, so that we can have people “safely” transitioned off 2048-bit DH groups and onto whatever the next new thing is?

I mean, NIST is having a hard enough time getting people to stop using MD5, much less SHA-1. And if SHA-1 falls this year, how long before SHA-2 falls?

--
Brad Knowles <***@shub-internet.org>
LinkedIn Profile: <http://tinyurl.com/y8kpxu>
Loganaden Velvindron
2015-10-22 04:36:12 UTC
Permalink
Post by Kurt Seifried
https://weakdh.org/imperfect-forward-secrecy-ccs15.pdf
and
https://www.eff.org/deeplinks/2015/10/how-to-protect-yourself-from-nsa-attacks-1024-bit-DH
I would suggest we minimally have a conversation about DH prime security
(e.g. using larger 2048 primes, and/or a better mix of primes to make
pre-computation attacks harder). Generating good primes is not easy from
what I've seen of several discussions, my fear would be that people try to
fix this by finding new primes that turn out to be problematic.
Secondly I would also suggest we seriously look at assigning a CVE to the
use of suspected compromised DH primes. Despite the fact we don't have
conclusive direct evidence (that I'm aware of, correct me if there is any
1) the attack is computationally feasible for an organization with
sufficient funding
2) the benefit of such an attack far, far, FAR outweighs the cost for
I think that it's important for organizations who are providing services
that are considered critical to the stability of the Internet to audit &
take corrective measures for all of their impacted services.
Andrew Gallagher
2016-01-21 18:43:16 UTC
Permalink
Post by Steve Grubb
Hallway discussions mentioned that ECC is dead due to trust issues
and fuzzy IP issues which slowed vendor uptake. There was a mention
of RSA officially being allowed to go to 16k key sizes.
Was there any mention of the relative ease of quantum attacks against
ECC compared to classically-equivalent RSA? [1] That was suggested on
a couple of discussion groups as a possible motivation for the newly
rekindled RSA love.

[1] http://arxiv.org/abs/quant-ph/0301141
--
Andrew Gallagher
Senior Systems Engineer, Ward Solutions Ltd.
2054 Castle Drive, Citywest, Dublin 24
+353 87 1200174
Steve Grubb
2016-01-22 16:57:42 UTC
Permalink
Post by Andrew Gallagher
Post by Steve Grubb
Hallway discussions mentioned that ECC is dead due to trust issues
and fuzzy IP issues which slowed vendor uptake. There was a mention
of RSA officially being allowed to go to 16k key sizes.
Was there any mention of the relative ease of quantum attacks against
ECC compared to classically-equivalent RSA?
Yes. At one time ECC looked good because it offered comparable strength with
fewer operations, so it was faster in the age of slower CPUs. Now the threat
has changed, and people are looking at how best to provide some resistance
over the not-too-distant future in the face of a very different landscape.
Things that are computationally expensive start looking better. The slide on
page 9 kind of shows the concern. The leftover part of rectangle X not covered
by rectangle Z means spilled secrets.

To my mind, one of the things that we as an open source community need to
think hard about is how we are going to protect data in the quantum computing
age. If many of the new QR algorithms get patented, where does that leave us?
It's kinda like ECC all over again, except this time the consequences are much
more dire because there may not be any IP-unencumbered algorithm to jump to. I
certainly hope that won't be the case.

-Steve
Post by Andrew Gallagher
[1] That was suggested on a couple of discussion groups as a possible
motivation for the newly rekindled RSA love.
[1] http://arxiv.org/abs/quant-ph/0301141