The title of this post was originally “the problem with cryptography”, but that’s neither original nor helpful.
Recently, I was reviewing server provisioning scripts for best practices. One step, for a webserver, was to set up Diffie-Hellman (DH) parameters. Here’s the command:
$ openssl dhparam -out dhparams.pem 2048
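For what it's worth, generating the file is only half the step; the server also has to be pointed at it. In our case that meant one line in the web server config - roughly like this, assuming nginx (the path here is made up; it's wherever the provisioning script drops the file, and Apache has an equivalent directive):

# inside the server block that terminates TLS
ssl_dhparam /etc/nginx/ssl/dhparams.pem;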
I know that generating your own parameters and configuring the server to use them matters for avoiding attacks like Logjam, thanks to sites like WeakDH. But I'm no cryptographer - I'm not a sysadmin either - so I don't know the why or the specifics.
For example, DH parameters of 1024 bits or less have been discouraged since 2005. But that was a while ago now - are 2048-bit parameters still okay? WeakDH and RFC 7525 say yes. Okay then.
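At least it's easy to check what actually got generated. A quick sketch - the exact output wording depends on your OpenSSL version - but inspecting the file looks something like:

$ openssl dhparam -in dhparams.pem -text -noout | head -n 1
    DH Parameters: (2048 bit)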
So are 4096-bit DH parameters better? I don't know. It's a given that they'll take more computation time, but how much? Not being a sysadmin, I can't test this under realistic conditions. Do they even provide more security? Apparently not. But for that I have to trust some guy on Stack Exchange posting around 2014.
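If I could test it, the naive approach would look something like the following (the hostname is hypothetical, and openssl s_time is a blunt instrument, so the numbers would only be rough):

# how long does generation itself take?
$ time openssl dhparam -out dh2048.pem 2048
$ time openssl dhparam -out dh4096.pem 4096

# rough handshake throughput against a test server
# configured with each parameter file in turn
$ openssl s_time -connect test.example.com:443 -new -time 30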
Finally, I think this Stack Exchange post confirms that you don't gain anything from periodically regenerating the DH parameters. It's hard to tell, though. From what I understand, the parameters are public anyway, and capturing many DH exchanges that use them doesn't make recovering the secrets any easier than capturing a single one.
So let’s recap:
- Use Diffie-Hellman parameters with more than 1024 bits; 2048 is fine
- 4096 bits does nothing except generate more heat… probably
- You don’t need to re-generate Diffie-Hellman parameters… probably
- Cryptography is hard, so leave that to the experts
- Setting up cryptography is still too hard for me (but I don’t know if that’s generalizable)