Secure Secure Shell (stribika.github.io)
478 points by padraic7a on Jan 6, 2015 | 180 comments



It's possible that NSA can recover keys from 1024-bit RSA/DH/DSA handshakes, but it's extraordinarily unlikely that they can do so scalably; that would be an ability that currently qualifies as space-alien technology.

RC4 is terribly broken and should be disabled. But the practical attack on RC4 requires many repetitions of the same plaintext --- "many" in the millions. This is a real threat to HTTPS/TLS, but not as much to SSH. Even hypothetical improvements to Fluhrer/McGrew would still require lots of repetitions. Disable RC4, but if you're playing along at home, this probably isn't it.

No serious cryptographer seems to believe that the NIST curves are backdoored. Avoid them if you can; they suck for other reasons.

MD5 is survivable in the HMAC construction.


It is conceivable, with the right tradeoffs, that computing individual discrete logs over a few popular 1024-bit moduli could be made cheap. However, none of the available evidence suggests this is actually being done.

There is a lot of inconsistent thinking behind the advice given in the article:

- Hard-to-implement NIST curves suck, whereas GCM and Poly1305 are recommended.

- NIST apparently sucks, but NSA-designed SHA-2 is recommended.

- MACs need 256-bit tags, so UMAC and not-NSA-designed RIPEMD160 is apparently not fine, but GCM/Poly1305's 128-bit tags are recommended. On this note, 256-bit tags are pointless when the rest of the crypto infrastructure is sitting on 128-bit security.

- 3DES is not recommended because DES is 'broken', not realizing that this break is due to the small key length of the original DES; 3DES is deemed to be quite secure (but slow).

- 64-bit block sizes are rejected, but for no good reason: SSH's 32-bit sequence number, along with counter mode, renders block size worries moot.


Counter mode always seemed like an argument against 64-bit blocks, since it implies a 32-bit counter and a 32-bit nonce.


In general I agree and am as anti-64-bit blocks as it gets; but I don't think it matters in the specific case of SSH.


3DES has become fairly weak, and it seems like most cryptographers wouldn't recommend it - especially if you are trying to defeat the NSA. Originally it had a 168-bit key, but with known attacks it's reckoned to have around 80 bits of security left, which, given that DES was considered insecure with 56 bits, makes it sound iffy at best.


Well...there's "insecure"...and then there's really insecure.

DES (and by extension 3DES) isn't, per se, "insecure" at 56 bits, except that technology has progressed since the mid-1970s such that an exhaustive search of the keyspace (i.e. brute force) is now practical in reasonable time. DES is resistant to differential cryptanalysis, and even more modern techniques that could seriously reduce DES security are theoretical exercises at best (like those requiring terabytes of known plaintext to derive a key, or those only applicable to reduced-round implementations).

Yes, I'm aware that to a cryptographer, "theoretical attack possible" == "OMFG insecure cipher", and that attitude is a good thing. If DES were an AES candidate we'd never pick it. But from a practical standpoint, barring implementation mistakes or operational missteps, no one using known 2015 tech and techniques is cracking 3DES before the heat death of the universe (or at least before we're long turned to dust).

That said, can you provide a reference to a practical (that is, not the linear cryptanalysis stuff from Davies) attack which reduces 3DES to 80 bits of effective security? If it's there, I missed it, and it would invalidate what I've said.


> Hard-to-implement NIST curves suck, whereas GCM and Poly1305 are recommended.

I've always wondered this about DJB - he preaches the gospel of ease-of-implementation with Curve25519 and Salsa/Chacha20, but then for a MAC he has... Poly1305. I guess speed trumps everything?


Poly1305 is significantly easier to implement than GCM. Carter-Wegman MACs have been a research focus for DJB since the 1990s.


Sure. I'm not saying Poly1305 is a problem for DJB, just for anyone else trying to implement it, which is a concern DJB has with his other crypto, but not here.


GCM is harder for everyone else to implement than Poly1305. It's harder in the "literally trickier to implement" sense, and in the "needs hardware support to be performant and secure at the same time" sense.


> "needs hardware support to be performant and secure at the same time"

So does Poly1305; it just so happens that most popular processors have strong hardware support. Here's an exercise: implement both GHASH and Poly1305 for MSP430.


I think you're calling fast multipliers "hardware support", which is fair, but the hardware support needed by GHASH is idiosyncratic to things like GHASH. CLMUL is only a few years old and GCM is its primary use case.


Implementing a truly constant-time GCM in software without CLMUL is sufficiently hard that no one has managed to create a remotely competitive implementation. They're all either an order of magnitude slower or vulnerable to cache-timing attacks.

Poly1305 isn't a walk in the park, but doesn't need special hardware support for fast constant-time implementation. Though I will agree something like HMAC is much simpler.


What do you think about 64-bit tags (particularly in the context of ssh)?


I think they're OK, but would not recommend them unless you have serious bandwidth woes.


I think the reason why many still think SHA2 is safe is mainly because of Bitcoin. There's a lot of money at stake and many crypto experts working with Bitcoin. If there was a hole, it's possible someone would've found it. AES was also designed by NSA and is considered safe.

That said, I think it's better to avoid them anyway just to give another hit to NIST/NSA. Plus, ChaCha20 and BLAKE2 have much better performance in software than AES and SHA2/SHA3 anyway, so I would like to see those adopted as default options instead.


> AES was also designed by NSA

No, it was designed by Joan Daemen and Vincent Rijmen, two Belgian academics.


I know I've bothered you before about this, but can you explain [1] for the layman? It seems to be saying that non-rigid curves may have secret attacks. Then provides a table where, for some reason, just the NIST curves are listed as "manipulatable".

It seems, from reading [2], that the NIST curves went out of their way to claim "verifiably random" generation....using unexplained seeds. The page says it's conceivable that the NIST curves have weaknesses that "were introduced deliberately by NSA."

I don't understand the math so it's likely I'm totally misunderstanding. But reading those pages, they seem to hint that the NIST curves might have some intentional flaws, and that it's suspicious that they generated curves that are susceptible to known problems.

1: http://safecurves.cr.yp.to/rigid.html

2: http://safecurves.cr.yp.to/bada55.html


A good comment from a previous discussion:

https://news.ycombinator.com/item?id=7597653


space-alien technology speculation aside, i've been aware of openssh's less-than-reassuring default selection order for Ciphers, HostKeyAlgorithms, KexAlgorithms and MACs for a few years. for most modern computers and cpus, using these stronger algos amounts to, at most, a 10% speed loss when scp'ing and a 10% increase in cpu usage. even machines with weaker cpus will barely show any signs of fatigue with these stronger algos. despite this, at least 2 of the openssh devs have rebuked my suggestion to change the default algorithm selection order.
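for reference, forcing the stronger algos yourself is only a few lines in ssh_config / sshd_config. a sketch of the kind of thing i mean (the exact names depend on your openssh version, so trim anything yours doesn't support):

  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr,aes128-ctr
  MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512,hmac-sha2-256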

it's not exactly clear (to me) why anyone who runs a project that so many ppl depend on for security would stick to such old and crufty algos. since openssh and openbsd are intertwined, it does make me wonder if this is being done so that openssh can run on the latest vax, etc (omg! but it will take a week for it to generate the right sized keys!).

EDIT: openssh in 2nd paragraph changed from openbsd, a typo.


I feel like most of this is just because OpenSSH has been under development for 15 years rather than any conspiracy. Cryptography moves forward and new algorithms get added, yet defaults don't get changed.


Are you aware that they were updated in OpenBSD 5.6/OpenSSH 6.7?

http://www.openbsd.org/56.html


If your OS or customization modifies the base OpenSSH, great, but that's not the point of the parent comment.


I have no idea what you're referring to. The OP took issue with the default algorithm(s) selection order over the course of two paragraphs. I noted they were recently changed in the latest OpenSSH release to exclude weak ciphers.


If your OS or customization modifies the base OpenSSH, great, but that's not the point of the parent comment.


Using -C with scp will result in drastic speedups most of the time anyway; it's possible any slowdown due to a nicer cipher suite can be counteracted with that.


That's orthogonal, and compression won't help much if you're copying things that are already well compressed like archives, videos, photos... And compression only helps if you're I/O bound.

If using a stronger cipher is enough to slow you down, chances are that you're CPU bound anyway, and adding compression on top is actually going to make your transfer slower.


Making it slower isn't just a theoretical problem: I routinely saw it when working with fast network hardware (1G, later 10G) on hosts that were loaded with user tasks (computational lab), and I have still seen it in recent years on hosts running AIX/Solaris/etc., where it's apparently routine for vendors to ship OpenSSH without any compiler optimizations enabled.


Depends on where the bottleneck is. A lot of the time scp is hampered by latency; moving files that are not already compressed will usually get some level of speedup on high-latency connections. Compression is also asymmetric: a fast remote host paired with a slow one that's bottlenecked by decryption might see a speedup.


In theory, compression before encryption should make encryption faster on a multicore system. However, I'm pretty sure all ssh clients are single-thread, so that wouldn't apply.


What's wrong with the current selection order?


> Avoid [NIST elliptic curves] if you can; they suck for other reasons.

What are those reasons?


They're in a format that requires implementations to be especially careful validating input parameters to avoid leaking information; for instance, an attacker-submitted faulty coordinate can trick an implementation into performing a calculation with its secret key on the wrong curve.


Thanks. I'm not entirely sure how information can be leaked in that case - just timing attacks, or something else?


Here's the most egregious case: Bitcoin's secp256k1. This curve is defined by the equation y^2 = x^3 + 7 with arithmetic modulo the prime p = 2^256 - 2^32 - 2^9 - 2^8 - 2^7 - 2^6 - 2^4 - 1.

Suppose you have some way to send a point P to a non-point-verifying adversary, and get the scalar multiplication Q = s . P back, where s is the secret key. If we send a point on the curve y^2 = x^3 + 0 over the same prime---which is technically not an elliptic curve---the arithmetic will still make sense and we will get a meaningful result. However, discrete logarithms on this second curve are very easy to compute: s = (Q_x P_y) / (P_x Q_y) mod p. Without point verification stealing the secret key is a simple matter.

This example is slightly artificial, but real examples are just as deadly; they usually recover the secret key a few bits at a time and are a little more complicated.


This kind of invalid curve attack exists against all elliptic curves, so it's a bit difficult to argue that they're a reason to prefer one curve type over another.

The situation is a bit different in the presence of point compression, in which case you're typically concerned with twist security, but the security of the NIST P-256 quadratic twist is pretty decent, so again this isn't a strong argument against it.

The two good reasons to choose something like Curve25519 over NIST P-256 are 1/ speed and 2/ the fact that it's somewhat simpler to obtain side-channel protected implementations. For SSH key exchange, it's pretty much a wash for most realistic settings (only a server that spends significant CPU time simply establishing SSH connections would care about the performance difference here).


I chose the singular example for its simplicity; invalid curve attacks are, of course, much more general (smooth-order curve + CRT). That said: how would you mount an invalid curve attack on a curve in Edwards form? It is obvious if the adversary is using Hisil's d-less formulas, but it does not seem obvious otherwise.


It certainly depends on the precise arithmetic being used, but for the usual complete addition law, for example, I'm pretty sure I can recover k from the computation of [k]P where P is of the form (0,y) (or (0:Y:1) in the projective case) and not on the curve. That's a cute idea for a paper that I'll probably write up, by the way; thanks!


And sadly there's a patent covering point-verifying, so the workaround for this includes paying licence fees to Certicom if you're in the US.

As far as I'm aware there's still no real progress on getting better curves (e.g. curve25519) into TLS, despite a lot of noise on the ML, which is a real shame.

Better curves would also be faster - so it's not "just" a security thing.

There's a good overview of available curves here, with a guide to their safety: http://safecurves.cr.yp.to/


Actually, CFRG's doing a consensus call to adopt a rough draft from agl containing Curve25519 as an RG document right now - and I feel fairly comfy saying we seem to have good consensus and running code for X25519 (the Montgomery-x key exchange over the curve known as Curve25519, introduced in that paper). Implementers are already pushing ahead. I wouldn't think the TLS group or anyone else needs to delay that work any longer - it's been quite long enough already in my opinion.

Not quite so sure about signatures, but that's more a PKIX WG problem with more (CA-style) inertia behind it, so that won't move very quickly no matter what. The chairs want to resolve signatures after the curve and key exchange algorithm, which the TLS WG participants seem to want sorted out first.


> As far as I'm aware there's still no real progress on getting better curves (e.g. curve25519) into TLS, despite a lot of noise on the ML, which is a real shame.

I thought Google was pushing for it to be adopted in TLS 1.3. Did everyone else reject that idea or what happened?


Re: curve25519, just do what vendors always do: add it to your implementation as a proprietary extension and wait for everyone to adopt it as de facto standard.


The problem is Apple and Microsoft. If Curve25519 isn't standardized, Google and Firefox will implement it, and Unix servers will get support through OpenSSL, and so a big chunk of the web will get to use Curve25519, which is a bit of a win.

However, Apple probably won't support Curve25519 without a standard, and Microsoft definitely won't: they have a competing proposal. Which will leave the NIST curves widely used across the web as well, because IE and Safari support is critically important.


How are they a problem? Firefox/Chrome/OpenSSL can still pick up Curve25519, and Microsoft can still hold out for their own proposal; neither cancels the other out and we're already stuck with NIST curves (rfc4492). In fact, TLS specifically advertises what curves are available so that an upgraded server/client can use stronger curves in the future - optionally.

I know MS will be dicks and try to force their own version of everything, but i'll bet Apple will implement anything that there's a half-decent reference implementation of. That just leaves Microsoft, and the easiest way to defeat their proposal is to get their customers to demand they support the thing everyone else already implements, which would be Curve25519.


Let's imagine a scenario that could result from this:

1. DJB criticizes NIST and other standardization institutes and their curve selection choices

2. CFRG fails to recommend a curve (or a suite of curves) for TLS WG

3. Microsoft refuses to adopt Curve25519

4. Everyone else does

5. Interop problems

6. ?????

7. Everyone who has to clean up after this is aligned strongly towards standardization processes.

That's how they are a problem.

(For the record: This isn't a conspiracy theory, I don't think anyone wants this to happen and is actively trying to manipulate things to make it happen, it's merely a hypothesis on what could happen.)


DJB said in his recent talk that Microsoft is going to adopt "26 new curves", and he seemed pretty happy about it. But I'm not sure whether that means Microsoft will support Curve25519 or not. If they will indeed support 26 new curves, but they won't support Curve25519, that would be pretty silly of them.


You missed some subtext. Read Bernstein's posts on CFRG to see his take on the MSFT proposal.


re: Apple, that may be true for browsers, but they're already using it in other, not publicly exposed ways such as some of iOS file encryption.


Apple already uses Curve25519 for iOS encryption, so it's not like they're a stranger to it: https://www.apple.com/br/ipad/business/docs/iOS_Security_Oct...


Something else.

If you're expecting a curve with 2^250 ish possible resultant values, and you perform a calculation on a curve with only 2^13 ish possible values, you're going to leak some information about the number you gave it.

The ECC Hacks talk by Dan Bernstein and Tanja Lange explains it better than I can.



Thank you for providing the link :)


The point that is transmitted as part of the shared secret isn't guaranteed to be on the curve, and it isn't checked (it'd be expensive to do so). The problem does not exist with carefully chosen curves. TBH, it's highly unlikely I got that right, so there's an example in the first half of the talk you might want to watch.


You can change the effective modulus. Learn a secret X modulo (A, B, C, ...) and eventually you can learn what X is.


djb gave a good overview of general ECC curves as part of his and Tanja's talk at 31C3: http://media.ccc.de/browse/congress/2014/31c3_-_6369_-_en_-_... I'd highly recommend watching it, it was very educational.


What percentage of ssh keys are encrypted, do you estimate or wager? I can't imagine that they haven't built factoring hardware but I agree that scale is a problem. I also can't imagine that they need to use factoring hardware if say 80% (I'm just guessing, could be much higher) of the ssh private keys are just chillin' on disks unencrypted, there has to be tons of other exploits to get those.


Server keys? Probably closer to 99.999% are just chillin' on disk.

Client keys? How many people use github, but don't want to enter a password on every push and aren't hardcore about setting up agents (esp. on Windows)?


Take a look at the YubiKey NEO.

https://www.yubico.com/products/yubikey-hardware/yubikey-neo...

It's basically a smartcard in the form factor of a nano-USB-stick. You can generate a pair of public/private SSH keys, with the private key remaining forever on the token.

Then set up gpg-agent in ssh-agent emulation mode (it's three lines in a file) and voila! You have hardware-backed authentication.

I'll probably write a HOWTO soon, but until then have a look at this:

http://forum.yubico.com/viewtopic.php?f=26&t=1171

It's unnecessarily complicated as described in the top post, but read the comments below, too. The real setup is dead simple - just a few lines in gpg-agent.conf and one of the .*profile files.


I'd love the writeup if you get the chance, I've looked at various places on how to do it (my brother bought me a Yubikey for Christmas) but like you said, it seems more complex than it should be. I'm interested to see how it'll go with OS X.



Generate public/private key pair on the NEO smartcard:

Run 'gpg --card-edit'

In the menu, choose 'admin'. Then choose 'generate'. Then 'quit'.

That's it. The private SSH key will remain on the smartcard forever; it will never leave it, not even during authentication. It cannot be extracted (well, maybe the NSA can, who knows).

To extract the public SSH key from the card, run 'ssh-add -L > my-public-key.pub'

You may want to edit the name (the third field) at the end of the key.

I'm 99% sure ssh-add -L works on any Unix system, you don't need anything preconfigured, just plug the token into it and run the command. This way you can easily get your public key no matter where you are.

The smartcard has a user PIN and an admin PIN. The default user PIN is '123456'. The default admin PIN is '12345678'. It is recommended to change them.

After 3 mistakes entering the user PIN, the card locks up and you'll need to unlock it with the admin PIN.

After 3 mistakes entering the admin PIN, the card is dead forever. Be careful with the PINs.

Read "man gpg", options --card-edit, --card-status, and --change-pin.

You will have to enter the user PIN when you authenticate SSH. It's cached for a while (see below).

#########################

Configure Linux or OS X to use ssh key authentication with the NEO:

Install gnupg, either from Homebrew or from GPG Tools (on OS X), or via repos on Linux.

https://gpgtools.org/

Configure gpg-agent:

  $ cat ~/.gnupg/gpg-agent.conf 
  pinentry-program /usr/local/MacGPG2/libexec/pinentry-mac.app/Contents/MacOS/pinentry-mac
  enable-ssh-support
  write-env-file
  use-standard-socket
  default-cache-ttl 600
  max-cache-ttl 7200
On Linux, I think you don't need the pinentry-program line, so remove it (not sure). Or experiment with various pinentry utilities, see what works for you; there should be a pinentry somewhere on your system after you install gnupg, and usually it's text-mode.

The value shown above is for OS X with GPG Tools, which is a GUI mode pinentry. If you install gnupg via Homebrew, read what I said above about Linux. Or google for the GUI mode pinentry for OS X - it's a separate download, made from an older GPG Tools version, that you can install along with Homebrew gnupg.

  $ tail -n 7 .bash_profile 
  GPG_TTY=$(tty)
  export GPG_TTY
  if [ -f "${HOME}/.gpg-agent-info" ]; then
	. "${HOME}/.gpg-agent-info"
	export GPG_AGENT_INFO
	export SSH_AUTH_SOCK
  fi
The GPG_TTY is not needed with the GUI pinentry that comes with GPG Tools on OS X, but might be needed for the simpler text-mode pinentries that come with other gnupg distros.

On Linux, or on Mac with gnupg installed from Homebrew, you need to launch gpg-agent upon logging in (GPG Tools will do that automatically for you). One way that seems to work well (checked with Homebrew gnupg on OS X, and with the Linux gnupg) is to add this to .bash_profile:

  eval $(gpg-agent --daemon)
To use it, put your public key on a server, plug the NEO into USB, and run 'ssh user@host'. pinentry will ask you for the user PIN. And that's it.

##########################

WARNING:

PCSC is broken on OS X 10.10. If you're on 10.9, stay there if you plan to use the NEO (or any smartcard for that matter). More details here:

http://support.gpgtools.org/discussions/problems/30646-gpg-a...

You can use it on 10.10, but once in a while gpg-agent gets stuck and you'll have to kill/restart it. I've posted a script on that support forum.

Works great on 10.9 and Linux.


Brilliant, thank you so much for this. I'm looking forward to having everything on my Yubikey!


Thanks for this. Will investigate getting a Yubikey now.


Does using the Yubikey in this manner protect your keys even if your machine has malware on it? I read through this tutorial [1], and they say you need read/write access to the device, so it seems to me like malware could access your keys while the smartcard is plugged in. If this is the case, I'm not sure how this setup is any more secure than simply keeping your keys on a USB flash drive and plugging it in whenever you need to use ssh.

[1] https://blog.habets.se/2013/02/GPG-and-SSH-with-Yubikey-NEO


The private SSH key never leaves the smartcard, not even during authentication. It is not exposed to the OS or any process, at all. You can't extract it at all (maybe the NSA can, who knows). The actual authentication takes place on the token, not in a process on your Unix system.

The only thing that the malware can do is issue an authentication request while the token is plugged in. That's all. If the PIN is not cached, you'll be prompted to enter it, and you'll be like "why is it asking me to enter the PIN?"

Maybe they could run a spy debugger on gpg-agent, but again, this would not give them your private key.


Thanks for the info! I was hoping it was more secure than a simple USB flash drive. This seems like a really good way to improve key security.


Do you know of any talks/articles attempting to break the YubiKey NEO in a similar way to how cloning attacks against SIM cards work?


On a Mac, you can store the key password in Keychain. The key is still protected but you don't need to enter the password on every push.

Just create the key and then try to use it. An OS X password dialog window will pop up. Enter the key password and check the box to save it. Done.
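You can also do it from the command line with Apple's patched ssh-add; the -K flag (store the passphrase in Keychain) is an Apple addition, not in stock OpenSSH:

  ssh-add -K ~/.ssh/id_rsa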


> Client keys? How many people use github, but don't want to enter a password on every push and aren't hardcore about setting up agents (esp. on Windows)?

I encourage everyone to use encrypted keys on all platforms. You can set up the regular ssh-agent in git bash, and Atlassian's Source Tree can also use encrypted keys.
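If a key is currently unencrypted you don't need to regenerate it; ssh-keygen can add a passphrase in place, and on OpenSSH 6.5+ can also convert it to the newer key format with a slow KDF (a sketch, adjust the path to your key):

  # add or change the passphrase on an existing private key
  ssh-keygen -p -f ~/.ssh/id_rsa
  # OpenSSH 6.5+: rewrite the key in the new format with a bcrypt KDF (slower to brute force)
  ssh-keygen -p -o -a 64 -f ~/.ssh/id_rsa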


I don't see the point of encrypted keys: if my computer is compromised, it is a trivial matter for an attacker to log input and get the password. If the computer is stolen, the disk encryption should be enough.


There are many ways to compromise a computer without installing something and having the user later provide input. Easiest example is a lost or stolen laptop - if it's not encrypted, you can get the contents of the disk, but the user isn't going to be around to provide more input.


> There are many ways to compromise a computer without installing something and having the user later provide input. Easiest example is a lost or stolen laptop - if it's not encrypted, you can get the contents of the disk, but the user isn't going to be around to provide more input.


What does that help? Doesn't the ssh-agent keep the keys in memory too?


Client keys: check out Userify :) (shameless plug follows) You keep your private key private, client-side (use an agent or not). Userify deploys user/sudo/key to your project servers. </plug>


Maybe I'm missing something, but under what scenario would I not be keeping my private key private and client-side? That being the whole point of it.


Most key "management" systems like to hold onto your private key for you and provision your connection for you. That's insane and defeats the whole point (as you point out)!!

As far as server-side, automation keys are often 'server' side (where the server is itself a client). Userify can manage and deploy those keys as well, but it's not super easy (yet) -- currently, you still have to 'invite' a (fake) user, create a new user account (company_backup_account or whatever), and then choose all the servers that you want that public key deployed to. That part could definitely be easier.. and soon will be.


Unless all they do is capture the flow of every handshake in a huge database. Then they can use that for targeted decryption or other forms of prioritized targeting. The NSA program is called Longhaul. Watch the talk Appelbaum gave at CCC very recently.


Do you have any idea what vulnerabilities the NSA may have in mind when they say they can help their customers with SSH traffic?


At a minimum, you could put the following into your SSH config and have a guaranteed defeat. Anything beyond this would be speculation.

   Ciphers none
   MACs none


Looking at which key exchange protocols, HMACs, etc. various other SSH implementations out there support, it seems that many of them would be barred from accessing a server hardened like that :/. In fact, it almost seems like the intersection of mutually supported protocols is empty, or only contains known-to-be-unsafe ones... :(.

Which makes me wonder: Is there anywhere a comprehensive comparison, with tables, that shows which SSH clients (and servers) support which algorithms for key exchange, HMAC, etc.? That would help determine whom one is going to shut out by disabling which protocol, and thus help in making an informed decision.

Of course, to be really useful, such a table would also need to take into account which version of each listed software added support for which protocol -- as it is, the OpenSSH shipped in e.g. Mac OS X 10.8 (5.9p1) does not support curve25519-sha256@libssh.org which the OP recommends...

Sources: http://www.libssh2.org/ http://api.libssh.org/stable/ https://support.ssh.com/manuals/server-admin/44/Ciphers_and_... http://www.lysator.liu.se/~nisse/lsh/lsh.html
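As a starting point, OpenSSH itself (6.3 and newer, I believe) can dump what the local build supports, which only covers one implementation but helps when filling in such a table:

  ssh -Q kex      # key exchange methods
  ssh -Q cipher   # symmetric ciphers
  ssh -Q mac      # message authentication codes
  ssh -Q key      # public/host key types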


Good idea. Maybe a table added here?

http://en.wikipedia.org/wiki/Comparison_of_SSH_clients


This is a good idea. I'm thinking about adding functionality to the next version of Userify that will provide customization of ciphersuites (etc) on the sshd_config side. It'd be even cooler if it used just this sort of table.


For fun, I started work on this. It's still rather crude, and the code is hackish, but anyway: Comparison page: http://ssh-comparison.quendi.de/comparison.html

Git repository: https://github.com/fingolfin/ssh-comparison

Improvements are highly welcome; not just adding more clients, but also refactoring the code, improving the UI (I suck at HTML), adding features... The TODO already lists some ideas.


This is excellent, thanks BlackFinGolfin!


"Don't install what you don't need", but run Tor to connect to your servers via SSH. Huh?

Hardening SSH makes a lot of sense and this article provoked a lot of thought. But there's too much political rant and leaps of faith for my taste. I'd like to hear some hardening guidance with more fact, less vitriol.


I can't imagine running a tor client on a production server, and I question its safety on my desktop (when I use it, I run up a disposable VM that gets wiped). So run software that connects you to the most hostile network out there and hope to god everything is secure on the client side of things?

This is the problem with trying to fight the NSA or some other over-inflated bogeyman. If you focus on weird edge cases you lose sight of practical security. I'd be more worried about opening myself to tor than some theorized attack on ssh ciphers.

Unfortunately, it seems network security has become fairly political, and if you don't make jabs at the NSA, while of course ignoring other state actors, then you won't be put on HN and reddit, which always welcomes politicized information at the cost of accuracy. I hope this hysteria is temporary and cooler heads will prevail and the Alex Jones listening crowd will stop holding the microphone.


> I'd be more worried about opening myself to tor than some theorized attack on ssh ciphers.

Most of the attacks launched on Tor aren't in the "remote takeover of the tor server via memory corruption" category, they have (in recent history) mostly been in the form of:

    * Attack firefox.exe in Tor Browser Bundle
    * Control a lot of nodes, do something networky to discover the user's actual IP/location
What is the threat you anticipate will result from "opening yourself to Tor"?


Yes, these are the attacks we know of, and the first one only because the FBI told us so. I suspect Tor is a lot more targeted and dangerous than people assume and using it casually to administer servers is asinine.


>the most hostile network out there

The internet?


heh. Your Faraday cage/hat might be leaking.


I don't think the tor client itself is insecure. They're pretty serious over there.

However, it definitely increases the attack surface.


This is my concern. Massively enlarging my attack surface because "OMGZ NSA" is just bad advice. I don't think people who write these guides have a holistic picture of security or understand its best practices. We can't just keep tacking on services and questionable tricks because of feel good politics and faith in obscurity.

It reminds me of people who use things like ssh password lockouts. Why aren't you using keys, or firewalling off to only the IPs that need to connect? Or tacking on SSL here and there instead of using a proper VPN.

Security should lean towards simplicity and best practices, not towards a kitchen sink approach that might just make things worse for you via complexity and surface raising.


Completely agreed. Focus on what you know to be an exposure (i.e. publicly accessible ports) versus what you are guessing might be an exposure. To put it another way, fix what you know to be broken.


I can't believe nobody so far has mentioned the obvious answer: spiped[0].

[0] http://www.daemonology.net/blog/2012-08-30-protecting-sshd-u...


I almost fell out of my chair when I read this. Seems like there's a lot of, well, interesting advice mixed in with the good advice! :)


I'm a little surprised that SSH authentication methods can't be easily disabled using the sshd_config file. Almost all the other security algorithms can be easily tweaked. I thought this article had just made a mistake, but I read over the man page and it appears there is indeed no way to modify this using the config file.

This isn't a protocol weakness. Reading over the SSH RFCs, the server is allowed to specify which algorithms it supports, so this could easily be an OpenSSH configuration option.


I found this extremely concerning too. It should be possible to modify these options without resorting to hacks (that may break when OpenSSH is upgraded) _if_ these modifications are indeed necessary. From some of the comments here it's clear that this article is a mixture of good, bad, and silly advice - do those other authentication methods actually need disabling?


It looks like he's found a way to avoid using the broken symlinks hack. The blog has been updated and now advises how to make changes to the sshd_config file to disable old authentication methods.


I'm a bit concerned at some of the leaps of logic made in this article.

We're told to discount 1024 bit exchanges because of "unknown attacks". If they're unknown, why are we determining that 2048 bits is safe? How do we know the attacks aren't specifically against some other aspect?

The guide talks about creating a newer stronger host key, but doesn't provide any information about preventing initial MITM there: ensuring you get the right host key on first connection is a major issue, and somebody trying to "make NSA analysts sad" ought to be explaining methods for ensuring that happens securely via host key CAs or similar tools.

Likewise, the suggestion for "Preventing key theft" is "Even with forward secrecy the secret keys must be kept secret. The NSA has a database of stolen keys - you do not want your key there." No insight is provided on how to actually accomplish that.

This is certainly a way to turn off weaker ciphers and exchanges, but passing it off as a way to "make NSA analysts sad" is hyperbolic and runs the risk of people believing the hype.


We don't know exactly the design of a rig that would solve a 1024-bit discrete log or factoring problem, but we have a decent idea of what it would probably cost, and most VC firms could fund it.

Costs don't scale linearly. No math projection puts 2048 bit keys in reach. The thing that breaks 2048 bit RSA may well put RSA completely out of action, a reason to prefer alternatives to RSA over 4096-bit keys.


Granted, but if that's the concern, that's what the article ought to be explaining:

We are aware of the infrastructure needed to break 1024-bit discrete logs, and it's feasible; current projections put 2048-bit keys safely out of reach for the time being, so we'll disable 1024 and leave 2048 enabled.

My concern there is with the talk of 'unknown attacks' and then providing suggestions based on that statement.


> but we have a decent idea of what it would probably cost, and most VC firms could fund it.

I'd wonder if a "CaaS" startup (Cracking as a Service) would be feasible. Using Amazon spot instances, it would require only as much funding as needed to serve the first ten users - and requiring each user to send the money to an escrow, e.g. a notary (in Germany it's possible to deposit an amount with a notary, which gives the customer the assurance that the company only gets their money in exchange for the private key, and gives the company the assurance that the customer doesn't walk away with the keys and leave it with a 5-figure AWS bill).


One interesting topic of semi-paranoia that hasn't been discussed is timing attacks. I'm not even implying it's a realistic concern, but a timing attack could be imagined that works better with longer keys.

So 1024 is run too fast to gather intel based on timing, but an imaginary 8192 bit key would run slowly enough to leak one random bit of key per session via timing analysis, as a physics-style thought experiment.

Not trying to claim that longer keys are weaker in reality, but am trying to make the point that being impervious to timing analysis attacks might be kinda important. Or another way to phrase it is that there are more ways to break a key than pure math or torture.


Longer keys could also make some compromises less obvious; finding identical keys -> bad entropy, weak keys (Debian/OpenSSL, lots of devices). I don't know if anyone found backdoored key generators that aren't based on weak entropy.


Considering it's on github.io, you can make a pull request with recommended changes here:

https://github.com/stribika/stribika.github.io

:P


Non-PFS crypto-systems shouldn't be considered "safe" anyway. The NSA can steal the keys quite easily from most individuals/companies from what I gather from the documents.


Are any non-forward-secure methods under discussion here? The author makes pretty clear that both the options used by SSH for key exchange are forward-secure.


"Unfortunately, you can’t encrypt your server key and it must be always available, or else sshd won’t start. The only thing protecting it is OS access controls."

You can encrypt the server key and only decrypt it into a loopback mount when you want to start sshd or accept a connection (I don't remember offhand if sshd reads it only once or at each connection), then unmount it. You get the same functionality as typing in your keystore password when you start apache or netscape or whatever web server (because you encrypt your https private keys too, right?). An untested poc:

  # making the image
  mkdir TMPFS
  mount -t tmpfs -o size=4m tmpfs TMPFS
  cd TMPFS
  dd if=/dev/zero of=servkeys.img bs=1M count=2  # GNU dd wants uppercase M (bs=1m is BSD syntax)
  mkfs.ext2 -F -m 0 -t ext2 servkeys.img
  mkdir MOUNT
  mount -t ext2 -o loop servkeys.img MOUNT
  cp /etc/ssh/sshd_config MOUNT/
  ssh-keygen -t rsa -b 4096 -f MOUNT/ssh_host_key 
  umount MOUNT
  gpg -se servkeys.img
  mv servkeys.img.gpg ..
  cd ..
  umount TMPFS
  # running sshd
  mount -t tmpfs -o size=4m tmpfs TMPFS
  cd TMPFS
  gpg -d ../servkeys.img.gpg > servkeys.img
  mkdir MOUNT
  mount -t ext2 -o loop servkeys.img MOUNT
  sshd -f MOUNT/sshd_config -h MOUNT/ssh_host_key
  umount MOUNT
  cd ..
  umount TMPFS


After a reboot, how do you access the server in order to perform the mount and start sshd?


Out of band. A vSphere console, VPN, KVM, LOM, modem... As a super-cheap alternative, a bastion host vps can be the intermediary. After connecting to the bastion host, you can then connect over the cloud provider's private network to a stripped-down remote shell on the target, and enter the password to bring up the public remote shell. Keeps the real keys safe, adds a layer in front of the target, and is convenient enough to administer from a mobile device.


By definition, it still has to be loaded in RAM, so this strategy is unfortunately moot.


How is it any more moot than the private key loaded into RAM by your https server? It's still not on disk, and it's still more difficult to extract from memory than from the disk.


Agreed, sshd is not any different from the key being loaded into RAM by your http server.

But it's even easier and faster to grab a key from RAM. Could just use a debugger or a handy tool like aeskeyfinder or...

https://github.com/mmozeiko/aes-finder

(or another handy tool like heartbleed ;))


The article states that host key authentication methods cannot be disabled. Instead of creating broken symlinks, you can actually specify this server-side in sshd_config (at least on CentOS 7):

  HostKey /etc/ssh/ssh_host_rsa_key
You can also force this on the client-side in ~/.ssh/config:

  HostKeyAlgorithms ssh-rsa
See man pages for defaults.
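A quick way to sanity-check which host keys a server will actually offer after changing this (ssh-keyscan only asks for the types you list; replace the hostname with your own):

  ssh-keyscan -t rsa,ed25519 your.server.example.com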


FWIW, I have submitted a somewhat similar configuration as a pull request for ioerror's duraconf project at https://github.com/ioerror/duraconf/pull/52.

This project also hosts secure ciphersuite configurations for postfix, nginx, apache, GPG, etc.


In the Symmetric ciphers section, it ends with "This leaves 5-9 and 15" but why are 2-4 bad (aes128-cbc, aes192-cbc, aes256-cbc)? It seems the argument is that GCM is better than CBC, and CTR is okay. My general impression is that CBC is preferred (for example, Rails uses AES-256-CBC by default in its MessageEncryptor class).


My understanding is that, in general, AEAD (such as GCM) > CTR >= CBC > [ a lot of modes ] > ECB

CBC requires unpredictable IVs. CTR works with an integer starting at 0, so long as you never repeat.

The author's motive for not eliminating CTR was stated as compatibility. If you ask me, just use chacha20-poly1305 if you can. If you can't, there are probably more important problems for you to deal with (i.e. upgrading your legacy systems).
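In OpenSSH terms (6.5 or newer on both ends) that preference is a one-liner for ssh_config/sshd_config; keep a CTR fallback only if you still have older peers to talk to:

  Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr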


When you are reusing symmetric keys, both CBC and CTR require unique IV (as do all reasonable block cipher modes).


    When you are reusing symmetric keys, both CBC and CTR require unique IV (as do all reasonable block cipher modes).
Contrast to:

    CBC requires unpredictable IVs.
and

    CTR works with an integer starting at 0, so long as you never repeat.
http://dictionary.reference.com/browse/unique?s=t&path=/

http://dictionary.reference.com/browse/unpredictable?s=t&pat...

At no point did my statement make an error that was addressed by your reply.


The impact of a repeated CBC IV is less significant than that of a repeated CTR nonce, so in that respect CBC is more conservative. CTR is the better choice for modern code, but the security implications are a coinflip difference.


CBC IV does not in general have to be unpredictable.

But my main point is that there is no significant difference between block cipher modes with regard to the IV requirement, as even CTR mode requires an IV when used with repeated keys (and arguably a repeated IV is a more severe concern for CTR mode than for CBC); conversely, you can just use a constant IV for CBC when you can guarantee that the keys used are unique.


> CBC IV does not in general have to be unpredictable.

Yes, it does. If not, you lose IND-CPA. That is, if it is predictable the attacker gets to test guesses for the plaintext of other blocks.

Read Duong and Rizzo, "Here come the XOR Ninjas", 2011. http://www.hpcc.ecs.soton.ac.uk/~dan/talks/bullrun/Beast.pdf


CBC in TLS is bad and deprecated in (not yet released) TLS 1.3. AFAIK, CBC in general is not broken if you pad it properly and authenticate it properly, especially if you authenticate before decryption (which TLS does not do).


Apparently implementing CBC correctly is too complex to do right:

https://www.imperialviolet.org/2013/10/07/chacha20.html


No, that's not what he's saying. He's saying that making the TLS CBC constructions secure is hard. They were designed in the 1990s. Making a CBC implementation secure today is much easier.


Indeed, just about every brand of SSL load balancer was doing it wrong:

https://www.imperialviolet.org/2014/12/08/poodleagain.html


I think he eliminated them out of performance considerations. CBC is unparallelizeable by nature.


Instead of storing keys on a pendrive, consider using smart cards or some other crypto devices for that purpose. There are some tradeoffs involved, so research before acting.


Unfortunately it seems that the latest version of PuTTY (0.63) does not support any of the advised MACs.


Putty supports hmac-sha1, which is more than fine.


Unfortunately PuTTY still isn't served over https, even after all these years. If you're concerned about esoteric MITM attacks, this is worth remembering...


Considering that PuTTY has GPG signatures, there is no need for downloading it over HTTPS. Of course, this does leave the issue of how to obtain and confirm the right GPG key for verifying the download.




Hey all, OS X user here; I'm having trouble trying to replicate the config on my machine. Thus far I've found that if you're using a Mac, things are slightly different, i.e.:

  /etc/ssh/ssh_config is actually /etc/ssh_config
  /etc/ssh/sshd_config is actually /etc/sshd_config
  /etc/ssh/moduli is actually /private/etc/moduli

That's as far as I can tell anyway. I'm not sure what to do when it comes to authentication, however. I'm assuming that you're supposed to be editing /etc/ssh_config, but I don't know if I'm supposed to remove the '#' from any lines I'm editing? I also don't appear to have any files that I can find called ssh_host_rsa_key, nor is there a 'HostKey' line in the ssh_config file.

Any help would be most appreciated!


FTR, /etc on Mac and /private/etc are the same. It looks like the contents of /etc/ssh are in /etc in Mac OS X's SSH install.


For the MACs, the article suggests an exception for github.com. But it looks like an exception is required for the Kex algorithms as well. I see this error:

   Unable to negotiate a key exchange method
If I understand correctly the response from ssh -v -v, github.com only supports the following Kex protocols:

ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa,ecdsa-sha2-nistp256-cert-v01@openssh.com,ecdsa-sha2-nistp384-cert-v01@openssh.com,ecdsa-sha2-nistp521-cert-v01@openssh.com,ssh-ed25519-cert-v01@openssh.com,ssh-dss-cert-v01@openssh.com,ssh-dss-cert-v00@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-ed25519,ssh-dss

Edit: By trial and error, I find that this works for github.com:

KexAlgorithms diffie-hellman-group1-sha1
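To keep that exception scoped to github, a per-host block in ~/.ssh/config works; ssh uses the first value it finds for each option, so the github block has to come before the general one (a sketch - the Host * line is whatever hardened set you settled on):

  Host github.com
      KexAlgorithms diffie-hellman-group1-sha1

  Host *
      KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256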


Doesn't the article suggest a KexAlgorithms exception rather than a MACs exception? I get the same error, but adding the above KexAlgorithms doesn't help


The article seems to have been updated pretty heavily. We might have been looking at different versions.

Here's the complete change log: https://github.com/stribika/stribika.github.io/commits/maste...


Thanks!


> Some of the supported algorithms are not so great and should be disabled completely. If you leave them enabled but prefer secure algorithms, then a man in the middle might downgrade you to bad ones.

How does that work? The algo exchange is authenticated too.


How can it be? What algorithm could you possibly use to authenticate the part where you negotiate which authentication algorithm to use?


Itself. :)

You could possibly force a weaker DH kex I think, but there's no way to force e.g., rc4 with hmac-md5. Disabling group1 is a pretty big interop hit, though.
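If the interop hit is the worry, one middle ground is dropping group1 but keeping group14, e.g. (a sketch, algorithm names as in recent OpenSSH):

  KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1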


This seems like a great idea. Unfortunately most of the recommendations seem to not be supported on my MacOS machine. Also, when I pared it down to the ones that are supported, I wasn't able to connect to github. :( Very annoying.


OS X 10.10 Yosemite ships with OpenSSH_6.2p2.

Use Homebrew [1] to install 6.6p1 with its bug fixes and enhancements:

    brew install openssh --with-brewed-openssl --with-keychain-support
[1]http://brew.sh

Reference: How to Update OpenSSH on Mac OS X at http://www.dctrwatson.com/2013/07/how-to-update-openssh-on-m...


Agreed. And to add to it, even though Yosemite has OpenSSH 6.2p2, it has most of the newer features disabled.

FWIW, this is what I have in my config file on Mac.

  Ciphers aes256-ctr,aes128-ctr
  MACs hmac-sha2-512,hmac-sha2-256,hmac-sha1


See https://news.ycombinator.com/item?id=8849392 for how to safely upgrade Yosemite’s OpenSSH to 6.6p1.


"ssh-keygen -G /tmp/moduli -b 4096 ssh-keygen -T /etc/ssh/moduli -f /tmp/moduli

This will take a while so continue while it’s running."

First line has taken 6 hours on an ec2 t1.micro.


The second line took a little over 3 hours on my 5 year old MacBook Pro.


One thing bothers me:

> Unfortunately, you can’t encrypt your server key and it must be always available, or else sshd won’t start. The only thing protecting it is OS access controls.

That should be solvable with systemd and fuse - create an encrypting filesystem with a fixed key obtained from a USB pendrive, which is then ejected (so it would need a reboot to be enumerated again), and have the filesystem limit open() calls to the key file. It should not need more than one read call when openssh starts.
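A rough sketch of the same idea with stock tools instead of a custom fuse filesystem - cryptsetup/LUKS here, and the device names, paths and key file are made up:

  # unlock the key volume using the key file on the pendrive, start sshd, lock again
  cryptsetup open /dev/sdb2 sshkeys --key-file /media/pendrive/hostkey.bin
  mount -o ro /dev/mapper/sshkeys /etc/ssh/keys
  systemctl start sshd    # sshd_config points HostKey at /etc/ssh/keys/ssh_host_ed25519_key
  umount /etc/ssh/keys
  cryptsetup close sshkeys
  # pull the pendrive; the decrypted host key now exists only in sshd's memory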


Another option would be to store the keys in a TPM, though there are some downsides to that as well. Here's one page that details such a process: http://www.lorier.net/docs/tpm


Unfortunately the ssh-key (probably not the verbatim file, but of course the parts comprising the mathematical key) will have to stay in the sshd process' memory.

If one wants to make it impossible for an "offline thief" (e.g. one that does not have permanent root access to a compromised server) to mount a man-in-the-middle attack using the server's key, one would have to store the ssh server's key in a secure USB token. Ideally this token would count the number of ssh-signing and key-exchange operations so that an attacker remotely accessing the USB token might also be detectable.


That actually makes setting up SELinux seem like the simpler route.


iirc SELinux can only restrict the process groups that are able to read the file, which doesn't help you in a remote code execution exploit inside openssh.

The question is, is it possible to use the keys without having them in RAM any more?


> which doesn't help you in a remote code execution exploit inside openssh

To be fair, if it gets to the point where they are executing arbitrary code in the openssh process, your key is already compromised.

SELinux would help with most other causes of key leaks, however.

> The question is, is it possible to use the keys without having them in RAM any more?

In my (albeit limited) experience, no. Even if only because context switching would push the key out of the CPU registers and into RAM if it occurred at the wrong time.


> To be fair, If it gets to the point where they are executing arbitrary code in the openssh process, your key is already compromised.

This. Exactly.



Wouldn't NSA attempts at cracking encryption systems basically be free audits of those encryption systems? (Provided the public somehow figures out how they were broken.)


Assuming they bothered to share the results of those attempts, sure. The problem of course is that the NSA is both attempting, and in some cases succeeding at, cracking encryption systems, but keeping it all locked away behind security clearances. In the past we've learned quite a bit about the weaknesses of various crypto systems via post-mortems of NSA exploits, but it always comes well after the fact, when the knowledge that the encryption is weak and broken has already become, if not common knowledge, then at least largely assumed. E.g. it's probably a safe bet that 1024-bit encryption is either broken by the NSA currently or is likely to be broken soon, but it will probably be many years (barring another Snowden-like event) before the public knows anything about exactly HOW the NSA has broken or will break it.


An extreme example of this kind of approach might be found in the history of the Enigma cipher and the successful British/Allied attempts at decrypting German communications during the war. If it had been obvious that information about an impending attack could only have been gathered from intercepted and decrypted communications, the British would rather let boats be sunk and people die than give away the fact that the code had been broken.

Because if that fact were known, the adversary would immediately cease to use this method of encryption, rendering the advantage of breaking the cipher void.


As a small aside, for anyone interested in this, The Imitation Game was in my opinion fantastic, though I assume anyone here has already seen it or has plans to.


If the public ever got to see the results, sure.


Every now and again I step back and think of the absurdity of us having to take all of these precautions to stay safe from the people employed to represent our interests.


To say the least, the number of recommended options here which are not supported on the version of openssh which comes with OS X is disappointing.



Based on the Snowden documents and the fact[?] that the NSA can decrypt SSH at least some of the time, the author advocates disabling some of the weak points of SSH. He also describes how to go about doing so.


Are there any facts on the NSA being able to decrypt SSH? All I saw was some Spiegel article where it showed the NSA had compromised some targets. But zero details, so it could have been as easy as a MITM and the user not verifying the public key. Or it could be an unknown exploit in the actual code.

Everything I've heard so far points to the crypto being OK. That, so far, no special NSA crypto-defeating capabilities have come out. (Hence the NSA doodle with the smiley face on the links where Google removed TLS.)

Of course the NSA might have magic powers, but nothing in the last few years suggests that possibility any more than we'd have thought before.


There was a powerpoint where they implied they could sometimes get around ssh protections. Most people assume it's not actually the protocol that's broken, but it seems like a good excuse to clean up ssh anyway.


It's possible the NSA doesn't give a shit about your shit too.


From the slides leaked at 31c3 and by Der Spiegel that's not the case.

They want everything.

Even if you're not a target you're of interest -- either as a vector to a target (someone you know, regardless of how many "hops") or for something you have (information, or equipment/infrastructure as a proxy or platform for attack).

Think of this example: you happen to have ssh access to a VPS on the same metal that coincidentally runs another VPS that belongs to a guy who, as a favour, runs a totally separate server that hosts a "bad dude"(tm). Everybody in that chain, and everybody connected to everyone involved with the hardware, are direct targets; all of their work colleagues are too, for the same reason.

Now extend that up and down every technological stack and industry and you'll see how capricious the notion is.

And if you're not a US citizen, you've no rights, so who gives a shit? If you happen to be a US citizen, you've still no rights, because the people targeting you will just be members of another Five Eyes organisation.


Ok... First, if they're using my VPS to get to someone else's... um. Ok. So it really doesn't affect me. Sorry about your neighbor. He should cover his ass better and not keep top fkin secret info on a god damn VPS.

Oh, and even if this were like a possible scenario, why not just open random accounts until they strike gold? Or infiltrate the datacenter with a mind reading quad copter drone and plug a usb drive into your machine? I mean honestly, there's a 1000x easier ways to do this. The NSA isn't after your shit. Get over yourself and quit drinking the koolaid.


> Get over yourself and quit drinking the koolaid.

Given these are disclosures from their internal presentations, it's their koolaid that we're discussing.


That sentiment is possibly wrong, too.

The NSA's stated goal: Collect it all.

http://www.theguardian.com/commentisfree/2013/jul/15/crux-ns...


Yes, if it's in the Guardian and Snowden said it then it's got to be some sort of biblical truth.

Or, there's the simpler explanation, like you're really fkin irrelevant in the grand scheme of things. Chances are, if they want your shit, they could just stop you for a traffic stop, incarcerate you, waterboard the shit out of you until you cried and gave up your 4096 bit RSA key so they could read your private journal about your sad paranoid life.

These fkin ERMMGH NSA TAKIN YER SHIT articles are really stupid.



Yeah, my shit is irrelevant. My machine is not, though. When attacking other, relevant systems, agencies will always use infested low-key systems as a hop.

Is this a valid reason to harden security of your machines? Yes, it is.


Tinfoil hats for everyone.


> Use free software: As in speech. You want to use code that’s actually reviewed or that you can review yourself. There is no way to achieve that without source code. Someone may have reviewed proprietary crap but who knows.

Maybe this person forgot about Heartbleed? Or that Debian bug that was around for years? Or Shellshock, dating back to 1989?

Stop pretending and assuming that just because software is open source that it is automatically reviewed.


His claim is that it's necessary - not that it's sufficient.


Open source doesn't mean it HAS been reviewed, it just means it CAN be reviewed. The same cannot be said of closed source products. I'd rather trust the product that at least has the option of being reviewed by an independent source over one where you have no choice but to trust the creator.


The reason you are able to use those as examples is because they were caught and mitigated.

Open source doesn't magically prevent bugs. But it does ensure that when a bug is found, you'll hear about it and get a well-vetted fix pretty quickly.



