Timeline to remove DSA support in OpenSSH (mindrot.org)
257 points by throw0101d 8 months ago | hide | past | favorite | 159 comments



About a one-year time frame for complete de-orbit:

    2024/01 - this announcement
    2024/03 (estimated) - DSA compile-time optional, enabled by default
    2024/06 (estimated) - DSA compile-time optional, *disabled* by default
    2025/01 (estimated) - DSA is removed from OpenSSH


And then another 10 years until the last Linux distro LTS version has removed support for it :)


Missing from your timeline:

   2006-05-01 NIST SP 800-57 Part 1 Rev. 1 is published, recommending that 1024-bit DSA be phased out by 2010.
   2010 passes, but OpenSSH never added support for DSA with key lengths larger than 1024 bits.
   2015-08-11 OpenSSH 7 disables DSA at runtime.
The new announcement is just removing the config option to re-enable the algorithm. Hopefully most people stuck on DSA already started migrating in 2015 (ideally: finished migrating before 2010).


I don't understand the rationale for removing support for any algorithm. I can understand disabling support, but why remove it? I have often needed to connect to ancient systems that only support very old algorithms. From a modern ssh client, I will usually need to specify the needed algorithm on the ssh command line in order to connect.

E.g.

ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 root@x.x.x.x
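For hosts like this, a per-host block in ~/.ssh/config saves retyping the flag each time. A sketch (the host alias and address are placeholders, and the algorithm names must match what the device actually offers; on older clients the last option is spelled PubkeyAcceptedKeyTypes):

    # ~/.ssh/config
    Host ancient-box
        HostName x.x.x.x
        User root
        KexAlgorithms +diffie-hellman-group1-sha1
        HostKeyAlgorithms +ssh-rsa
        PubkeyAcceptedAlgorithms +ssh-rsa

With that in place, a plain "ssh ancient-box" picks up the legacy settings without loosening the defaults for every other host.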


They provide a good reason for completely removing the DSA code:

> We are also likely to start exploring a post-quantum signature algorithm soon and are mindful of the overall size and complexity of the key/signature code.

Vulnerabilities like Heartbleed and Log4Shell are what you get when you have limited developer capacity but insist on endlessly keeping legacy code around.


I am a proponent of ruthlessly deprecating, deleting, and decommissioning. I fully understand there are a LOT of downsides to this approach, but legacy code is such a huge and difficult-to-quantify drain on developer productivity, in addition to being a vector for exploitation and other bugs.

Yea, it is annoying to keep your systems up to date, and yes, some users (let’s be honest, a very small but vocal minority) cannot update and will be left in the cold. But security is everyone’s responsibility at all layers, and even stable OSS doesn’t owe it to you to support legacy cases at the expense of moving forward.

It sucks but I do believe hamstringing users with complex and unsupported use cases is (unfortunately) the right thing to do. The less support these old and vulnerable systems get, the more annoying or impossible they will be to maintain, and the more inclined users will be to shut down systems that probably should have been deprecated decades ago.

Bracing myself for ire…


Problem is that lots of cases just cannot be replaced, period. But those people will probably keep around some old Ubuntu and get hacked through that instead. Not removing the code, but keeping it around after the deadline until it actually needs work, would also make sense: once it needs work, just remove it. We might have like 2-10 years before that happens.

The whole thing is about security, and that kind of implies a “probably best before” tag anyway. Sacrificing security is not worth it, and your reasoning is sane.


Yep, I work with some old industrial hardware, and most of it is stuck in the year it was made and never upgraded... RS-232, RS-485, telnet, etc. are things you see very often. Now with modern networks, you can isolate those machines and the machines controlling/monitoring them pretty effectively, so those segments never touch the internet, but you still need to connect to those devices and use them. Telnet and RS-* just work, because no one complained about the zero security there and wanted them removed... but now we're removing stuff that's still in use on newer devices that are not even at half their 20-, 30-, or 40-year lifespans.

I understand the security aspect. I know that telnet is insecure, but I know when, how, and why it's insecure and use it accordingly... just add some --use-bad-crypto flag, maybe even make it a module/plugin, and leave it working as it did.


I don't see why such a flag is necessary when you can always use an old version of the software instead. That's your flag. Otherwise they would never drop support for anything ever, which seems less than ideal. Something simply existing in the code incurs a maintenance cost.


So how do I install the old version next to the new version in Ubuntu 34.04? Will the old version even compile with GCC 27? Or will I have to find some ancient Ubuntu image, run it in VirtualBox, then run wget/curl on a newer virtual machine (because old wget/curl won't support TLS 2.4 and won't be able to download the script or HTTP-POST the result afterwards), then copy the data to the old machine, run old ssh there, get the output, copy it to a new machine, and then HTTP-POST it from there?

Imagine if every piece of software were coded by this logic... nobody uses BMP images anymore? Just remove them from GIMP... if users want BMP support, they'll use GIMP 1.x. Security? Unencrypted HTTP is insecure, so just remove support for HTTP from Firefox/Chrome... if users need HTTP, they'll just uninstall the current version, back up their profile, install an old version that doesn't support the latest TLS standards, open that website, copy the text they need into Notepad, uninstall the old version, install the new version, restore their profile, open Gmail and paste the text into an email... oh wait, you've missed something and need to copy some more text... whoops, back to uninstalling.


You virtualise, or find someone willing to maintain some ancient version of the software on modern platforms (or you pay for it). If someone wants to maintain support for legacy protocols until the heat death of the universe, they are free to do so, but again it comes with a cost that not all projects can or should bear. Someone will have to think of how the ancient protocol works on every single software update - even if technically nothing changes that's still a maintenance cost.

Also, ossified infrastructure is not a good thing. That's yet another problem we need to solve as a civilisation. Not everything new is good but some old things are genuinely inferior and should be replaced.


> So how do I install the old version next to the new version in ubuntu 34.04?

You would install Nix and run something similar to "nix run nixpkgs-23.11#openssh <address>"


So, what's currently the oldest version of openssh that you can install this way?


  export LC_ALL=C
  nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/2322899a1fa85f6547004b2829af81e7b444f506.tar.gz -p openssh
  ssh -V
  OpenSSH_6.1p1, OpenSSL 1.0.0i 19 Apr 2012
Note that the first stable release of nixpkgs was in 2013.


> Otherwise they would never drop support for anything ever, which seems less than ideal.

In the realm of operating systems and protocols, that sounds absolutely ideal. Microsoft has the right approach here.


In an ideal world where the maintenance cost and added complexity do not matter, yes, but that is not this world and we cannot have everything we want without compromises.


That kind of attitude is why Windows continues to dominate the desktop market.

Granted, I speak very generally while this thread pertains specifically to OpenSSH. I also understand the added burden of maintaining more and more code, which end-users might not properly appreciate.

Ultimately though, people use computers to achieve something and expect software to help them in that endeavour. Software cutting off features, and thus the users, will always draw ire because it inhibits people from using computers to achieve something.

Arguments from devs that software must move forward mean nothing to users who want or need to do something right now.


This is a feature whose replacement was available 30 years ago, and the replacement of the replacement was available 10 years ago.

For comparison, Microsoft deprecated SMBv1 the same year OpenSSH deprecated DSA and removed it in 2016.


Fortunately I’m not competing with windows :). But real talk, I only have 40 hours a week and I put a lot of energy into providing my users features they ask for. In return, I ask of them to make it easy for me to continue to do that.

If you look at any company’s internal tooling, this is universally well understood. Migrations and upgrades are a pain in the ass in the short term but a net positive in the long term. I don’t want to break their systems, but software evolves over time, and if you expect something that worked once to always work, that’s not a realistic expectation for any software I’ve ever been a part of.


I would argue the notion of "it stops working" is almost unique to computer software (and hardware, to a significantly lesser degree).

Think about it: A computer is a tool, but most tools never "stop working" per se. That screwdriver? It used to work when it was first invented and it will still work a thousand years from now. That car? Keep it maintained and it will take you places for at least the better part of a century. That boat? We can keep boats floating forever.

Computers are among the few tools, if not the only ones, that demand to be replaced every few years, often before they have even broken down, which is what would normally merit a replacement.


> That screwdriver? It used to work when it was first invented and it will still work a thousand years from now.

Maybe it did not and will not -- an old screwdriver is probably flat-blade; now people use Phillips or Pozidriv screws, and in time most will be Torx.

> That car? Keep it maintained and it will take you places for at least the better part of a century. That boat? We can keep boats floating forever.

The same: the fuel and oil are changing (maybe you won't even be able to source gasoline in 2050 if everyone has migrated to electric vehicles), and specific spare parts are no longer manufactured.

I guess large old boats also come with expensive support contracts - similar to paying someone for super-long-term individual software support.

> Computers are one of the few, if not only, piece of tool that demands it be replaced every few years, and it probably hasn't even broken down yet which would merit a replacement.

Computers are also relatively new and quickly advancing in features, while the other examples you have mentioned were mostly "stable" for 100 years.


> That kind of attitude is why Windows continues to dominate the desktop market.

On the other hand, macOS is also pretty common on desktops and they don't honor backwards compatibility very much.


Two of the big reasons are keeping the code clean and security. Look at the latest Linux kernels and how they achieved most of their performance increases (hint: they removed old code and simplified the code paths). The second reason is to prevent its use. It is not good enough to consider just the "happy path." There are unhappy paths where an adversary is able to make use of these abilities. When the aggregate unhappy paths overwhelm the single happy path, it's time to bite the bullet.


... plus hardware vendors, some very large companies and security consultants financially benefit from churn


Consultants might benefit, I don't think that hardware vendors (especially network appliance vendors) really do: these kinds of changes are purely an expense for them, since they have to reallocate engineers to fix things that were otherwise on "autopilot" for their support contracts.


The announcement explains the rationale. If you don't like it, volunteer to pay for ongoing maintenance.

What system do you still need to access where your user has a DSA key?

Why can't you generate an RSA-2048 key, like you should've done over a decade ago?

The server might've generated a DSA host key along with an RSA host key, but that won't prevent you from connecting; it'll just use the RSA one if your client only supports RSA (of {RSA, DSA}). If you purposely disabled RSA host keys so that the server only has a DSA host key, it's time to generate an RSA host key and configure its use, just like an RSA key for the account you need to continue to access.

If you don't know of any such systems but you're worried you might run into one in the future, nothing is stopping you from using an old distro livecd to access such an old and insecure ssh server.
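Generating and enabling an RSA host key, as described above, is a couple of commands. A sketch (using a scratch path; on a real server the conventional location is /etc/ssh/ssh_host_rsa_key):

```shell
# Start clean so ssh-keygen doesn't prompt about an existing file.
rm -f /tmp/ssh_host_rsa_key /tmp/ssh_host_rsa_key.pub

# Generate a 2048-bit RSA host key with an empty passphrase
# (host keys are stored unencrypted so sshd can load them at boot).
ssh-keygen -t rsa -b 2048 -N '' -f /tmp/ssh_host_rsa_key

# Then point sshd at it in sshd_config and restart sshd:
#   HostKey /etc/ssh/ssh_host_rsa_key
```

The same ssh-keygen invocation (without -f pointing at a host-key path) also produces a per-account RSA keypair to replace a DSA user key.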


>I have often needed

That's the rationale. It helps motivate people like you to finally upgrade the ssh servers on these systems, or at least replace the keys with non-dsa ones.

And it'll always be possible to connect to these, just not with current openssh. There's old ssh, and there's other ssh implementations still.


In such situations you can build (and maybe just download) a version of OpenSSH from before this feature was removed.


Engineer: “I can write that code.”

Senior Engineer: “Should we write that code?”

Staff Engineer: “We should delete that code."


Salesperson: “I done sold a feature that needs that code.”


Does Azure support ed25519 keys yet?


I thought this was a joke, but they really don't: https://learn.microsoft.com/en-us/troubleshoot/azure/virtual...

what the fuck? What is Microsoft doing? ed25519 was added to openssh, what, 7 years ago? This boggles my mind.


Just about exactly 10 years ago, actually: https://lwn.net/Articles/590870/


It actually violates some federal contracting rules to support non-NIST keys. Maybe they don’t bother for that reason?


Azure's gov cloud is a separate API.


With a lot of shared software. Need a strong reason to introduce differences which if accidentally misconfigured could have large contract liabilities.


Maybe they're supporting TLAs' abilities to crack Microsoft users when 'necessary'?


Does AWS or GCP? Doing some web searches, it looks like at least parts of GCP do not:

https://issuetracker.google.com/issues/195231191

And some parts of AWS do support the key type:

https://aws.amazon.com/about-aws/whats-new/2021/08/amazon-ec...


No. :(


Some HP iLO and other servers, as well as some networking gear, use this. Not sure those devices can be upgraded.


You'd need to keep an older client around for old devices. Sort of like that VM with WinXP and IE6, the one that's only kept because it has a semi-working ActiveX control that some ancient device only interfaces with... shudder


This is already the case for very old target devices, that don't support the ssh v2 protocol at all; hence the existence of the ssh1 client (https://packages.debian.org/bookworm/openssh-client-ssh1).


Older clients may be a target for attack then.


Possibly. You don't want to use the older client for anything except connecting to the older system. Keep it somewhere offline (encrypted, whatever: distinctly non-executable), too. (Not that it would solve the issue altogether, see Stuxnet for a sufficiently motivated attacker and airgapped systems, but it's a viable mitigation.)


Yes, but I'd be much more concerned about the SSH server.


I dunno, exploiting a vulnerability that lets you steal SSH keys from a sysadmin seems a bit more useful than hitting a single server.


As usual, it seems like it's a good idea for the open source project (ssh in this case) to drop old tech and move on while those dependent on it should maintain it, for whatever that means.


> DSA, as specified in the SSHv2 protocol, is inherently weak - being limited to a 160 bit private key

This should be 160 bit equivalent, right? An actual 160 bit (non-EC) DSA key would be trivially brute-forceable.


No, it's genuinely 160 bit keys. The SSHv2 protocol mandates FIPS-186-2, which only allows 160 bit keys. 224-bit and 256-bit keys were only introduced in the later FIPS-186-3.


Ah, I mistook "private" for "public" key. The private key is indeed 160 bit for DSA per FIPS-186-3; it's the public key that's 1024 bit.

That's commonly used as the "security parameter" for DSA, though; 160 is very easy to confuse with e.g. a symmetric encryption key or hash length.
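To put the 160-bit figure in perspective (a back-of-the-envelope sketch, not part of the original thread): generic discrete-log attacks such as Pollard's rho run in roughly the square root of the subgroup order, so a 160-bit q yields only about 80 bits of security, well below the 112-bit minimum NIST has required since the SP 800-131A transition.

```shell
# FIPS 186-2 DSA: p is 1024 bits, but the subgroup order q is 160 bits.
# Pollard's rho needs ~sqrt(q) group operations, i.e. 2^(160/2).
q_bits=160
echo "generic attack cost: ~2^$((q_bits / 2)) operations"
```

That 2^80 work factor, not the 1024-bit modulus, is what bounds the strength of SSHv2 DSA.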


Don't remove it, add a "--yes-I-really-want-to-use-DSA" option. We sometimes have to connect to older systems that use insecure keys, and no those systems cannot be fixed (old network hardware, doing V2V conversions from old RHEL, etc)
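For what it's worth, releases that still ship DSA already gate it behind exactly this kind of opt-in. A sketch of the client config (the host alias is a placeholder, and older releases spell the second option PubkeyAcceptedKeyTypes):

    # ~/.ssh/config -- only honored by OpenSSH builds that still include DSA
    Host old-switch
        HostKeyAlgorithms +ssh-dss
        PubkeyAcceptedAlgorithms +ssh-dss

The announced change removes this escape hatch along with the DSA code itself, rather than merely leaving it disabled by default.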


    * What if I have devices that only support DSA?

    Removing DSA from OpenSSH will not remove endpoints that require DSA
    from the world and users may still need to connect to them. Although
    new releases of OpenSSH will no longer support DSA, past releases and
    alternate SSH implementations will continue to do so.

    We recommend that users with an ongoing need to connect to DSA-only
    endpoints maintain a legacy release of an OpenSSH client for this
    purpose, similar to what was recommended when support for the SSHv1
    protocol was removed.

    For example, Debian maintains a "openssh-client-ssh1" package built
    from OpenSSH 7.5 for the purpose of connecting to SSHv1 endpoints.
    This package or something similar is likely to be sufficient for
    DSA-only endpoints too.


So basically, the recommended solution is “instead of us maintaining DSA support, everybody should maintain their own fork of an old OpenSSH version”?


So the people providing software for free should support things indefinitely because the people you've paid money for those things won't?


Oh this is beautiful and perfect, couldn’t have said it better, people are so awesome in their entitlement


Well, they are seemingly explicitly calling out Debian as your new favorite place for SSHv1 support :-) Who are no more paid than they are.


I imagine the "people you've paid money" refers to the vendors of the old hardware that requires DSA.


Nah, it simply won't be maintained by anyone - just like the gear still requiring DSA. Nobody is going to stop you from using the last DSA-included OpenSSH build for the next few decades, just don't expect any updates.


I started using OpenSSH in OpenBSD 2.6, released in 1999, and I don't believe I've ever even used a DSA key. Here's a quote from the release notes of OpenSSH 1.2.3, released March 6, 2000:

"Any user can create any number of user authentication RSA keys for his/her own use. Each user has a file which lists the RSA public keys for which proof of possession of the corresponding private key is accepted as authentication. User authentication keys are typically 1024 bits."

ECDSA key authentication was added in OpenSSH 5.7, released in January 2011:

https://www.openssh.com/txt/release-5.7

Ed25519 key authentication was added in OpenSSH 6.5, released in January 2014:

https://www.openssh.com/txt/release-6.5

And RSA support has been around since the beginning.

I think the overlap between "must use DSA keys" and "uses modern OpenSSH" is practically zero, and the level of pushback in this thread doesn't correspond to reality.


If you have no problem using an obsolete algorithm, why is there suddenly a problem using an obsolete ssh? These complaints almost beggar belief.


Because the old algorithm only endangers the communication and the remote device (and this may even not be the case, as such an old device should not be exposed to the world), while the old client may compromise security of the local computer. (fortunately, bugs in SSH client are uncommon; with e.g. browsers it's a different story)

Of course, a reasonable solution would be to run it in some sandbox/VM.

Additionally, the old client will be difficult to use in a current OS because of library and general system incompatibility (Debian with openssh-client-ssh1 is a rare exception, and it's just a command-line ssh, not the library mentioned in https://news.ycombinator.com/item?id=38963372).


More or less, yes. You get OpenSSH for free; you are not paying for it, and not paying to support its development. Critically, you are not paying for its developers to support old, obsolete, broken cryptography. Keeping DSA would be an added maintenance burden that the OpenSSH team just isn't prepared to continue taking on.

If there's a wide need for it, hopefully everybody won't maintain their own fork; all that's necessary is people band together and maintain a single fork.


DSA support isn’t free. It requires maintenance. We aren’t entitled to lifetime support.


Is there active work being done on the DSA code recently?


The main source code for DSA is here

https://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/ss...

You can see that the team did a big refactor of key handling about 14 months ago that required multiple rounds of changes to the DSA code.

That's the sort of cost that legacy code brings - it's not about changing the DSA feature itself, it's about the cost of maintaining the DSA code when you make changes across the codebase.

In the original mail, DJM mentions that they'd like to explore a post-quantum signature algorithm. Adding that to the codebase is likely to require some broad changes to key management, and that will be less work if there are fewer supported key types.


> […] “instead of us maintaining DSA support, everybody should maintain their own fork of an old OpenSSH version”?

"Instead of us maintaining DSA for a smaller and smaller population, that small/shrinking population should take some responsibility on themselves."


You could alternatively install other SSH clients (non-OpenSSH) for this purpose.

  $ rpm -qi putty | tail -1
  Putty is a SSH, Telnet & Rlogin client - this time for Linux.

  $ rpm -qi dropbear | tail -2
  Dropbear is a relatively small SSH server and client. It's particularly useful
  for "embedded"-type Linux (or other Unix) systems, such as wireless routers.
The focus for SSH is safety. These others shift more to broad compatibility; I do wish they would throw warnings for weak ciphers.


Probably not since replacing the ancient equipment only capable of DSA is likely cheaper than maintaining an OpenSSH fork.

I’m going to assume that the hardware that supports DSA only has long been abandoned by its manufacturer.


You'd be surprised; the sheer amount of black-box vendor nonsense out there borders on astronomical. Weird telco stuff, old Cisco hardware, BMCs in servers - it would mean gutting probably millions of dollars of equipment. It's stupid, but throwing the baby out with the bathwater over openssh won't make anyone happy.


Some day that stuff is going to unexpectedly break and cause more problems. There are two choices: pain now, when you have a chance to plan, or more pain later, when you're forced to react.


Not "everybody", just "not us".


Or people just won't upgrade the entire operating system - say, in an industrial control system - because this old version of SSH no longer compiles on modern operating systems. That makes the whole system more vulnerable, just so it can keep communicating with some 10-year-old device that cannot be easily replaced.

Again, air-gapping and limiting the physical extent of the network to the room or building would provide significant protection against attacks here.


I mean that's the deal you enter into with open-source software. In exchange for having full access to the source code for free, you should have no expectations of entitlement of new features or continued support from upstream.


Yes.



> ...past releases and alternate SSH implementations...

Absolutely not faulting the devs in any way for wanting to rid themselves of having to maintain the DSA bits, but this idea of "just" using past releases seems theoretical. I'd be surprised if I managed to compile an OpenSSH release from even just a couple of years ago. There'd be some glibc incompatibility or something equally gnarly.


> […] but this idea of "just" using past releases seems theoretical.

* https://packages.debian.org/search?keywords=openssh-client-s...


You can use a Windows build on wine.


I don't think I've ever seen an actual DSA key in use in SSH, and I was using it before there was an RFC. Even the oldest actual deployments tended to ignore the patent and use RSA.

> systems cannot be fixed (old network hardware, doing V2V conversions from old RHEL, etc)

If you can't update a system, you have no business exposing it to a network. Full stop.


When I got into Linux around 2010 there was something that encouraged me to make a DSA key instead of an RSA one ...


Encouraged, sure. Last time I saw something that actually required DSA, it was literally decades old, and that encounter didn't happen in this decade either. I do have some tools for ancient machines, such as with SSHv1 support, but it's more akin to archeology than access to working machines.


Arch Wiki was still recommending dsa around that time: https://wiki.archlinux.org/index.php?title=SSH_keys&oldid=66...


There was a lot of suspect advice on the TLS and SSH pages. I did a pass a couple of years ago, updating the recommendation to EC keys if possible and RSA otherwise, with links to relevant academic sources.


> If you can't update a system, you have no business exposing it to a network. Full stop.

What about isolated networks, a single Ethernet switch for example, used for industrial automation? We can't keep replacing equipment every 5 years because they keep changing crypto protocols.

Many industrial networking protocols have no security at all. No encryption. Not even a password. They achieve their security by being disconnected from the outside world, constrained to a single room or building. If an attacker can get physical access to the wires, then we have a lot more to worry about than network security.


In this case though literally what are you even complaining about? If it's physically isolated so you don't need to update the old hardware, then you don't need to update OpenSSH either. Right? You can just leave whatever terminal you use right now on whatever it runs right now forever.

That's the basic discontinuity here. The reason one would want to keep OpenSSH updated and continuing to receive development work is for use in a networked environment without full physical security from the world. If you have something stuck on DSA it shouldn't have any exposure to the world. So there's no problem here. At the very least one could softgap with old-ssh-in-VM that is completely restricted in what it can access.


Sorry, I likely screwed up my reasoning in that comment, I was under a lot of pressure at the time. The option to delete my post is gone, so I can't remove it.


> If you can't update a system, you have no business exposing it to a network. Full stop.

Ah, another know-it-all telling everyone how to do things.

Okay, I cut the CAT5 cable with a wire cutter with nonconductive grips (should be safe!), and you don't whine that you can't access your mortgage/medical history/whatever else and have to be physically present at the facility to receive the documents.

Deal?


Deal, because I don't want my mortgage, medical history, and other important and sensitive documents exposed to the internet by companies that can't be bothered to secure their systems.

The alternative is these systems never get updated until the next big "welp, the private information of 10 million people just got leaked to the open web". Equifax, for example.


The sibling thread is for you.


Don't confuse the Internet with networks, please. Older machines are completely fine to use in private, firewall-protected networks, as long as they only face the LAN. SSH will still be required to access them.


They’re the perfect target to hit when looking to pivot across/through a network though. Not to mention the risk of being hit during a ransomware event. Keeping around legacy hardware/software is the pinnacle of fuck around and find out.


Instead we'll have the company going bust for constantly having to replace critical machinery in the name of "security". Sometimes it is necessary to keep old equipment around and it can be secured using physical methods (an air-gap).


You're offering hypothetical, worst-case whataboutisms. In the real world, any shop that isn't stupid will use defense-in-depth to protect the unupgradable bits.


> You're offering hypothetical, worst-case whataboutisms

Otherwise known as “security”, yes


Security is about risk profiling and making good tradeoffs between things like cost, convenience, timeliness, and confidentiality/integrity/availability. All computer security is basically futile in the face of a sufficiently motivated attacker, so chasing perfection is wasting your time.

If you're doing home security, you don't use armed guards and reinforced steel doors, with the defense in depth of an extra-secure bulletproof safe room, because the security would cost more than the value it provides. You might use a good deadbolt though.

The same goes for computer security. In combination with certain security approaches like air gapping, a technically insecure out-of-band management network can quickly become a dramatically less plausible means of exploitation compared to, say, unsexy things like email phishing attacks. So replacing all your servers with ones that have supported out-of-band management systems may simply not be a reasonable priority.


Whatever. Imaginary paranoia strawman that is the wrong kind of paranoia: the unactionable, ego-based kind. You completely ignored defense-in-depth approaches like airgapped systems and adding additional layers and protections to mitigate your hypothetical non sequitur. If you fail at these then you don't actually understand security and are just arguing without a leg to stand on.


Your attitude here is confusingly hostile.

But sure! If I have a server I currently use OpenSSH to connect then I certainly _could_ airgap the machine and require anyone using it to be in physical proximity to it. But don't you think that might be unrealistic in the vast majority of scenarios?


If you have a server that you can't secure properly because it only supports obsolete, known-broken cryptography, then yes, absolutely, you should airgap it or find some other way to protect it.

Or you could... not do that... and expect to be hacked, over and over.


> Otherwise known as “security”, yes

Content-free, arrogant, hostile quip that you chose to start.

> Your attitude here is confusingly hostile.

I don't know, perhaps it's you who is projecting and has the problem.

Have a nice day and a happy new year. :]


But the airgap scenarios are very real, and they make it more difficult to just go online and grab an old ssh client that will do the job.

It seems that the argument for removing support for the old algorithms involves the need to maintain them in the new releases. This only becomes a problem if/when the code and/or regression testing is refactored. So eventually the effort required to remove support becomes less than the effort needed to continually support the old algorithms.

The OpenSSH maintainers can of course do anything they like, but removing support for legacy algorithms is basically passing the problem down to (probably less capable) users who are stuck without the ability to connect to their legacy systems.


You do recall that the source control doesn't disappear, even after support is pulled? I've built ancient versions (specifically in case of SSH, to get arcfour for whatever convoluted system); this wasn't a simple operation, but feasible, even for someone with just a general knowledge of SSH and its build toolchain.

Maintaining code also takes time and effort: smaller codebase, effort better spent. If it's too costly to just keep an ancient version of ssh around, and even too costly to pay someone to do that for you, how's it suddenly NOT too costly for the maintainers? If you're going to the lengths of having a special airgapped network of legacy systems, how do you NOT have the tools to use with those systems?


I think you missed my remark about the inability to pull an old version from within an airgapped environment. It's usually still possible, but the level of difficulty can vary depending upon security requirements. Imagine a security officer refusing to approve the introduction of an older and insecure program into a secure environment.


Assuming the airgapped systems are currently working, nothing will stop them working as they are in the future.

It's already insecure, so don't update.


I think that you are making a lot of assumptions about the purpose of airgapped systems. Why would you assume that no changes or development work occurs? In my experience, there are often legacy components that are a critical part of a larger system. Also, in my experience, such environments are often segregated into smaller enclaves. Some of those may have the most up-to-date tools available.


I very much did not miss that: "how do you NOT have the tools to use with those systems?"

The hypothetical airgapped secure environment, running an old version of SSH (which only supports DSA), has no requirements for an SSH client, just "eh, just bring whichever openssh that you happen to have, and let's assume it works"? That's a failure to plan: if your network is airgapped, you can't expect to have client software in compatible versions appear out of thin ether.


I appreciate that you're trying to drill down and improve your understanding of such environments, which you obviously do not have much knowledge of. I'm limited in how much specific information I can disclose, but I'm certain that I'm not the only one who has worked in these environments and faced these challenges.

Here's a hypothetical example of a situation closely matching some of my experiences: A long-term support contract exists for some legacy system that cannot be updated because it is under configuration control. The contract involves peripheral development activities, which are best done with the most modern tools available. The whole environment is airgapped, and has security protocols that require security updates to the peripheral development systems; these are done under a strict and bureaucratic review process. The legacy system interoperates with the development system via a single network connection, which is monitored by a separate entity. (The system is airgapped, but is part of a larger airgapped network, and is protected from unauthorized access even within the airgapped environment.)

So you've got a new environment talking to a legacy environment via SSH, and they need to share a common security algorithm. If a new development environment is spun up and its SSH client does not support the legacy algorithm, a long and complex delay follows: multi-level approvals are required, from bureaucrats who are generally not qualified to understand the problem and thus inclined to deny approval, before the legacy SSH client software can be introduced. That client will be compared with the modern SSH client for any change history related to security issues, which would include the deletion of these security algorithms. The legacy SSH client would be assumed to be a security risk by the ignorant bureaucrats, and a months-to-years-long process ensues to convince them otherwise.


So, this essentially boils down to "someone else, ANYONE not me, should simplify the bureaucratic process for me (because there's the actual issue), and I've picked OpenSSH maintainers for the job. Oh, and for free, too."

You're not expecting the toolchain to appear out of thin ether, that was my misunderstanding: you fully expect volunteers to provide you with it for free, for your highly specific situation; in return, you offer...nothing? That's not a very enticing trade.

I sense there may be other ways around this, but those would a) cost you (in a broad sense; after all, the infrastructure is supposedly critical) money, and/or b) cost you (perhaps in a stricter sense) time, effort, perhaps influence. I agree that's rather inconvenient, given the alternative.


Personally, I'm able to do what is needed to make things work. My whole point was that by pushing the work from the OpenSSH dev team to downstream, the sum total of work will increase.


Old outdated versions of OpenSSH are also fine to use in private, protected environments where security isn't a concern, so there's not much of a problem.


Exactly. There are untold tens to hundreds of millions of critical infrastructure systems that cannot be upgraded and contain insecure, horrible SSH implementations. Defense-in-depth through layers of other security measures and isolation permits them to be reasonably secure for their use prior to lifecycle replacement where possible.

Furthermore, no one should place remote access servers on the internet; they should instead go on a private, internal network behind an infrastructure VPN jumpbox such as OpenVPN or WireGuard.
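As a sketch of that jumpbox pattern, a minimal WireGuard client config might look like the following. All keys, names, addresses, and the subnet are placeholders, not values from this thread:

```ini
# /etc/wireguard/wg0.conf on the admin workstation (illustrative values only)
[Interface]
PrivateKey = <workstation-private-key>
Address = 10.10.0.2/32

[Peer]
# The jumpbox sitting in front of the internal management network
PublicKey = <jumpbox-public-key>
Endpoint = jumpbox.example.net:51820
# Route only the management subnet over the tunnel
AllowedIPs = 10.10.0.0/24
```

With something like this up, the legacy SSH servers never face the internet; you `wg-quick up wg0` and then ssh to their internal addresses through the tunnel.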

Only a few extremist developers in control of all of their own software and who don't have to interact with anything in the real world can maintain the idealistic purity to forever run only the latest version of everything.


> idealistic purity to forever run only the latest version of everything

But the OpenSSH devs are specifically saying “just use the old version if you need this”?


You're sweeping several huge assumptions under the rug. While it might incidentally work for the moment, it isn't a long-term solution.


I'm not sure that I understand this.

The openssh developers supporting outdated systems and software forever also isn't a long term solution. Why should they pay this cost, but not you (or your company)?

If you can keep unsupported hardware in operation, why can't you keep a containerized openssh image around, or maybe a VM image, or ideally a statically linked executable?

Maybe your company can hire an expert in software archival to set this up and maintain it if needed, or an extra developer to maintain an openssh fork that supports your environment.

Expecting other people (who you don't even pay) to support your outdated systems doesn't really make sense.


It seems only fair to me that if someone is insisting that they must connect to ancient systems that they should be expected to use only-slightly ancient software to do so. Or fork it, of course. If the team doesn’t want to be responsible for maintenance you’re welcome to take it on.


They said it has been disabled by default for years. There is a cost to maintaining it and a security risk for untested code. I agree with this removal.

> those systems cannot be fixed (old network hardware, doing V2V conversions from old RHEL, etc)

How old are these old systems? V2V conversions? What version of RHEL?


I tried to look for active DSA keys now, and didn't see anything but don't feel confident that there are in fact none.

Hmm.


Yeah old switches, BMCs, and other embedded SSH servers often use it. Really that stuff is too old to be using in a production environment but sometimes reality is different.

Keep an older release of openssh if you need it for those. There's no sense keeping obsolete code for obsolete use cases in the codebase; it's a maintenance and testing headache and a security risk.


I don't want to be nervous about whether I need it, I'd prefer an alert when it's used, turned on by default for a year.


It's been disabled by default since 2015 - you need to enable support for them in the config file, otherwise the client will error out. In other words, if you don't have the following line in your ~/.ssh/config, you're fine:

PubkeyAcceptedKeyTypes=+ssh-dss
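If you do need DSA for one straggler, the safer pattern is to scope the override to a single host rather than enabling it globally. The host name below is illustrative, and note that newer OpenSSH releases spell the client option `PubkeyAcceptedAlgorithms`; you may also need the host-key side:

```
# ~/.ssh/config -- enable DSA only for one legacy device (name is made up)
Host legacy-switch
    PubkeyAcceptedKeyTypes +ssh-dss
    HostKeyAlgorithms +ssh-dss
```

Everything else on the machine keeps the modern defaults.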


That's great. Do you know what happens when an old unattended client with an old key connects to a new server?


The same as happened last year, and will happen next year, since your old client isn't being updated.


So it's only being ditched from the clients, not the servers? I'm happy to hear that.


Assuming you haven't edited it, I misread your comment and my reply is probably incorrect.

You presumably have a configuration option allowing the old encryption in your server configuration, which is your warning that you're running a legacy system.


> We sometimes have to connect to older systems that use insecure keys, and no those systems cannot be fixed

At least there's now a way to tell management "hey, there is no choice, <thing> must be replaced because we literally won't be able to connect to it any more".


Evil manager from hell: nope, you can still reconfigure the <thing> to use telnet. At least it would be then plain obvious that there is no security, instead of the current security theater with DSA.


> Evil manager from hell: nope, you can still reconfigure the <thing> to use telnet.

Possible reply for some folks: our cyber-insurance mandates encryption for all logins.


Telnet uses double rot 13 encryption.


The grind of audit requirements tends to mean those managers are a lot more willing to do the right thing now – it’s less risk than putting your name in writing on the thing which the auditors / insurance company will be using to fail you.


I don't think anything using SHA-1 for security is going to pass an audit anyways...


Yes, or telnet. My point was that it used to be that random acts of management/ BOFH were more common because they could play politics out of consequences; now that lots of people have insurance or regulatory checks, that’s harder to do. That paperwork might not be the most efficient way to do it but it does at least produce a slow grind forcing the business people to pay attention to problems they used to ignore.


Or just use an existing version of OpenSSH. They aren't sending a terminator back in time to purge DSA support from all of time.


That would be a better timeline than this one.


Worse than old network gear is technically incompetent business partners.

Speaking in very general terms, let's just say that we've had to maintain an extremely locked down machine running a version of sshd from about a decade ago hosted in its own DMZ, all for one particular partner. It is monitored for everything we can think of, and I still find myself stressing about possible ways someone could escape from that machine - I'm mostly surprised we haven't been attacked through them yet.

Their problem is they bought proprietary software that's no longer supported, and look at it as a one-time purchase rather than a forward commitment. Our problem is the nontechnical relationship is important and we can't cut them off.


Indeed. I can't imagine what it's going to be like 10 years from now, when the mountain of SaaS offerings starts disappearing and there are zero options to "run it a little longer".


It's already disabled by default, which is basically what you are asking for. However, disabling it by default doesn't solve the problem of the maintenance burden. That's what this change is meant to solve:

> [...] we no longer consider the costs of maintaining DSA in OpenSSH to be justified.


exactly. and if the added burden of having to maintain (or more likely, just download) a version of ssh that supports dsa means that some piece of legacy hardware is no longer cost effective, then it's probably time to upgrade that legacy hardware.

and you (the general you) should probably be thanking the openssh folks for all the work they've done for the last 8+ years keeping the dsa code around which allowed you (the general you) to externalize that cost.


Couldn't you grab an old distro image (or livecd), install/run it in a VM, and use that? Archive.org even has an iso of redhat 6.2. :)


Write a guide for how to obtain and install an old version of openssh for that usecase. Make it available within your org. Problem solved.


Use an older version of OpenSSH then.


Do they support telnet?


But then Linus from LTT will end up using DSA by accident! /s


Does this include ECDSA?


No.


What is DSA? I checked the wiki and it looks like RSA for signatures, using modular exponentiation. RSA is going to stay for a while, so why is DSA being removed?


DSA was an alternative to RSA developed by the NSA, which used (other parts of) the US government to push for its implementation. It was never entirely clear why they were doing that, and there have always been (unproven) suspicions that the NSA wanted it adopted over RSA for underhanded reasons. Due to this history, it was never anywhere near as popular as RSA. Anyway, the US government (NIST) now recommends use of DSA be discontinued. By contrast, RSA was developed in academia and its adoption was independent of the US government’s influence (and at times even against its preference for DSA instead)


My impression is that RSA was covered by a patent (from 1983 until 2000). DSA was attractive in part because, while patented, the primary patent was held by the US government and royalty-free.

I have vague memories (from circa 1995 comp sci undergrad coursework) that discrete log and integer factorization both appeared similarly difficult as mathematical/algorithmic problems.

Thus, if you had to choose between a patent-encumbered protocol and a royalty-free one, it would make sense to choose DSA. That said, it seems to have never offered a particular advantage, and the modern view appears to be that DSA is weaker in practical terms.


RSA and DSA rely on different number theory problems. RSA relies on prime factorization being hard while DSA relies on the discrete logarithm being hard.
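As a toy sketch of the discrete-log side: the signing and verification equations of DSA can be run over deliberately tiny, utterly insecure parameters (real DSA uses a 1024-bit p and a 160-bit q, which is part of why it's being retired). All numbers here are made up for illustration:

```python
# Toy DSA over tiny parameters, purely to show the discrete-log structure.
# p, q, g chosen so that q divides p - 1 and g has order q modulo p.
p, q, g = 23, 11, 4          # 4^11 ≡ 1 (mod 23)

x = 7                        # private key
y = pow(g, x, p)             # public key: recovering x from y is a discrete log

def sign(h, k):
    """Sign a (toy) message hash h with per-signature nonce k."""
    r = pow(g, k, p) % q
    s = (pow(k, -1, q) * (h + x * r)) % q
    return r, s

def verify(h, r, s):
    w = pow(s, -1, q)
    u1, u2 = (h * w) % q, (r * w) % q
    return (pow(g, u1, p) * pow(y, u2, p) % p) % q == r

r, s = sign(5, 3)
print(verify(5, r, s))       # True
print(verify(6, r, s))       # False: altered hash no longer matches
```

Nothing here factors anything; the hardness assumption is entirely about undoing `pow(g, x, p)`. (As an aside, reusing or leaking the nonce `k` recovers `x` instantly, which is the classic DSA footgun.)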


Telnet will still work. So if you're developing an embedded device that will keep functioning after 20 years, it's best to not use any crypto at all. I'm half joking here.


If you have an embedded device that has to work for 20 years without updates, you'd better design its networking so that telnet isn't an issue in your threat model.


It would have to be on a small isolated LAN, e.g. within a machine or section of a plant.

Many buses used in industrial control equipment such as CAN bus and Modbus do not use encryption. I suppose there's more risk if you're using TCP/IP and connect laptops that have been online to the LAN, because it's such a common protocol.

But again sophisticated malware could just wait until you have your USB CAN dongle connected to the laptop, and still attack the CAN bus. Heck, CAN bus and Modbus don't even have any password protection at all.


CAN bus is more like a layer 2 bus, so it should not really bother with encryption, just as Ethernet doesn't provide it. It all comes from the layers above, and there have been proposals to add encryption or authentication to CAN. The big issue is that in classical CAN you only have 8 data bytes per frame to work with.


This is kind of a nitpick, but Ethernet _does_ bother with encryption, at least in recent versions of its standards. Obviously Ethernet has a long history and this is all optional, but it's pretty straightforward to set up 802.1X / MACsec (+MKA) on a LAN in such a way that all traffic is encrypted at L2.

I've never heard of this being used with really low-power embedded stuff, but if you stretch your definition of embedded to the point where you include things running stripped down Linux, this is a pretty viable setup if you have those devices distributed across a LAN with a managed switch in the middle.


If that device connects to WiFi you've already lost.


802.11b devices can still connect to many networks (depending on whether they've disallowed legacy clients for performance reasons).


telnet does allow encryption via the '-x' flag on Linux, maybe others too.

But I never used '-x' myself, so I have no idea how it works.


At least for BSD telnet, -x has been enabled by default for years now, but the option actually doing anything is predicated on Kerberos.


Apple removed telnet from Mac OS. And ftp. You're installing a client anyway because of the security paternalism of others.


There's lots of ways to ensure security aside from at the protocol level, and DSA helps with compatibility, so this just seems like a bad idea to me. I don't think anybody was really accidentally using DSA not understanding the implications.


If you need this support, perhaps you should pay someone on the OpenSSH team (or an independent 3rd party) to support it. It's not free to keep code around, unfortunately; every bit adds a maintenance burden. It's an open source project, and no one is entitled to anything in particular.



