> However, it's interesting to note that in both 2002 and 2024 we got a backdoor rather than a bugdoor.
As far as we know.
Related, there was a pretty interesting backdoor-by-bug attempt on the Linux kernel (at least, one that we know of) back in 2003: https://lwn.net/Articles/57135/
The Linux "bug" was unsophisticated by modern standards, but you could imagine a modern equivalent that's harder to spot:
Make the "bug" happen across several lines of code, especially if some of those lines are part of existing code (so don't appear in the patch being reviewed). Ensure the compiler doesn't warn about it. Make the set of triggering events very unlikely unless you know the attack. It would be very surprising to me if three letter agencies hadn't done or attempted this.
The problem with these is that bugdoors require you to target way more valuable stuff compared to backdoors. With a backdoor you can target practically any library or binary that is being run with root privileges, while with a bugdoor you can only really target code that is directly interacting with a network connection.
Direct network facing code is much more likely to have stringent testing and code review for all changes, so as of now it seems a bit easier to target codebases with very little support and maintenance compared to an attack that would target higher value code like OpenSSH or zlib.
I imagine there's a cross-TLA committee that analyses all the available zerodays, and meets quarterly to prioritise what order and which agencies can burn those zerodays when needed. And probably KPIs for each agency to add new zerodays which move their agency higher up the list to potentially burn the really good ones.
Even further back, someone claimed that a three letter agency had paid some developer to introduce a backdoor into OpenBSD (or possibly OpenSSH).
Theo did not believe this and publicly disputed the claim, even revealing the name of the whistle-blower. But I have always felt the story rang true, and Theo should not have been so dismissive.
Can't find the story, but it should be on the mailing lists somewhere
It was talk of a backdoor in OpenBSD's IPsec stack. The software was audited and nothing was found. The person who stepped forward (after being named) claimed on Twitter to have been formerly involved with the FBI and a supposed project of theirs looking into the feasibility of infiltrating the OpenBSD developer sphere in order to plant a backdoor, but said the project never even reached the planning stage.
Edit: the discussions and auditing of the IPsec stack happened around 2011 if memory serves me right. The supposed backdoor "happened" a decade earlier. The "agent" was named Greg Perry.
I remember this, it was the FBI. OpenBSD people did a huge audit and nothing was found. That was also like 20 years ago.
Also, other articles stated that it never happened.
Plus, the "backdoor" in OpenSSH was a Linux-only thing, related to systemd. It never affected OpenBSD. That is because Linux distributions patch OpenSSH into "dependency hell". I believe the systemd people are doing something about these dependency chains.
The "thing to do" about the dependencies is not to have them in the first place. Distributions where patching OpenSSH to add a libsystemd dependency instead of adding 15 lines of code.
And at least one of them was intentionally whistleblown to create an OpenBSD witch hunt, wasting OpenBSD developers' time and distracting all the other *BSD and Linux security devs, while the _real_ target slid under the radar...
At this point, what makes us think all major contributors are not on the payroll of one state agency or another? The attack surface of the whole software supply chain is huge.
"Facebook on Monday joined a lawsuit pressing the Obama administration to allow it to disclose more details of its forced cooperation). Google and Microsoft filed suit in June."
Not to mention Twitter files.
These are just the companies with enough legal gravitas and social gravity to even mention the canary dying.
> Regardless, I'm not convinced we can defend against this with the way we're currently thinking about operating system design.
Amen!
On that note: Why is it so so difficult to set up a (rootless) container/sandbox correctly? (I mean, look at what runc does – shit's incredibly complex!) And why is it next to impossible to nest containers without privileges and arcane knowledge of how some of the underlying kernel syscalls work? Even those 250 lines of code to landlock `make` that the author mentions sound awful to me.
I don't want to have to set up a sandbox for every single application by hand, let alone set up rules for all things that a malicious application could possibly exploit. Instead, I want security-by-default! Have every application run in a tight sandbox by default and let the application specify what permissions it needs, so that I only need to review those and can grant them as I like. Meanwhile, deny access to everything else!
Clearly, we are not (Linux is not) ready for this yet – we lack both a good UI for all of this permission handling and an agreed-upon contract that all application developers can follow.
In fact, for the vast majority of applications we don't even really know (or have documented) what permissions and kernel syscall access they would need, so it'd be incredibly hard to switch to a least-privilege-based approach overnight.
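To make the "arcane knowledge" point concrete, here is roughly what even a trivial policy ("this process may only read and execute things under /usr") looks like with raw Landlock syscalls. A minimal sketch assuming Linux 5.13+ with Landlock enabled; error handling and ABI-version probing are omitted, and real code needs both, which is part of why this feels so heavyweight:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/landlock.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    /* Declare which filesystem access types this ruleset governs. */
    struct landlock_ruleset_attr ruleset_attr = {
        .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                             LANDLOCK_ACCESS_FS_READ_DIR  |
                             LANDLOCK_ACCESS_FS_EXECUTE,
    };
    int ruleset_fd = syscall(SYS_landlock_create_ruleset,
                             &ruleset_attr, sizeof(ruleset_attr), 0);

    /* Allow those access types only beneath /usr. */
    struct landlock_path_beneath_attr path_beneath = {
        .allowed_access = ruleset_attr.handled_access_fs,
        .parent_fd = open("/usr", O_PATH | O_CLOEXEC),
    };
    syscall(SYS_landlock_add_rule, ruleset_fd,
            LANDLOCK_RULE_PATH_BENEATH, &path_beneath, 0);
    close(path_beneath.parent_fd);

    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);   /* mandatory before restricting */
    syscall(SYS_landlock_restrict_self, ruleset_fd, 0);
    close(ruleset_fd);

    /* exec the sandboxed workload here; filesystem access outside /usr
     * is now denied for the handled access types */
    return 0;
}
```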
The solution to that is for linkers to generate those permissions and emit a manifest section in the ELF file so the OS can handle it transparently.
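Purely as an illustration of the idea (nothing like this exists today; the section name and manifest format are invented), the manifest could be as simple as a toolchain-emitted section that a future loader consults before granting the process any ambient authority:

```c
/* Hypothetical illustration only: a custom ELF section carrying a
 * declarative permission manifest for the loader to enforce. */
__attribute__((section(".permission_manifest"), used))
static const char permission_manifest[] =
    "fs.read=/usr:/etc/myapp\n"
    "fs.write=/var/lib/myapp\n"
    "net.connect=tcp:443\n";

int main(void) { return 0; }
```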
My wish is that FOSS software test engineering approaches and tools evolve to appropriately model and formally verify code behavior in the spirit of what the seL4 project did, and maybe further. Similarly, system behavior should also be formally specified and provable. Until we get there, it's one step removed from whipping up untested code ad hoc and YOLO'ing in the hope it all works out. It's going to take a lot more work and a new way of defining constraints, relationships, and invariants, but it's unavoidable if we want to prove that code behaves as intended.
PS: Neither "just rewrite everything in X where X = Rust", nor "just use fuzzing", nor "just use MISRA coding standards" gets us there. Holistic improvements help, but not with the fundamental deficiency above.
That would be a great solution, but full formal verification is a very high bar.
The more realistic answer is to use safer languages than C for this sort of critical work. Rust and Ada spring to mind. They could even expose a C ABI/API. From a quick search, it looks like this has already been achieved for TLS, but it sees little real usage. [0] There are also Rust implementations of SSH (Russh and Thrussh), but I don't think they expose a C ABI/API.
I'm surprised this rather obvious solution doesn't seem to even receive serious consideration. I'd be surprised if performance was seriously impaired. This blog post [1] found Rustls to be faster than OpenSSL. I couldn't find a similar performance evaluation for Russh or Thrussh.
Well, the important things that run the world demand rigor. The "it's a hobby" defense rings hollow when decades of slipshod processes have repeatedly led to a class of failures that has recurred dozens and dozens of times.
That's not what I said, and it's not what we're talking about.
There's very little formally verified software out there, due to how challenging it is to use formal methods in large software solutions.
It makes little difference here whether the developer does it for a living. C codebases are prone to certain kinds of bugs that don't tend to arise when a safer language is used. We see a steady stream of these sorts of defects in FOSS C codebases and proprietary ones. Microsoft's closed-source TLS library has had similar issues, [0] as has Windows more broadly of course.
All that said, it would be great to see a serious and well-funded effort at a formally verified TLS implementation. Until then, we have the option of just using safer languages, but it's not getting much traction.
I helped out reviewing the winning contribution[1] by Linus Åkesson, for what I now know was the final contest.
Linus, aka ”LFT”, is a remarkable and truly talented person in so many ways. If you haven’t heard of him before, I suggest you check out all of his projects.
Was there ever a writeup of exactly how the XZ exploit worked? I mean exactly, I get the general overview and even quite a few of the specifics, but last time I checked no one had credibly figured out exactly how all the obfuscated components went together.
That is, as it says in the title, about the Bash-stage obfuscation. That’s fun but it’d also be interesting to know what capabilities the exploit payload actually provided to the attacker. Last I looked into that a month or so ago there were at least two separate endpoints already discovered, and the investigation was still in progress.
I agree 1000% with this. One thing I don't see addressed in the article you reference, though, is whether any OpenSSH maintainers spotted the addition of a co-maintainer to xz utils and did any due diligence about it.
Seems unlikely. xz is not a dependency of OpenSSH.
It's only a transitive dependency of sshd on Linux distributions that patch OpenSSH to include libsystemd which depends on xz.
It's wholly unreasonable to expect OpenSSH maintainers to vet contributors of transitive dependencies added by distribution patches that the OpenSSH maintainers clearly don't support.
> Very annoying - the apparent author of the backdoor was in communication with me over several weeks trying to get xz 5.6.x added to Fedora 40 & 41 because of it's "great new features". We even worked with him to fix the valgrind issue (which it turns out now was caused by the backdoor he had added). We had to race last night to fix the problem after an inadvertent break of the embargo.
> He has been part of the xz project for 2 years, adding all sorts of binary test files, and to be honest with this level of sophistication I would be suspicious of even older versions of xz until proven otherwise.
Yeah, what's posted by you and other users so far is stuff I already know: build scripts, injection, obfuscation. I'm more looking for a careful reverse engineering of the actual payload.
I haven't looked again in months, but I'd be interested in the same thing you're looking for. I poked at the payload with Ghidra for a little bit, realized it was miles above my pay grade, and stepped away. Everybody was wowed by the method of delivery but the payload itself seems to have proved fairly inscrutable.
The TL;DR is that it hooks the RSA bits to look for an RSA cert with a public key that isn't really an RSA public key; the pubkey material contains a request from the attacker, signed and encrypted with an Ed448 key. If the signature checks out, system() is called, i.e., RCE-as-a-service for the attacker.
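For readers who want the shape of it without the Ghidra details, the reported control flow looks roughly like the sketch below. This is illustrative only, not the decompiled payload, and every helper name here (extract_payload_from_modulus, attacker_ed448_pubkey, etc.) is a hypothetical stand-in for what analysts have described rather than a symbol from the actual binary:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct rsa_st RSA;                 /* opaque, as in OpenSSL */

struct payload {
    const unsigned char *data;             /* attacker's command material   */
    size_t len;
    const unsigned char *signature;        /* Ed448 signature over the data */
};

/* Hypothetical stand-ins for the payload's internal routines. */
bool extract_payload_from_modulus(const RSA *rsa, struct payload *p);
bool ed448_verify(const unsigned char *pubkey, const unsigned char *sig,
                  const unsigned char *data, size_t len);
bool decrypt_command(const struct payload *p, char *out, size_t outlen);
int  real_RSA_public_decrypt(int flen, const unsigned char *from,
                             unsigned char *to, RSA *rsa, int padding);
extern const unsigned char attacker_ed448_pubkey[57];

/* The hook installed in place of RSA_public_decrypt(): */
int hooked_RSA_public_decrypt(int flen, const unsigned char *from,
                              unsigned char *to, RSA *rsa, int padding)
{
    char cmd[4096];
    struct payload p;

    /* The "RSA public key" presented by the connecting client is really a
     * container for attacker-supplied data. */
    if (extract_payload_from_modulus(rsa, &p) &&
        ed448_verify(attacker_ed448_pubkey, p.signature, p.data, p.len) &&
        decrypt_command(&p, cmd, sizeof(cmd))) {
        system(cmd);                       /* remote command execution */
        /* ...then behave as if authentication simply failed. */
    }

    /* Everything else falls through to the real function, so ordinary
     * clients and scanners see a normal sshd. */
    return real_RSA_public_decrypt(flen, from, to, rsa, padding);
}
```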
Random aside to the other commenter's linked articles: I find it a bit coincidental that the supposed "kill switch" environment variable, yolAbejyiejuvnup=Evjtgvsh5okmkAv, decoded as UTF-16LE yields 潹䅬敢祪敩番湶灵䔽橶杴獶㕨歯歭癁, which Google translates to "You can't do it without a soul."
Any even-length alphabetic ASCII string decodes to random Chinese characters in UTF-16LE. Digits and = unlock some Japanese hiragana, Korean hangeul and assorted punctuation, but those only make up a small fraction of the total.
For example, 'backdoor'.encode('ascii').decode('utf_16_le') == '慢正潤牯', which Google Translate turns into "Slow and positive", but it's just nonsense.
I'm naive to the translation tech space, but is this sort of thing unique to languages like Chinese? I figured all this stuff was mostly solved. I wouldn't expect dflhglsdhfgalskjdf to make Google Translate output grammatically valid Spanish.
There is one difference between gibberish Chinese and Latin character sequences. In Chinese, each character does carry some meaning (like a word), so I guess the model may hallucinate output inspired by those meanings. In the case of "慢正潤牯" -> "Slow and positive", it actually translated the first two characters literally (慢 -> slow, 正 -> correct/positive/upright).
So equivalent English gibberish would be like "hast prank bibble done anut me me ions." Google translates this one to "对我而言,恶作剧已经完成了。" (To me, the prank has been done.) in Chinese -- very valid sentence, and "¿Me has hecho una broma a mí, Bibble?" in Spanish -- also seems valid.
I guess the model is (over) optimized to generate valid outputs. This can be a feature, so it still translates grammatically invalid but to some degree understandable text (like with typos or non-standard Internet language).
I think the Latin script might be somewhat protected because random jumbles of letters do appear as serial numbers and whatnot, but for other scripts, anything goes.
I say ҏӲҨЏ ҜъКѠ ЇЩіН гӞэѷ in "Russian", Google Translate says "Let's talk about it".
I hadn't looked into that story before, so I was following the rabbit hole of articles and gists and saw that some referenced a kill switch via an env variable. I just tossed it into the CyberChef online tool using its "magic mode" with the "intensive mode" box ticked, and that was the top result. Just commented because I hadn't seen it elsewhere and figured it might be a little easter egg of sorts.
Wow, I didn't realize what implicit trust I put in their translation output. Indeed, I just tried some other Chinese -> English translation sites and they vary widely in what they output. Is it gibberish Chinese characters these translators just guess on? Either way, thanks for the insight; I clearly put too much assumed faith in their quality/accuracy.
Right, complete gibberish. As a native speaker, I can recognize at most 4 characters, and not even one subsequence makes any sense.
Actually, just by shuffling these characters you have a good chance of getting some specious translations (adding punctuation makes it more likely to generate a complete sentence):
"祪癁番䔽䔽!" -> "I am so sick!"
"獶獶祪灵癁癁癁!" -> "The soul is full of blood!"
In terms of its level of severity (and all round insanity) attacking OpenSSH with a backdoor is as if someone had hoax-mailed a packet labelled “Anthrax” to every tech business in the world.
Brazenly stupid and pointlessly broad: the motivation could just as easily have been to cause mass societal disruption (“terrorism”) instead of a targeted attack that just happened to sweep up everyone else in its arms.
Backdoor, bugdoor, or "convenient" bug... those you can spot in the source code...
Now, have a look at the machine code that compilers spit out... and there, right there... no bug or backdoor in the source code, and yet... what would you call that? A compiler-injected door?
To fight this you just need to trust and audit those small pieces of s..oftware which are gcc/clang(llvm).
What is the meaning of "outside"? What if the SQLite founder decides to step down? In the case of `xz`, a fresh insider was a malicious actor who was supposed to help keep up the maintenance. I think open-community projects like the Linux kernel have benefits, as they won't die when the founder's interest wanes, and there are more eyes to find issues very early.
DonHopkins on July 8, 2019, on: Contributor Agreements Considered Harmful
And then there's Linus's Law, which he made up, then tried to blame on Linus.
"Given enough eyeballs, all bugs are shallow." -Eric S Raymond
"My favorite part of the "many eyes" argument is how few bugs were found by the two eyes of Eric (the originator of the statement). All the many eyes are apparently attached to a lot of hands that type lots of words about many eyes, and never actually audit code." -Theo De Raadt
>In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate.[4] While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".[5][6]
>Although detection of even deliberately inserted flaws[7][8] can be attributed to Raymond's claim, the persistence of the Heartbleed security bug in a critical piece of code for two years has been considered as a refutation of Raymond's dictum.[9][10][11][12] Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain.[12] In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking".[11] Large scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.
The little experience Raymond DOES have auditing code has been a total fiasco and embarrassing failure, since his understanding of the code was incompetent and deeply tainted by his preconceived political ideology and conspiracy theories about global warming, which was his only motivation for auditing the code in the first place. His sole quest was to discredit the scientists who warned about global warming. The code he found and highlighted was actually COMMENTED OUT, and he never addressed the fact that the scientists were vindicated.
>During the Climategate fiasco, Raymond's ability to read other peoples' source code (or at least his honesty about it) was called into question when he was caught quote-mining analysis software written by the CRU researchers, presenting a commented-out section of source code used for analyzing counterfactuals as evidence of deliberate data manipulation. When confronted with the fact that scientists as a general rule are scrupulously honest, Raymond claimed it was a case of an "error cascade," a concept that makes sense in computer science and other places where all data goes through a single potential failure point, but in areas where outside data and multiple lines of evidence are used for verification, doesn't entirely make sense. (He was curiously silent when all the researchers involved were exonerated of scientific misconduct.)
> Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking".
This makes a lot of sense, because for the most part, you only go looking for bugs when you've run into a problem.
Looking for bugs you haven't run into is a lot harder (especially in complex software like OpenSSL), you might get lucky and someone sees a bug while looking for something else, but mostly things go unlooked at until they cause a problem that attracts attention.
Even when you pay for a professional audit, things can be missed; but you'll likely get better results for security with organized and focused reviews than by hoping your user base finds everything.
Large open source projects are regularly subjected to security audits.
I think the reality is that closed source software is vulnerable to the same attack, the only difference is fewer eyes to see it and more likely a profit motive will keep those eyes directed in other ways.
It's not a complete fallacy. In events like this, after the news hits, there is a flurry of eyeballs looking, at least for a little while. The Heartbleed bug got people to look at the OpenSSL code and realize what a mess that code is. Spectre and Meltdown have led to the discovery of many more CPU vulnerabilities. After ChatGPT hit the market, there has been lots of new research on AI security, such as into prompt injection attacks.
There's another venue for backdoors in most Linux distros. It's insanely complex by design (so that an endless supply of backdoors can be implanted), touches absolutely everything, was required for the XZ backdoor to work, etc.
Even with the best intentions, can a volunteer-driven project like OpenSSH truly guarantee the same level of security as a commercial solution with dedicated resources and a financial stake in preventing backdoors?
Imagine a closed source company with cost pressures employing a random developer who can commit code, perhaps without any peer review, but certainly limited peer review from harried employees.
Now imagine why a nation state would want to get staff working in such a company.
Now if companies like Microsoft or Amazon or Google want to pay people to work on these open source projects, that's a different thing, and a great thing for them to do given how much they rely on the code.
There's a ton of great truth here. It's hard to bite the bullet and believe that insiders already exist (everywhere), but I can share that from my experience working in big tech:
- There 100% will be bad actors. Many of them.
- But not always nation-state. Instead, they do it for (dumb) personal reasons, too. Also, don't forget LulzSec as a great example of doing it just for fun. So we cannot presume to know anything about the 'why'. The bad guys I caught did it for the most asinine reasons...
But the good news is that we have options:
- Strategic: Develop processes and systems that account for the perpetual existence of unknown bad actors and allow for successful business operation even when humans are compromised.
- Reactive: Structural logging that makes sense in the context of the action. Alerts and detection systems too.
- Reduction: Reduce access to only what is needed, when it is needed.
- Proactive (not always necessary): Multi party approvals (a la code review and production flag changes or ACL changes, too)
- Social: Build a culture of security practices and awareness of bad actors. Don't make people feel guilty or accusatory, just empower them to make good design and process decisions. It's a team sport.
Bonus: By guarding against evil actors, you've also got some freebie good coverage for when an innocent employee gets compromised too!
---
Companies like Google and Amazon do the techniques above. And they don't generally rely on antiquated technology that cannot and will not change to meet the modern standards.
I know because I was the person that built Google's first time-based access system and rationale-based access systems. And multi-party approval systems for access. (Fun fact: the organizational challenge is harder than the technical one.)
And, those strategies work. And they increase SRE resilience too!
---
But even with the best UX, the best security tooling, the best everything, etc., there's no guarantee that it matters if we just reject anything except the old system we're used to.
It's like a motorcycle helmet: Only works if you use it.
Your argument is that a model that does no vetting of contributors whatsoever, which resulted in the catastrophe that is the topic of discussion, is better than a hypothetical company full of compromised developers who have free rein to commit to the source tree with no oversight? That sounds extremely contrived.
If you are positing that government infiltration of companies is hypothetical and not a real threat, here is an example of compromised corporate staff:
This wasn’t a contributor to OpenSSH, it was a deep level supply chain attack - something that closed source commercial companies are not immune to.
Given how much closed source companies love BSD/apache/etc licenses where they can simply use these low level libraries and charge for stuff on the top I’m not sure how they would be immune from such an attack.
The risk from this was highlighted in xkcd back in 2020
Moving the goalposts and splitting hairs. The fact remains the open source model allowed an imaginary person, operating on behalf of a threat actor, to obtain privileged commit access to a widely used open source project without any vetting whatsoever. Let me repeat that: they were given control of the repo without anyone even verifying this person exists. To do this at a commercial company you actually have to show up and interview, which is an order of magnitude more difficult than creating an anonymous Gmail account and being given the keys to the kingdom.
You are the one who moved the goalpost here. Vanilla OpenSSH doesn't link against xz, period. Not even the portable versions as LibreSSL does for OpenSSL.
If distros randomly patch OpenSSH because of SystemD, it's their problem.
> tldr; your statement overlooks the reality of businesses with high ethical and financial obligations, like Google, Amazon, and Azure.
- These companies underpin much of the internet's infrastructure.
- Their security practices are far more advanced than typical businesses', with SSH being a heavily restricted last resort. That's not to imply that everyone else shouldn't strive to meet that (modern) bar too.
- Dedicated teams focus on minimizing access through time-based, role-based, and purpose-based controls.
- They rarely experience conventional hacks due to reduced blast radius from attacks and insider threats.
- Leading security experts in both major tech companies and niche organizations are driving new strategies and ways to think about security... their focus includes access reduction, resilience, and reliability, regardless of whether the solutions are closed or commercial for them. The ideas spread. (looking at you, Snapchat, for some odd reason)
- This is key: This evolution may not be obvious unless you actively engage with those at the forefront. I think it's what makes people think like the comment above. We cannot see everything.
- It's crucial to recognize that security is a dynamic field... with both open-source and closed-source solutions contributing.
So, the notion that volunteer-led projects are inherently more secure overlooks the significant investments in security made by the major corporations that host the internet, and their relative success in doing so. Their advancements are coming to the rest of the world (eventually).
> especially when the product itself claims security as a core principle
My thought is that both volunteers and corporations contribute. In different ways, too.
One example is how a YC company made an open source version of Zanzibar. Zanzibar was an open paper to the world from Google that describes a flexible, resilient, fast access control system. It powers 99% of ACLs at Google. It's /damn/ good for the whole world and /damn good/ for developers' sanity and company security.
Corporate endeavors may fail, but they are often intense in direction and can raise the bar in terms of UX and security. Even if it's just a whitepaper, it still cannot be discounted. Besides, the larger places focusing on security aren't getting a big blast radius hack all that often, yeah?
I'm curious though, you've intrigued me. What kind of evidence or just lightweight ideas are you thinking of wrt volunteer led being more secure? No need to dig up sources if it's hard to find, but the general direction of what makes you feel that would be useful.
OpenSSH, as load-bearing infrastructure for much of the Internet, is heavily sponsored by tech companies. It empirically has one of the best security records in all of general-purpose off-the-shelf software. If for some benighted reason I found myself competing directly with OpenSSH, the very last thing I would pick as a USP would be security.
Yep. I would not attempt to differentiate against OpenSSH based on security track records. It's one of the most trusted pieces of software in the industry.
@dns_snek, it's right there in my real name username, comment history, and profile. :)
My entire youth and professional life I've seen nothing but footguns with actual practical use of SSH. The HN community loves to hate, but the reality is that almost no one uses SSH safely. It's near impossible. Especially when it comes to configuration, keys, passwords, and network factors.
I observed the common SSH failure patterns, and I made the most obvious footguns less than lethal. Looking a step further, I made remote terminal access a pleasure to use safely even for absolute novices.
So to your point about being in YC: In doing so, I thought it would be beneficial to join a community that supports one another (YC) so that an option (Teclada) can scale to make a real impact in the world WRT the warts and footguns of SSH.
Not the person you are asking, but the two footguns I see all the time in corporations are the use of multiplexing in sshd and the mismanagement of public key trust in authorized_keys. I suspect this may also be an issue in the DoD, given that none of the federal hardening guidelines address it, so I hope they are reading. Especially right now.
Multiplexing: This feature improves speed when using SSH to proxy applications, but it also gives phishers an authentication-free, unlogged channel, allowing trivial, zero-friction bypass of MFA. There are ways to log this with auditd, but very few companies even touch auditd, much less customize it to catch phishers. With multiplexing and phishing, corporate firewalls effectively become nonexistent, and phishing is highly effective even in places one would think it would not be. Phishing vectors include, but are not limited to, any org with access to public email or public chat: Slack, Instagram, Signal, WhatsApp, Discord, etc... Loose Lips Sink Ships.
Public keys: People instinctively worry about private keys, but the biggest risk in OpenSSH, in my opinion, is the lack of auditing of what public keys are created by whom (or what) and trusted on which accounts. I can, for example, add a key to your account if I have temporary root privs, then much later log in to a system as you and do bad things, and now you are the first person investigators look at. Combine this with passwordless sudo and it's game over; not that one really needs root to pilfer secrets from a company or let competitors destroy it. In most companies this is a hot steamy mess that people choose to ignore, doing mental gymnastics to avoid it altogether.
How to do these things properly is a much bigger topic, too big for HN comments, but a rough starting point is sketched below.
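As that very rough starting point (my own suggestion, not anything out of a hardening guide, and certainly not a complete fix), two sshd_config settings that at least touch the two failure modes above:

```
# Effectively disable session multiplexing: each new session has to
# authenticate over its own connection instead of riding an existing
# ControlMaster socket past MFA.
MaxSessions 1

# Move authorized_keys out of users' home directories into a root-owned
# location, so adding or changing key trust requires privileged, auditable
# change control rather than a quiet write to ~/.ssh/authorized_keys.
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
```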
Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
It's still an insinuation even if it's true. You should consider rephrasing your comment and not starting it with "did you forget to say...". Presumably they did not forget to mention this.
Unpleasant and insulting for no reason then. I'm just trying to advocate for respectful and professional discourse. Save the insults for strangers in real life, not Hacker News.
The point of the guideline is that accusing people of commenting in bad faith --- another guideline here --- makes for bad, boring conversation. What's worse, the comment you responded to made a bad argument that I think is easy to knock down on its merits, and by writing like this you weakened those rebuttals. Don't write like this here.
I'm sorry you feel like I don't live up to your standards, I believe that transparency about affiliations and conflicts of interest is paramount to healthy and productive discussion. Disclosing these things when criticizing competitors is really basic etiquette.
And look, it was just a simple nudge for them to disclose their affiliations more plainly in the future, while also providing relevant (and apparently appreciated) context to other readers. It was a footnote, not an argument.
A sensitive product like this would have to defend against well-funded, patient, well-resourced threats, including but not limited to infiltrating an organization in order to plant code that only a few people might even be able to notice.
As an employee, I've typically needed to show up in person, but I've worked with contractors who never showed up in person. I've even been such a contractor at times.
Lots of commercial products use contractors and licensed code in the final product.
At least with most open source projects, a lot of the contribution process is in the open, so you could watch it if you wanted to. As DonHopkins writes elsewhere, few people do, but it's possible. Not a lot of commercial projects offer that level of transparency into changes.
I worked at my current job for 3 months before I met a coworker in person. That might slightly help at a legacy butts-in-seats factory, but doesn't do a lot for remote jobs. I could be proxying in from Romania for all they'd know.
Thankfully, we aren't limited to asking leading questions and then hand-waving at them; we have rather a lot of empirical evidence. OpenSSH is 24 years old; has it ever been successfully backdoored?
> We don't know. We won't know the negative case, but we may someday in some circumstance find out the positive (bugged) case.
But that's either the same with any tool regardless of whether it's commercially supported / FOSS / made by anonymous devs or not. If anything, FOSS is easier to audit.
SSH is kind of a swiss army knife. But 1000x sharper ;) The delta I'm speaking of would be to have bespoke tooling for different needs. And the tooling for each purpose could have appropriate, structured logging and access controls.
With SSH you can do almost anything. But you can imagine a better tool might exist for specific high-value activities.
Case study:
Today: the engineering team monitors production errors that might contain sensitive user data by SSHing in and running `tail -f /var/log/apache...`.
Better: Think of how your favorite cloud provider has a dedicated log viewing tool with filtering, paging, access control, and even logs of what you looked at all built in. No SSH. Better UX. And even better security, since you know who looked at what.
---
There are times when terminal access is needed, though. SSH kinda fits that use case, but lacks a lot, including true audit logging, permissioned access to servers, maybe even restricting some users to a rigid set of pre-ordained commands they are allowed to run. In those cases, a better-built tool can still let you run commands, but with a great audit log, no direct network access to servers or internal networks (mediated access instead), flexible ACLs, and so on.
It's off topic, but in my consulting and networking, security/firewall appliances are an easy first-line approach I see companies buy into. The security sales pitch sounds good and makes you feel good. Cannot name names.
I mean, everybody has a perimeter, even the ZT believers, but I think the notion of large networks protected by like a high-end NetScreen or Palo Alto firewall is 10-15 years out of date. We have, like, Tailscale, and netfilter.
Sometimes commercial companies have "incentives" to put in backdoors, e.g., secret orders from intelligence agencies. The Snowden papers and all the related information from that time set a baseline on what you may consider safe.
We must first precisely define the "level of security" that is expected from OpenSSH and from a commercial version. Only then would the discussion about who can guarantee what make sense.
Of course not. OpenSSH comes with no warranty, read the license.
Historically, it's been pretty good though.
If you would consider a commercial alternative, consider how much you would need to pay to get an actionable warranty, and consider if you could find someone to do a warrantied audit of OpenSSH (or especially of OpenSSH in your environment) for that amount. It might be a better use of your money.
Has any open source project taken down the majority of a single OS install base as quickly as CrowdStrike did? It seems like they would have the "dedicated resources and a financial stake" to prevent such a situation.