
"We" doesn't have to be the U.S. This is a false dichotomy that I see people in this thread keep pushing. I suspect in bad faith, by the people that want to insert backdoors. As a baseline, we could keep the contributors to NATO and friends. If a programmer is caught backdooring, they can be charged and extradited to and from whatever country.



If it's just an extradition issue, the US has extradition treaties with 116 countries. You'd still have to 1) ensure the user is who they say they are (an ID?), 2) ensure they are reliable, and 3) ensure no one has compromised their accounts.

1) and 3) (and, to an extent, 2)) are routinely done, to some degree, by your average security-conscious employer. Your employer knows who you are and has probably put some thought into how to avoid your accounts getting hacked.

But what is reliability? Could be anything from "this dude has no outstanding warrants" to "this dude has been extensively investigated by a law enforcement agency with enough resources to dig into their life, finances, friends and family, habits, and so on".

I might be willing to go through these hoops for an actual, "real world" job, but submitting myself to months of investigation just to be able to commit into a Github repository seems excessive.

Also, people change, so you would need to keep track of everyone all the time, in case someone gets blackmailed or otherwise persuaded to do bad things. And what happens if you find out someone is a double agent? Rolling back years of commits can be incredibly hard.


Getting a TS equivalent is exactly what helps minimize the chances that someone is compromised. Ideally, such an investigation would be transferable between jobs/projects, like a normal TS clearance is. If someone is caught, then yes, rolling back years of commits isn't practical, but we probably ought to look very closely at what they've done, as is presumably being done with xz.


I guess it depends on the ultimate goal.

If the ultimate goal is to avoid backdoors in critical infrastructures (think government systems, financial sector, transportation,...) you could force those organizations to use forks managed by an entity like CISA, NIST or whatever.

If the ultimate goal is to avoid backdoors in random systems (i.e. for "opportunistic attacks"), you have to keep in mind random people and non-critical companies can and will install unknown OSS projects as well as unknown proprietary stuff, known but unmaintained proprietary stuff (think Windows XP), self-maintained code, and so on. Enforcing TS clearances on OSS projects would not significantly mitigate that risk, IMHO.

Not to mention that, as we now know, allies spy and backdoor allies (or at least they try)... so an international alliance doesn't mean intelligence agencies won't try to backdoor systems owned by other countries, even if they are "allies".


The core systems of Linux should be secured, regardless of who is using them. We don't need every single open source project to be secured. It's not okay to me that SSH is potentially vulnerable just because it's running on my personal machine. As for allies spying on each other, that certainly happens, but it is a lot harder to do without significant consequences. It would be even harder if we made sure that every commit is tied to a real person who can face real consequences.
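To make the "tied to a real person" part concrete, one possible mechanism (just a sketch, not how kernel development actually works today) is to require signed commits and check them against an allow-list of vetted keys. A minimal Python version, where the key IDs and the policy itself are purely hypothetical:

    # Sketch: flag commits that are unsigned or signed by a key outside an
    # allow-list of vetted contributors. Key IDs below are placeholders.
    import subprocess

    TRUSTED_KEYS = {"ABCDEF0123456789"}  # hypothetical vetted-contributor key IDs

    def unsigned_or_untrusted(repo_path="."):
        # %H = commit hash, %G? = signature status ("G" = good), %GK = signing key
        log = subprocess.run(
            ["git", "-C", repo_path, "log", "--format=%H %G? %GK"],
            capture_output=True, text=True, check=True,
        ).stdout
        bad = []
        for line in log.splitlines():
            parts = line.split()
            commit, status = parts[0], parts[1]
            key = parts[2] if len(parts) > 2 else ""
            if status != "G" or key not in TRUSTED_KEYS:
                bad.append(commit)
        return bad

    if __name__ == "__main__":
        offenders = unsigned_or_untrusted()
        print(f"{len(offenders)} commit(s) unsigned or signed by an untrusted key")

Something like this could run in CI; the hard part is the vetting behind the allow-list, not the check itself.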


The "core systems of Linux" include the Linux kernel, openssh, xz and similar libraries, coreutils, openssl, systemd, dns and ntp clients, possibly curl and wget (what if a GET on a remote system leaks data?),... which are usually separate projects.

The most practical way to establish some uniform governance over how people use those tools would involve a new OS distribution, kinda like Debian, Fedora, Slackware,... but managed by NIST or an equivalent body, which takes whatever it wants from upstream and enriches it with other features.

But it doesn't stop here. What about browsers (think about how browsers protect us from XSS)? What about glibc, major interpreters and compilers? How do you deal with random Chrome or VS Code extensions? Not to mention "smart devices"...

Cybersecurity is not just about backdoors, it is also about patching software, avoiding data leaks or misconfigurations, proper password management, network security and much more.

Relying on trusted, TS-cleared personnel for OS development doesn't prevent companies from using 5-year-old distros, choosing predictable passwords, or exposing critical servers to the Internet.

As the saying goes, security is not a product, it's a mindset.


We wouldn't have to change the structure of the project to ensure that everyone is trustworthy.

As for applications beyond the core system, weighing those risks would fall to individual organizations. Most places already have a fairly limited stack and don't let you install whatever you want. But given that the core system isn't optional in most cases, it needs extra care. That's putting aside the fact that most projects are worked on by big corps that do go after rogue employees. Still, I would prefer if some of the bigger projects were more secure as well.

Your "mindset" is basically allowing bad code into the Kernel and hoping that it gets caught.


>Your "mindset" is basically allowing bad code into the Kernel and hoping that it gets caught.

Not at all. I'm talking about running more and more rigorous security tests because you have to catch vulnerabilities, 99% of which are probably introduced accidentally by an otherwise good, reliable developer.

This can be done in multiple ways: a downstream distribution which adds its own layers of security tests and doesn't blindly accept upstream commits; an informal standard for open source projects, kinda like all those Github projects with coverage badges shown on the main repo page; or a more formal standard, forcing some critical companies to only adopt projects with a standardized set of security tests and a sufficiently high score (a toy sketch of that idea follows below). All these approaches focus on the content, not on the authors, since even a totally well-meaning developer can introduce critical vulnerabilities (not the case here, apparently, but it happens all the time).
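As a toy illustration of that "score" idea (the checks and weights here are made up for illustration, not drawn from any existing standard):

    # Toy "security score" for a project checkout. The checks below are
    # hypothetical examples; a formal standard would define them precisely.
    from pathlib import Path

    CHECKS = {
        "has_tests": lambda p: (p / "tests").is_dir(),
        "has_ci_config": lambda p: (p / ".github" / "workflows").is_dir(),
        "has_fuzzing_setup": lambda p: (p / "fuzz").is_dir(),
        "has_security_policy": lambda p: (p / "SECURITY.md").is_file(),
    }

    def score(repo_path="."):
        p = Path(repo_path)
        passed = [name for name, check in CHECKS.items() if check(p)]
        return len(passed) / len(CHECKS), passed

    if __name__ == "__main__":
        s, passed = score()
        print(f"score: {s:.0%}, passed: {', '.join(passed) or 'none'}")

The point isn't these particular checks; it's that the evaluation targets the project's artifacts and process rather than the developer's identity.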

On top of that, however, you should also invest in training, awareness, and other "soft" areas that are actually crucial for improving cybersecurity. Using the most battle-tested operating systems and kernels is not enough if someone puts sensitive data in an open S3 bucket, or patches their systems once a decade, or uses admin/admin on an Internet-facing website.



