
It's great that you're considering this more now.

But the xz story taught us that every contributor is dangerous, and the most dangerous ones are probably the most helpful and most skilled contributors. If someone barely gets a PR accepted, they probably lack the skills to add a sophisticated backdoor.

Another thing that was not talked about a lot: There are many ways to compromise existing maintainers. Compromising people is the core competency of intelligence, happens all the time, and most cases probably never come to public knowledge.


> Compromising people is the core competency of intelligence, happens all the time, and most cases probably never come to public knowledge.

Yea. It would almost be strange if security services didn't consider the route of getting "kompromat" on a developer to make them "help" them.


> consider the route of getting "kompromat" on a developer to make them "help" them

I suppose that’s an option, but it also introduces an additional risk of exposure for your operation as it doesn’t always work and makes it much more complicated to manage even when it does work.


Does it matter though? They don’t have to say “I am so and so of the Egyptian intelligence service and would like to blackmail you”


They might not even use blackmail, they might just "help out" in a difficult financial situation. Some people are in severe debt, have a gambling problem, are addicted to expensive drugs, or might need a lot of money for a sick relative. There are many possibilities.

The trick is finding the people that can be compromised.


I think you're going overboard on what's required. Take anybody who is simultaneously offered a substantial monetary incentive (let's say 4 years of total current/vesting comp), and also threatened with the release of something that we'll say is little more than moderately embarrassing. And this dev is being asked to do something that carries basically zero risk of consequences/exposure for himself due to plausible deniability.

For instance, this is the Heartbleed bug: "memcpy(bp, pl, payload);". You're copying (horrible naming conventions) payload bytes from pl to bp, without ensuring that the size of pl is >= payload, so an attacker can trivially get random bytes from memory. Somehow nobody caught one of the most blatant overflow vulnerabilities, even though memcpy calls are likely one of the very first places you'd check for this exact issue. Many people think it was intentional because of this, but obviously there's zero evidence, because it's basically impossible for evidence of this to exist. And so accordingly there were also zero direct consequences, besides being in the spotlight for a few minutes and having a bunch of people ask him how it felt to be responsible for such a huge exploit. "It was a simple programming mistake" ad infinitum.
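To make the missing check concrete, here is a minimal sketch of the pattern (my own simplification, not the actual OpenSSL code; claimed_len and actual_len are hypothetical names for the attacker-supplied payload length and the number of bytes actually received):

  #include <string.h>
  #include <stdlib.h>

  /* Simplified sketch of a heartbeat-style bug, not the actual OpenSSL code.
     claimed_len comes from the attacker-controlled length field; actual_len
     is how many payload bytes the peer really sent. */
  unsigned char *build_response(const unsigned char *pl,
                                size_t claimed_len, size_t actual_len)
  {
      /* Vulnerable pattern: trust the attacker-supplied length.
         If claimed_len > actual_len, the copy reads past the received data
         into adjacent heap memory and echoes it back (the Heartbleed leak). */
      /* memcpy(bp, pl, claimed_len); */

      /* Fixed pattern: discard requests whose claimed length exceeds
         what was actually received, then copy. */
      if (claimed_len > actual_len)
          return NULL;            /* silently drop the malformed request */

      unsigned char *bp = (unsigned char *)malloc(claimed_len);
      if (bp != NULL)
          memcpy(bp, pl, claimed_len);
      return bp;
  }

The fix is a one-line bounds check, which is exactly why the "simple programming mistake" explanation is so easy to accept.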

So, in this context - who's going to say no? If any group, criminal or national, wanted to corrupt people - I really don't think it'd be hard at all. Mixing the carrot and the stick really changes the dynamics vs a basic blackmail thing where it's exclusively a personal loss (and with no guarantee that the criminal won't come back in 3 months to do it again). To me, the fact we've basically never had anybody come forward claiming they were a victim of such an effort means that either no agency (or criminal organization) anywhere has ever tried this, or that it works essentially 100% of the time.


This doesn't look intentional at all, because this is basically how 90% of memory disclosure bugs look.


Absolutely. And that's the point I'm making here. It is essentially impossible to discern between an exploit injected due to malice, and one injected due to incompetence. It reminds one of the CIA's 'Simple Sabotage Field Manual' in this regard. [1] Many of the suggestions look basically like synopses of Dilbert sketches, written about 50 years before Dilbert, because they all happen, completely naturally, at essentially any organization. The manual itself even refers to its suggestions as "purposeful stupidity." You're basically exploiting Hanlon's Razor.

[1] - https://www.openculture.com/2015/12/simple-sabotage-field-ma...


Nobody can tell if they are intentional or accidental.


I suppose the point is that even though any given instance of an error like this is overwhelmingly likely to be an innocent mistake, there is some significant probability that one or two such instances were introduced deliberately with plausible deniability. Although this amounts to little more than the claim that "sneaky people might be doing shady things, for all we know", which is true in most walks of life.


> They might not even use blackmail, they might just "help out"

If the target knows or suspects what you’re asking them to do is nefarious, then you still run the same risk that they’ll talk before your operation is complete. It’s still far less risky to avoid tipping anyone else off and just slip a trusted asset into a project.


> “I am so and so of the Egyptian intelligence service and would like to blackmail you”

No, but practically by definition the target has to know they’re being forced to “help” and therefore know someone is targeting the project. Some percentage of the time the target comes clean about whatever compromising information was gathered about them, which then potentially alerts the project to the fact they’re being targeted. When it does work you have to keep their mouth shut long enough for your operation to succeed which might mean they have an unfortunate accident, which introduces more risks, or you have to monitor them for the duration which ties up resources. It’s way simpler just to insert a trusted asset into a project.


I would guess there are many projects they could target at any given time.


The more projects they target, the greater the risk of being flagged and of preventive measures being engaged by counterintelligence, etc.


I’m reading all this with sadness realizing that one of the Internet’s last remaining high trust spaces is being destroyed.


> one of the Internet’s last remaining high trust spaces is being destroyed

One of the Internet's last remaining high trust spaces is being attacked.

What happens next is still unwritten.


From what I know of today's developer culture the solution will be for one company, probably Microsoft given their ownership of GitHub, to step in and become undisputed king and single point of failure for all open source development. Developers will say this is great and will happily invite this, with security people repeating mantras about how securing things is "hard" and "Microsoft has more security personnel than we do." Then MS will own the whole ecosystem. Anyone objecting will be called old or a paranoid nut. "This is how we do things now."


As a positive counterexample, the US recently reduced federal funding for the program which manages CVEs [1]. There was/is a risk of CVE data becoming pay-for-play, but OSS developers have also pushed for decentralization [2]. A recent announcement is moving in the right direction, https://medium.com/@cve_program/new-cve-record-format-enable...

  The CVE Board is proud to announce that the CVE Program has evolved its record format to enhance automation capabilities and data enrichment. This format, utilized by CVE Services, facilitates the reservation of CVE IDs and the inclusion of data elements like CVSS, CWE, CPE, and other data into the CVE Record at the time of issuing a security advisory. This means the authoritative source (within their CNA scope) of vulnerability information — those closest to the products themselves — can accurately report enriched data to CVE directly and contribute more substantially to the vulnerability management process.

> solution will be for one company, probably Microsoft given their ownership of GitHub, to step in and become undisputed king and single point of failure for all open source development.

A single vendor solution would be unacceptable to peer competitors who also depend on open-source software. A single-foundation (like LF) solution would also be sub-optimal, but at least it would be multi-vendor. Long term, we'll need a decentralized protocol for collaborative development, perhaps derived from social media protocols which support competing sources of moderation and annotation.

In the meantime, one way to decentralize GitHub's social features is to use the GH CLI to continually export community content (e.g. issue history) as text that can be committed to a git repository for replication. Supply chain security and identity metadata can then be layered onto collaboration data.

[1] https://www.darkreading.com/vulnerabilities-threats/nist-nee...

[2] https://github.com/yoctoproject/cve-cna-open-letter/blob/mai...


It can be a stepping stone towards a world in which we use sandboxing and (formal) verification to safeguard against cultural degradation. There's no alternative; too many bad actors are roaming about. I hate that as much as the next guy :(


The most secure systems are those that are also resistant to rubber hose cryptography.


"Rubber Hose Cryptography" comes in the form of a PR.

"Rubber Hose Cryptanalysis" comes in the back door and waits for you in the dark.


No, it 'comes in the form of' a rubber hose...


They would be really bad at their job, if they didn't try.


Developers would be really bad at their newly expanded job, if they didn't resist.


> If someone barely gets a PR accepted, they probably lack the skills to add a sophisticated backdoor.

That's true, but it's also true that a sophisticated and well formed PR is probably genuine too. Hostile PRs are the exception rather than the rule. And if only the high quality PRs are treated with suspicion, then the attackers will tailor their approach to mimic novices. General vigilance is required, but failure is likely because these attacks are so rare that maintainers will grow weary of being paranoid about a threat they've never seen in years of suspicion and let their guard down.


Earlier this year, I received a hostile PR for a "maintenance only" JavaScript authentication library with fewer than 100 stars that is actively used by my employer.

It added a "kinda useful but not really needed" feature and removed an unrelated line of code, thereby introducing a minor security vulnerability.

My suspicion is that these low quality PRs are similar to the intentional typos in spam emails: Identify projects/maintainers who are sloppy/gullible enough and start getting a foot in the door.


> Another thing that was not talked about a lot: There are many ways to compromise existing maintainers.

Also not talked about a lot - there are many ways to compromise existing software engineers who are paid to work on proprietary software systems.


It is a worthwhile reminder.

But, source-not-available proprietary systems are just totally hopeless from this point of view: of course an intelligence agency could slip something in. A bored developer at the company could too. Users of this sort of proprietary system have just chosen to have 100% faith for some incomprehensible reason.


I don't think that's relevant to this discussion though, as open- and closed-source subversion would seem to follow really different paths.

- Open-source subversion has the big advantage of having the code, testing, and build processes in the open, which allows the attack surface to be exhaustively studied, whereas closed source requires code exfil, reverse engineering, inside intel on processes, etc.

- Closed-source subversion can hide in other places -- binaries can be corrupted on a compromised server etc. Seeking to influence the code-based development seems like the hardest road IMO.

- Open-source maintenance (at least the kind under discussion here) stops at the maintainer, whereas most corporate dev is in a hierarchy with non-uniform commit authority. None of the same social techniques would apply.


One follow-up to compromising existing maintainers: This makes the creators or long-term good-faith maintainers maybe even more "dangerous" than new maintainers.


Are we facing a Byzantine generals kind of situation now?


We have always faced it, it’s just that there's more awareness of the potential issues.


Who can share a threat model with specific probability estimates on this? FWIW, I’m less interested in the particular estimates (priors) and more interested in the structure.


> If someone barely gets a PR accepted, they probably lack the skills to add a sophisticated backdoor.

Unfortunately it's easy to sandbag being dumb. Just because someone submits a PR defining constants for 0-999 does not mean they're actually bad at programming.


How incredibly wasteful.

You can form most useful numbers with just ten single-digit constants, some casting, and string concatenation.


Indeed, in python you can just eval.


Sure, but being known for submitting bad code is going to make code reviews more thorough, not less. It's drawing additional attention to yourself.


> defining constants for 0-999

That person might just be an old school Java <5 developer.


That person might just be a regular Java developer who works on a project which onboarded Checkstyle, and can't disable its MagicNumber check.

https://checkstyle.sourceforge.io/checks/coding/magicnumber....


Man, I hate such tools. Do I run into problems when I try to convert seconds to minutes?

Larger problem than magic numbers ever could be.


Yes! Anytime you see a function signature like "int timeout", it's safe to assume that the unit is in femtoseconds and pass a gigantic number while you curse out the incompetence of the developer. Either name your variables correctly (timeoutZeptoseconds), or use a proper data type (like a Duration or Period in Java, TimeSpan in C#, or a user-defined literal in C++).
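As a small illustration of the "proper data type" point, here's a sketch using std::chrono in C++ (connect_with_timeout is a made-up function for illustration): the unit travels with the value, so callers state it explicitly and the compiler does the conversion, and the femtoseconds guessing game disappears.

  #include <chrono>
  #include <iostream>

  using namespace std::chrono_literals;

  // The timeout parameter carries its unit in the type, so callers can pass
  // 250ms, 5s or 2min and the function always sees milliseconds.
  void connect_with_timeout(std::chrono::milliseconds timeout) {
      std::cout << "waiting up to " << timeout.count() << " ms\n";
  }

  int main() {
      connect_with_timeout(5s);     // implicitly converted to 5000 ms
      connect_with_timeout(250ms);  // already in milliseconds
      // connect_with_timeout(42);  // does not compile: unit is ambiguous
  }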


Oh, now I get why you need those constants:

  var secondsPerWeek = Math.pow(2, SEVEN) * Math.pow(THREE, THREE) * Math.pow(FIVE, 2) * SEVEN


[flagged]


If someone gets stabbed in the eye, we find out about it. So our statistics on eye-stabbing are probably accurate.

We literally have no idea how many xz-style compromises are out there in the wild. We got really lucky with xz - it was only found because the backdoor was sloppy with performance and a Microsoft employee got curious. But we have no data on all the times we got unlucky. How many packages in the Linux ecosystem are compromised in this way? Maybe none? Maybe lots? We just don't know.


It did at least reveal the playbook, and that you have to get pretty creative to hide things in plain sight.

I'm sure any binary blobs in OSS software, no matter what the reason for having them, will be viewed with suspicion, and build scripts will get extra inspection after that.

Maybe I'm naive in thinking that some people are already looking into packages that are included in all base Linux builds? Including simplifying the build env, and making sure that the build tools themselves (cmake, pkgconfig, gmake, autotools etc) are also not compromised.


The de facto standard serialization library for Rust, serde, started using binary blobs to speed up builds only a few months before the xz backdoor was discovered. Lots of people asked the author to include build scripts so they could (re)generate the blobs on their own, and his response was basically "if you want it, fork it".


You can always use the "we have no idea" argument because you can't prove something doesn't exist. Go find evidence. It's been over a month since xz and thus far we have zero additional incidents. And if you look at the specifics of the xz attack: that wouldn't work for most projects because most don't have binary test files.


I'm nobody so you have no reason to believe me - but there have indeed been other, very prominent projects targeted in very similar attacks. We're still inside the responsible disclosure window... hell, even in the blog post we're commenting on, three JS projects were targeted in failed attempts. That's 4 public projects now.


And xz wasn't the first. Several attempts have been made to put garbage in the kernel.


History may record XZ alongside Spectre/Meltdown as industry turning points for "too wide to see".


> three JS projects were targeted in failed attempts.

Suspected to be targeted, in a way that seems to have 0% chance of succeeding for almost any project. Which is why nothing happened.


> seems to have 0% chance of succeeding for almost any project.

It's obviously more than 0% given xz was successfully taken over and backdoored. Even a 5% chance of malicious takeover per project would make the situation pretty worrying given how many well-funded, motivated government agencies are out there.


I'm not talking about xz, I'm talking about that OpenJS thing: random people emailing out of the blue "plz gimme maintainer". Entirely different situation.

I did quote the "three JS projects were targeted in failed attempts" bit, which should have made that abundantly clear.


Is it a different situation? Seems similar to me, except the examples we know about (the obvious ones) are the low skill examples. If someone played the long game like xz and made some helpful improvements to the project in that time, we wouldn’t know about it.

People have also done the same thing (to great effect) on the chrome extension “store” to get all manner of malware into chrome extension updates.

“Nobody unsubtle was successful” tells us nothing about the success rate of subtle attackers. It’s like looking at all the dodgy ssh and http requests any host on the internet is connected to and concluding “yep, 0% of low effort script kiddie attacks get through. I’m 100% safe from hackers!”


Are people really looking though? Are all open source libraries being run through extensive performance profiling to look for known heuristics? Are they being looked at line by line for aberrations?

I don’t have confidence that people are looking for evidence of potential exploitation because of reasons like the ones you bring up.

So we’re back to we just don’t know.


With hindsight it's not the runtime behaviour of the library that you'd want to test - the weakest point in the chain is where the distributed source .tar.gz can't be regenerated from the project repository.


For how many projects is that actually checked? I bet barely any.

It's especially difficult because most projects aren't built in a reproducible way. You should be able to uncompress and compare a source tarball. But if you get a binary and the source code used to generate that binary, there's no way to tell that they match.


Luckily the source tarball is the more important one to check, because that's the difference between backdooring one distribution and backdooring them all.

It's still not trivial because there might well be legitimate processing steps that are used to create the tarball, but it should be doable.
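As a rough sketch of what that check could look like (my own illustration, assuming the release tarball and a fresh `git archive` export of the tagged commit have each already been unpacked into a directory): walk the tarball tree and flag anything that differs from, or doesn't exist in, the repository export. Legitimately generated files (autotools output and the like) will show up and have to be explained, one by one.

  #include <filesystem>
  #include <fstream>
  #include <iostream>
  #include <sstream>
  #include <string>

  namespace fs = std::filesystem;

  // Read a whole file into a string (binary-safe).
  static std::string slurp(const fs::path &p) {
      std::ifstream in(p, std::ios::binary);
      std::ostringstream out;
      out << in.rdbuf();
      return out.str();
  }

  // Compare two unpacked source trees and report files that differ or
  // exist only in the released tarball.
  int main(int argc, char **argv) {
      if (argc != 3) {
          std::cerr << "usage: compare <unpacked-tarball> <git-archive-export>\n";
          return 2;
      }
      const fs::path tarball = argv[1], repo = argv[2];
      bool clean = true;

      for (const auto &entry : fs::recursive_directory_iterator(tarball)) {
          if (!entry.is_regular_file())
              continue;
          const fs::path rel = fs::relative(entry.path(), tarball);
          const fs::path other = repo / rel;
          if (!fs::exists(other)) {
              std::cout << "only in tarball: " << rel << "\n";
              clean = false;
          } else if (slurp(entry.path()) != slurp(other)) {
              std::cout << "differs:         " << rel << "\n";
              clean = false;
          }
      }
      return clean ? 0 : 1;
  }

Anything this prints is exactly what a reviewer would want to account for - in xz's case, the payload hid in a modified build-to-host.m4 that only existed in the release tarball.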


It’s worse than that, and that wouldn’t be enough.

A large class of exploitation methods simply have no performance impact.


Most commonly-used projects are watched by a bunch of people, or diffed on updates. These are not in-depth reviews, but should catch most of it. So yes, people are looking, and have been looking for a long time.

The reason Jia Tan could do their thing is because 1) the main meat was in a binary test file, 2) the code to use that seemed relatively harmless at a glance, and 3) people were encouraged to use the .tar.gz files instead of git clone. Also you need to actually get maintainer status, which is not as easy as it sounds.

I've been thinking of inserting a "// THIS LINE IS MALICIOUS, PLEASE REPORT IF YOU SEE IT" in some of my projects to see how long it would take. I bet it would be pretty fast either after commit or after tagging a release.


Maybe

  // this line is an external audit test - a free gift card to the first person to report finding it.


> I've been thinking of inserting a "// THIS LINE IS MALICIOUS, PLEASE REPORT IF YOU SEE IT" in some of my projects to see how long it would take.

Tools that use LLMs to review code will catch such projects.


No. If there is a strong incentive to compromise, and little to no chance of a compromise being found, it's statistically most reasonable to assume that compromises happen on a regular basis and are only rarely found out.


We know about the failed attempts, we have no idea about the successful ones, and the ones that are going to be successful in the future.


You can always use this line because you can never prove something doesn't exist. Go find evidence. It's been over a month.


Your choice of language in your comments (in this thread, not in general) isn’t bolstering your argument.

Why not be curious rather than just dismissive? This seems to be people just talking past each other at this point.

There have been a lot of changes in the last ~five years that point in the direction of supply chain security being at greater risk.

Evidence comes in many forms. The relevance of evidence depends on what part of the problem you are looking at.

Also, it is rational to talk about how likely different kinds of evidence are to surface!

I think it is possible you are sensitive to people making such claims for self-interested purposes. Fair? But I don’t think it’s fair to assume that of commenters here.


> Your choice of language in your comments (in this thread, not in general) isn’t bolstering your argument.

Yeah, you're probably not wrong. I've had this argument a few times now, and it's the same dismissive "we don't know what we don't know" every time. Well, you can say that for everything and given the complexities of the xz attack that seems a bit unlikely to me, which is then again countered with "but we don't know!!11"

"Every contributor is dangerous" is spectacularly toxic type of attitude. I've already seen random people be made a target and even had their employers contacted over this before they even had a chance to explain(!!) To say nothing of "there are many ways to compromise existing maintainers. Compromising people is the core competency of intelligence, happens all the time" – so great, now I'm also potentially dangerous after spending untold hours and money over the last 20 years because I could be compromised. Great.

This was never a nuanced conversation about risk management to start with. This is not the type of community I've worked for all this time. "Let's use some common-sense tech so this isn't that easy". Sure, let's talk about that. "Let's treat every volunteer involved as potentially hostile and compromised after we've seen a single incident"? Yeah, nah.


Thanks for your thoughtful reply.

> "Every contributor is dangerous" is spectacularly toxic type of attitude.

I view this from the lens of "How well can people reason about probabilities?" and research has shown, more or less, "not very well". In the short term, therefore, it is wise to tailor communications so as to avoid predictable irrational reactions. In the medium term, we need to _show_ people how to think about these questions rationally, meaning probabilistically.

For what it is worth, I prefer to avoid using the phrase "common sense", as it invites so many failure modes of thinking.

My current attitude is, more or less, "let's put aside generalizations and start talking about probabilities and threat models". This will give us a model that makes _probabilistic predictions_. Models, done well, serve as concrete artifacts we can critique and improve _together_.

I hope to see some responses to my other comment at https://news.ycombinator.com/item?id=40271146 but I admit it takes more effort to share a model. It is well outside the usual interaction pattern here on HN to make a comment with a testable prediction, much less a model for them! Happily, there are online fora that support such norms and expectations, such as LessWrong. But I haven't given up hope on HN, as it seems like many people have the mindset. I think the social interaction pattern here squanders a lot of that individual intelligence, unfortunately... but that pattern can change in a bottom-up fashion as people (more or less) demand, at the very least, clearer explanations.


In the end you can never fully trust anyone, including yourself. This has always been true for anything: people get drunk, have psychotic episodes, or have other mental health issues, things like that. It happens. Remember that Malaysian pilot who flew the passenger plane into the ocean?

Every pilot in the world will agree that we need to think about risk management to prevent that sort of thing. I think a lot of them will have issues if we start saying things like "every pilot is dangerous" and (in a follow-up) "long-term good faith pilots are maybe even more dangerous than new pilots". Then you've gone from "risk management" to just throwing shade.


I don't disagree. But my follow-up response is "don't leave it there; factor that into the probability tree".

What should professionals in cybersecurity do? (Not my field, so I could be off-target here) My recommendation: communicate a risk model [1], encourage people to update it for their situation, and demand that people act on it [2]. Not too different from what the field of cybersecurity recommends now. (Or am I wrong?)

[1] based on a set of attack trees (right?)

[2] based on the logic that if you get pwned, you become a zombie to attack me


> This was never a nuanced conversation about risk management to start with. This is not the type of community I've worked for all this time.

I'm not quite following the second sentence. What kind of community have you worked for? Do you mean "worked for" as in e.g. "the spirit of your comments on HN"? Or something else?


I think they are using community to refer to F/OSS projects as a monolithic entity, rather than a million separate and often competing and disagreeing fiefdoms that have always had issues with toxic assholes worming their way into too much power.


Communities are rarely monolithic; but do tend to have some vague set of shared ideas and values (even if there's ton of internal disagreement).

But yes, that's what I meant, roughly.


You have evidence of a state-sponsored attack which was only discovered because we got extremely lucky, and you’re not worried?

The attack itself is frankly the evidence. It's sort of like how we expect there to be life on other planets because there is life on Earth.


There are a lot of dismissive folks who think this is some kind of one-off event because you can't prove it's not- oh wait, the other attempts we can prove aren't enough evidence either!

I understand being wary of America trying to solve this the only way we know how (PRIVATIZE IT!), but dismissing it as a non-issue makes that more likely because you're basically saying you plan on ignoring it rather than putting your own controls in place.

Yes, FOSS projects need to be welcoming to new devs. No, they don't need to pretend malicious actors aren't an issue in order to do that.

You can vet new people, and be welcoming, at the same time.


> So we've had what, two incidents (xz and eventstream) in how many years?

This is specious reasoning.

All you're really establishing is that you've only heard of two incidents.

What you're really pointing out is that this attack vector works reliably well and is reproducible across projects.

You're also pointing out that this attack vector will continue to work until something is done to mitigate it.

I really do not understand what point you think you are making.


You're really going to pretend like there have been no socially-engineered cybersecurity attacks in the last 30 years...?

And by the way, stabbings happen all the time, at least 3 per day. Stabbings hurt a few people; cybersecurity incidents can hurt millions.


This is about "social engineering takeovers of open source projects", not "socially-engineered cybersecurity attack", which is much much broader.

I've been pretty clued up on open source for the last 20 years, and I don't really recall any other similar incidents other than the two I mentioned. I tried to find other examples a few weeks ago and came up empty-handed. It's certainly not common. So please do post specifics if you know of additional incidents, because from what I can see, it's exceedingly rare.


You seem super confident that there have been zero similar attacks that achieved their goals without detection. By definition, almost anyone who pulled off this kind of thing would try really hard not to burn that backdoor by being super obvious (for instance, using it to deface a website). We literally would not know anything about it, in all likelihood. Therefore I feel like it’s a lot more intellectually honest to say we have no idea if that has happened elsewhere, than it is to confidently proclaim that it certainly has not just because it’s been a month since xz.


What I'm arguing against is absolutist fear-mongering statements such as "every contributor is dangerous".

I'm not confident about anything, but anything could happen, or could have happened, at any time. We need to operate on the reality that exists, not the reality that perhaps maybe possibly could perhaps maybe possibly exist. And we certainly shouldn't be treating anyone sending you a patch as a dangerous hostile actor by default.


You seem to think that vetting contributors or reviewing all code commits for malicious actions or code is some unreasonable ask. That should be standard practice.

If someone is getting angry that you actually check their code for vulns, or that you don't let them make changes to certain core areas of a large app without establishing some credibility first, you probably don't want them working on your project.

You can be welcoming AND cautious at the same time.


It has been standard practice for decades. Sometimes this goes wrong, because everything can go wrong. It happens. Casting doubt on any contributor, any maintainer, and any long-term maintainer with fantastical stories is just throwing shade. Of course no one can be trusted absolutely; that has always been true for anything from software to child care to launching nuclear bombs. Anyone and anything can become suspect if you analyse things with enough of a suspicious mindset.


It's literally not fantastical.

It literally just happened.

And no, being cautious is never throwing shade, unless you're doing it in a discriminatory way, like assuming that Chinese or Russian contributors are more dangerous.


There are CVEs where an empty string performed an authentication bypass.

> social engineering

The best bugdoors are deniable.


[flagged]


I don't think the "armed to the teeth" theory is correct. If you were right, people wouldn't honk at each other or otherwise involve themselves in any sort of road rage. But people rage at each other all the time, and only very rarely does someone get shot.

The reason people aren't walking around stabbing you in the eye with a needle is because there is no reason for them to do that. They gain nothing. They don't desire that it be done.


If the news articles about Instagram extortion are anything to go by, adding weapons to an extortion situation is more likely to lead to a suicide than the extortionist being dissuaded.


The flip side of that being the highest number of school shootings in the world.


Maybe not so impossible. Start with making a list of projects that are everywhere. Inside every Linux distribution, inside every react/angular/vue/etc project, …

Then check which companies support those projects with active development, and calculate a rating. Are the companies located inside democracies, or are they mostly from China or Russia?

It’s probably not that many packages in the end. A few thousand high impact/risk projects probably.


Backdoor attempts won't be that obvious. The xz incident just had a random unaffiliated burner account and nothing of any clear national origin.


I wanted to make a different point. If, for example, Google or Red Hat were deeply involved in the xz project, there might have been more people reviewing the code. The evil changes to xz were easy to overlook, but not impossible to notice.

Especially the added "accidential" semicolon made me think about probabilities. I think in a code review I would notice that with a probability of 10-20%. So if 10 people would've looked at it, there might have been quite a low chance to get away with it.
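A quick sanity check of that estimate (my own arithmetic, not part of the original comment): if each of 10 independent reviewers spots the change with probability p, the chance that nobody notices is (1 - p)^10:

  P(missed by all 10 reviewers) = (1 - p)^10
  p = 0.10  ->  0.90^10 ≈ 0.35
  p = 0.20  ->  0.80^10 ≈ 0.11

So even at the low end of that estimate, roughly two times out of three someone would have caught it.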

Having some high profile companies involved into an open source project the risk score would drop in my opinion, which would highlight the projects that are completely community maintained, and might be more susceptible.

Having such a list might be a security threat by itself though, because attackers would focus on the "low risk" projects first.


One possibility could be a license that requires big companies to dedicate one or more people as maintainers, or at least reviewers, of a project if they want to get a license to use the software.


GPLv4? I doubt this would be a bigger success story than v3 and v2. Permissive licenses won the war against Copyleft a long time ago.


The list you speak of already exists — it is the package registries of Debian/Ubuntu, RHEL, etc.

What about American companies using mainland Chinese developers to drive their (well-known) open source projects with crappy code? Who’s to blame?

We’re currently smoking at the gas station and things haven’t blown up yet…


It would be nearly impossible to ignite gasoline or its fumes with a lit cigarette.


Maybe we need a reporting system for maintainer changes of bigger projects. Some list where they get published and people can keep an eye on it.

Those maintainer changes would need to be synced to package distribution sites like npmjs.com or the Debian package archive and put in context with versions/releases.

In Europe this was introduced for banks after the banking crisis. If a bank does any organizational change, a report is sent out to all member states of the EU right away and any of the 27 national bank agencies can check if they notice something unusual. It might be possible to bribe a few people in your own country, but it’s really hard to bribe all responsible people in 26 other countries.


I'm sure some security researcher is doing this, but we could easily create a visualization of "who has contributed over time" and identify transitioning of maintainers automatically just from git.

This might be worth doing and contributing to a site like bestofjs or libraries.io (I don't really use that one though!)


> who has contributed over time

When major security players insist that using GPG is bad, there is no way of knowing if bob@bob.bob is the same account that it was last month or not.


It is not the idea of GPG that's bad. In fact, the idea is great! The implementation of GPG, however, is quite another thing. Ease of use and user experience are really not that great with GPG. It is difficult to use even for developers. Developers are users too, and so on.


Try uploading a signed package on pypi. Sign it with sequoia instead of GPG if you like.

You'll receive an email asking you to stop uploading signatures.


you can sell/steal keys just as easily as accounts


Ok. Can you get my private key? Feel free to respond to this comment with my private GPG key.

I think guessing a password and getting lucky is much easier.


> Maybe we need a reporting system for maintainer changes of bigger projects. Some list where they get published and people can keep an eye on it.

The rust project does it. There's a repo with all [active] members and their permissions on github, etc. These get synchronized and updated every time there's a change.


This is, in my eyes, one of the most important parts of "Infrastructure as Code". You should make the list of who has what permissions a critical artifact, as immutable a part of the repo as any other change.


The major projects aren't on github.


Could you elaborate?


Just for the main project, or for all/most packages on crates.io?


Every repository and team under rust-lang on github.


That's great, but I think that's not enough. This would need to extend to crates.io; I'm sure there are some packages that are very commonly used and not part of rust-lang.


While I love open source, this feels to me like something companies need to pay for.

It might be that no open source contributions I've made are things people care about, but I'm not spending one second of my time for free updating databases so multi-billion dollar companies can feel safer.


How about simply paying the maintainers and then getting stuff done, like a classical business does?


Well yes, sounds great, but it doesn't really address the security problem. Now you've just got the bad guys getting two paychecks instead of one and the good guys getting one paycheck instead of zero.


One of the biggest risks for companies is securing dormant code; it's perfectly fine for a project to be no longer sexy enough to maintain. Platforms like thanks.dev have already proven how reward & recognition can help promote development in an ecosystem: https://www.youtube.com/watch?v=e5FV-AnKPlo&t=1s


> bad guys getting two paychecks instead of one

1 to 2 paychecks = 100% increase.

> good guys getting one paycheck instead of zero

0 to 1 paycheck = infinity increase.

With a known baseline of "good paychecks", financial analytics can pursue identification of "bad paychecks".


Did you mean: instead of trying to become a maintainer of a trusted open source project, how about bad actors simply bribing the existing maintainer to do their bidding? There would be no maintainer changes in that scenario.

Related, the motivation for trying to gain privileged access to open source projects is to leverage the existing trust associated with that project. A different long game that could be played is to create a new project with the intent of backdooring it a few years down the road, after it has gained sufficient trust.


Is it really though? Cum-ex and cum-cum appear to still work great.


Cum-ex is about taxes (and criminal law). It's unrelated to the stability of banks.


Could you maybe elaborate on what you consider an opinionated kubernetes deployment? Are there some open source projects you find promising?


Opinionated meaning it picks, installs, and patches your CNI/Ingress/Load Balancer/DNS Server/Metrics Server/Monitoring Setup.

k3s is probably the most well known, as it ships with a bunch of preinstalled software: https://github.com/k3s-io/k3s so you can just start throwing YAML files at the cluster and handling workloads. It's what I use for my homelab.

Paid things I've heard of include OpenStack and SideroLabs. Haven't used them personally, but SRE coworkers say good things about them.


Thanks, now I get what you mean. I’ve always called that a kubernetes distribution.

Plain kubernetes is as useless as a plain Linux kernel without a userland around it, and normally you don’t want to build a kubernetes or Linux distribution from scratch.


Most hosted options like GKE also fall into this category - networking, load balancers, and to a certain extent monitoring are all set up for you.


Yea, the biggest thing I see missing in EKS/GKE/AKS is that they don't come with an Ingress Controller out of the box, which is really frustrating. By default, they really should install Ingress-Nginx unless the administrator asks for it not to be installed.

It's a pretty minor problem overall though.


AWS used to have an integrated Ingress Controller - it just sucked (at least partially because it was built by Google, not AWS). That AWS didn't take over hosting of it (it's not even available as an add-on!) when Kubernetes the Project removed the first-party support of it is... well, it's a statement by AWS. They were dragged kicking and screaming into Kubernetes at all, because they see it as hurting their moat, and have stalled the Ingress project quite a bit.


This startled me too in the beginning. I was expecting something built in, pre-wired to one of the commercial CDN/reverse proxy offerings (like CloudFront or Azure CDN).

But honestly I think the big cloud providers don't want their Kubernetes offerings to be too easy to use; they try to nudge inexperienced people to use their proprietary serverless products. Kubernetes does make switching to another cloud provider far too easy ;)


GKE does ship with both Ingress and Gateway controllers integrated; they set up GCP load balancers with optional automatic TLS certificates.

I think you need to flip a flag on the cluster object to enable the Gateway controller.


Even with most unsupported CPUs you can upgrade to Windows 11. (edit: I think everything newer than 1st gen Intel Core still works)

And there is not one piece of software I know of that promises infinite support time. Windows 10 keeps its promised EOL dates.


You can, but it requires third-party hacks (e.g. Rufus; Microsoft removed its first-party hacks to allow it).


See the comment a few levels up about Windows 10 literally promising to be the last version of Windows, with only updates from then on.


Technically Windows 11 is an update of Windows 10. Look at the version number. It's 10.0.x.x.

And seriously, nobody ever promised there would be updates for old hardware for eternity.


The hardware requirements are arbitrary though. It is more a severe shortcoming of the OS, because they want to enforce certain machine DRM.

And certainly not because of security; that wasn't a priority at Microsoft.


But they are excluding recent (maybe also current?) hardware, unnecessarily.


Yes, they are, but that's not related to the version number. Things like that also happened before on a smaller scale with Windows 10 updates.


I make no comment on version numbers. It sounded like you were saying they're only dropping support for old hardware, which is not the case.


Just because you can doesn't mean you should.


Windows 10 EOL is coming in October '25. So either it's replacing the hardware or running Windows 11 on unsupported CPUs. Or not running Windows at all.


Yep I'm not buying a new Surface Pro for win 11.

When win 10 dies, I'm installing some Linux build and hoping for the best


There is a decade-old rule: don't upgrade to the next Windows version before the first Service Pack is released.


Dude, it's not 2014. Decade*s* old rule!


Tell me I’m getting old without telling me I’m getting old. Thanks! ;)


Maybe unexpected: I agree

One of the biggest improvements for me was the improved multi-monitor and multi-DPI/PPI support. Being able to just put two different monitors next to each other and move windows between them just works. They adapt their rendering on the fly to the screen's DPI setting, and it works with most current software.

One of the biggest issues that blocks me from moving to Linux: I run my displays with 125% and 150% fractional scaling. That just doesn't work like that on Linux; fractional scaling comes with many drawbacks.


> One of the biggest improvements for me was the improved multi-monitor and multi-DPI/PPI support. Being able to just put two different monitors next to each other and move windows between them just works. They adapt their rendering on the fly to the screen's DPI setting, and it works with most current software.

This was a significant fix from W10 to W11 as it was truly awful with W10.


It already works quite well with current Windows 10; the true nightmare was before Windows 10, or maybe with one of its earlier versions. But there was another big improvement with Windows 11.


For me it was an overall improvement over Windows 10. Every new release of any software has some drawbacks, but I got over getting upset about those little details. I just try to find new solutions if old ones stop working.


Companies, free OEM licenses, and habit.

It's not a bad OS. I'm thinking more and more about moving to Linux, but Windows 11 is fine for me and has the broadest compatibility. There is nothing that doesn't work with Windows; since WSL2 it runs nearly all Linux software, and there is nothing that bothers me too much.


I wouldn't call X-Elon-Twitter a success. It was more like not a complete failure.


The point is, there isn't literally a smoking crater where Twitter HQ used to be, therefore all criticism of the acquisition is invalid.

