We live in a society; operating open communities involves trade-offs.
If you want to live in a miserable security state where no action is allowed, refunds are never accepted, and every actor is assumed hostile until proven otherwise, then you can - but it comes at a serious cost.
This doesn't mean people shouldn't consider the security implications of new PRs, but it's better not to act like assholes; aiming for a high-trust society leads to a better, non-zero-sum outcome for everyone. Banning these people was the right call, and they don't deserve any thanks.
In some ways their bullshit was worse than a real bad actor pursuing some other goal; at least the bad actor has a motive beyond some dumb 'research' article.
The academics abused the goodwill extended to them.
What did they show here, that you can sneak bugs into an open source project? Is that a surprise? Bugs get in even when nobody is intentionally trying to get them in.
Of course everyone knows bugs make it into software. That's not the point, and I find it a little concerning that there's a camp of people only interested in the "zzz, I already knew software had bugs" assessment. Yes, the academics abused their goodwill. And in doing so they raised awareness of something that, sure, many people know is possible. The point is demonstrating the vulnerability and forcing people to confront reality.
I strive for a high-trust society too. Totally agree. And acknowledging that people can exploit trust and use it to push poor code through review does not dismantle a high-trust operation or perspective. Trust systems fail when people abuse trust, so the reality is that safeguards must be built in, both technically and socially, to achieve a level of resilience that keeps things sustainable.
Just look at TLS, data validation, cryptographic identity, etc. None of this would need to exist in a high-trust society. We could just tell people who we are, trust others not to steal our network traffic, and never worry about intentionally invalid input. Nobody would overdraft their accounts at the ATM, etc. I find it hard to argue for removing the verify step entirely from a trust-but-verify mentality. This incident demonstrated a failure in the verify step for kernel code review. Cool.
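To make the "verify" part concrete, here's a minimal sketch of what that step can look like in code, assuming a shared-secret setup and a hypothetical function name; it uses only Python's standard-library hmac module. The point is simply: don't accept the message on trust alone, recompute and compare.

```python
import hmac
import hashlib

def verify_message(key: bytes, message: bytes, claimed_tag: bytes) -> bool:
    """Recompute the HMAC over the message and compare it to the claimed tag.

    "Trust" alone would mean accepting the message as-is; this is the
    "verify" step. compare_digest does a constant-time comparison so the
    check itself doesn't leak timing information.
    """
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, claimed_tag)

# Usage: reject anything whose tag doesn't check out, no matter who sent it.
key = b"shared-secret"
msg = b"merge this patch"
tag = hmac.new(key, msg, hashlib.sha256).digest()
assert verify_message(key, msg, tag)
assert not verify_message(key, b"merge this other patch", tag)
```

Code review plays the same role for kernel patches that the tag check plays here: the sender's identity or reputation isn't the proof, the verification is.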
This is how security people undermine their own message. My entire job is being the "trust but verify" stick in the mud, and everyone knows it when I walk into the room. But I don't waste people's time, and I educate and force an active reckoning with reality while stopping short of actually causing damage.
You can have your verify-lite process, but you must write down that it was a deliberate decision and, where appropriate, revisit and reaffirm it over time. You must implement controls, measures, and processes in a way that minimizes the deleterious consequences to your endeavor. It's the entire reason Quality Assurance is a pain in the ass: when you're doing a stellar job, everyone wonders why you're there at all. Nobody counts the problems that didn't happen or that you've managed to corral through culture changes in your favor, but they will jump on whatever you do that drags the group down. Security is the same. You are an anchor by nature, and the easiest way to make you go away is to ignore you.
You must help, first and foremost. No points for groups that just add more filth to wallow through.