But bad people don’t follow some mythical ethical framework and announce they’re going to rob the bank before doing it. There absolutely are pen tests conducted where only a single person out of hundreds is looped in. Is it unethical for supervisors to subject their employees, and possibly users, to such environments? Since you can’t prevent this behavior at large, I take solace that it happened in a relatively benign way rather than at the hands of a truly malicious actor. No civilians were harmed in the demonstration of the vulnerability.

The security community doesn’t get to have its cake and eat it too. All this responsible-disclosure “ethics” is nonsense. This is full disclosure; it’s how the world actually works. The maintainers’ response indicates to me that they are frustrated at the perceived waste of their time, but this seems like a justified use of human resources to draw attention to a real problem that high-profile open source projects face.

If you break my trust I’m not going to be happy either, and I will justifiably not trust you in the future, but trying to apply some ethical framework to how “good bad actors” are supposed to behave is just silly IMO. And “ban the institution” feels more like an “I don’t have time for this” retaliation than an “I want to effectively prevent this behavior in the future” response that addresses the reality. For all we know, Linus and Greg could have been, and still might be, on board with the research, and we’re just seeing the social elements of the system being tested.

My main point: maybe do a little more observing and a little less condemning. I find the whole event to be a fascinating test of one of the known vulnerabilities that large open source efforts face.
We live in a society; operating open communities involves trade-offs.
If you want to live in a miserable security state where no action is allowed, refunds are never accepted, and every actor is assumed hostile until proven otherwise, then you can, but it comes at a serious cost.
This doesn't mean people shouldn't consider the security implications of new PRs, but it's better not to act like assholes. If the goal is a high-trust society, that leads to a better non-zero-sum outcome for everyone. Banning these people was the right call; they don't deserve any thanks.
In some ways their bullshit was worse than that of a real bad actor pursuing some other goal; at least the bad actor has a reason beyond some dumb 'research' article.
The academics abused this good-will towards them.
What did they show here, that you can sneak bugs into an open source project? Is that a surprise? Bugs get in even when people are not intentionally trying to get them in.
Of course everyone knows bugs make it into software. That’s not the point, and I find it a little concerning that there’s a camp of people only interested in the “zzz, I already knew software had bugs” assessment. Yes, the academics abused their goodwill. And in doing so they raised awareness around something that, sure, many people know is possible. The point is demonstrating the vuln and forcing people to confront reality.
I strive for a high-trust society too. Totally agree. And acknowledging that people can exploit trust and use it to push poor code through review does not dismantle a high-trust operation or perspective. Trust systems fail when people abuse trust, so the reality is that safeguards must be built in, both technically and socially, to achieve a suitable level of resilience and keep things sustainable.
Just look at TLS, data validation, cryptographic identity, etc. None of this would need to exist in a high-trust society. We could just tell people who we are, trust others not to steal our network traffic, and never worry about intentionally invalid input. Nobody would overdraft their accounts at the ATM, etc. I find it hard to argue for removing the verify step entirely from a trust-but-verify mentality. This incident demonstrated a failure in the verify step for kernel code review. Cool.
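To make the data-validation point concrete, here's a toy sketch of what the "verify" half of trust-but-verify looks like in code (purely illustrative; the function name and rules are made up for this example): untrusted input gets checked before it's used, precisely because we can't just assume good faith.

```python
def parse_port(raw: str) -> int:
    """Parse a TCP port number from untrusted input, rejecting anything invalid.

    A hypothetical example of a 'verify' step: in a fully high-trust world
    this function would just be int(raw).
    """
    if not raw.isdigit():
        raise ValueError(f"not a number: {raw!r}")
    port = int(raw)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# Trusting the sender blindly would let "99999" or "80; rm -rf /" sail
# through; the verify step turns abuse of trust into a handled error.
print(parse_port("443"))  # a well-formed value passes through
```

The same shape, check before trust, is what TLS certificate validation and code review are doing at much larger scale.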
This is how security people undermine their own message. My entire job is being the "trust but verify" stick in the mud, and everyone knows it when I walk in the room. I don't waste people's time, and I stop short of actually causing damage: I educate and force an active reckoning with reality instead.
You can have your verify-lite process, but you must write down that it was your decision and, if appropriate, revisit and reaffirm it over time. You must implement controls, measures, and processes in a way that minimizes the deleterious consequences to your endeavor. It's the entire reason Quality Assurance is a pain in the ass. When you're doing a stellar job, everyone wonders why you're there at all. Nobody counts the problems that didn't happen, or the ones you've managed to corral in your favor through culture changes, but they will jump on whatever you do that drags the group down. Security is the same. You are an anchor by nature, and the easiest way to make you go away is to ignore you.
You must help, first and foremost. No points for groups that just add more filth to wallow through.