“They introduce kernel bugs on purpose” (kernel.org)
3025 points by kdbg on April 21, 2021 | 1912 comments




Also, https://www.neowin.net/news/linux-bans-university-of-minneso... gives a bit of an overview. (It was posted at https://news.ycombinator.com/item?id=26889677, but we've merged that thread hither.)

Edit: related ongoing thread: UMN CS&E Statement on Linux Kernel Research - https://news.ycombinator.com/item?id=26895510 - April 2021 (205 comments and counting)


The professor gets exactly what they want here, no?

"We experimented on the linux kernel team to see what would happen. Our non-double-blind test of 1 FOSS maintenance group has produced the following result: We get banned and our entire university gets dragged through the muck 100% of the time".

That'll be a fun paper to write, no doubt.

Additional context:

* One of the committers of these faulty patches, Aditya Pakki, writes a reply taking offense at the 'slander' and indicating that the commit was in good faith[1].

Greg KH then immediately calls bullshit on this, and then proceeds to ban the entire university from making commits [2].

The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.

As was noted, this obviously has a bunch of collateral damage, but such drastic measures seem like a balanced response, considering that this university decided to _experiment_ on the kernel team and then lie about it when confronted (presumably, that lie is simply a continuation of their experiment: what would someone intentionally trying to add malicious code to the kernel do?).

* Abhi Shelat also chimes in with links to UMN's Institutional Review Board along with documentation on the UMN policies for ethical review. [3]

[1]: Message has since been deleted, so I'm going by the content of it as quoted in Greg KH's followup, see footnote 2

[2]: https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...

[3]: https://lore.kernel.org/linux-nfs/3B9A54F7-6A61-4A34-9EAC-95...


Thanks for the support.

I also now have submitted a patch series that reverts the majority of all of their contributions so that we can go and properly review them at a later point in time: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...


Just wanted to say thanks for your work!

As an OSS maintainer (Node.js and a bunch of popular JS libs with millions of weekly downloads) - I feel how _tempting_ it is to trust people and assume good faith. Often since people took the time to contribute you want to be "on their side" and help them "make it".

Identifying and then standing up to bad-faith actors is extremely important and thankless work. Especially ones that apparently seem to think it's fine to experiment on humans without consent.

So thanks. Keep it up.


How could resilience be verified after asking for consent?


Tell someone upstream - in this case Greg KH - what you want to do and agree on a protocol. Inform him of each patch you submit. He's then the backstop against anything in the experiment actually causing harm.


Same way an employer trains employees on phishing campaigns or an auditor or penetration tester tests resilience or compliance.


Yes, employers often send out fake phishing e-mails to test resilience, and organizational penetration testing is done in the field with unsuspecting people.


Ah. I never replied to the e-mails sent out by my employer about registering for a training in phishing detection. I just assumed those e-mails were phishing e-mails.


I assume that so many official emails from my employer are phishing... it's a mess.


This read as sarcastic to me but FYI what you said is actually, unironically, true


A lot of people are talking about the ethical aspects, but could you talk about the security implications of this attack?

From a different thread: https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...

> A lot of these have already reached the stable trees.

Apologies in advance if my questions are off the mark, but what does this mean in practice?

1. If UMN hadn't brought any attention to these, would they have been caught, or would they have eventually wound up in distros? Is 'stable' the "production" branch?

2. What are the implications of this? Is it possible that other malicious actors have done things like this without being caught?

3. Will there be a post-mortem for this attack/attempted attack?


I don't think the attack described in the paper actually succeeded at all, and in fact the paper doesn't seem to claim that it did.

Specifically, I think the three malicious patches described in the paper are:

- UAF case 1, Fig. 11 => crypto: cavium/nitrox: add an error message to explain the failure of pci_request_mem_regions, https://lore.kernel.org/lkml/20200821031209.21279-1-acostag.... The day after this patch was merged into a driver tree, the author suggested calling dev_err() before pci_disable_device(), which presumably was their attempt at maintainer notification; however, the code as merged doesn't actually appear to constitute a vulnerability because pci_disable_device() doesn't appear to free the struct pci_dev. (See the illustrative sketch after this list.)

- UAF case 2, Fig. 9 => tty/vt: fix a memory leak in con_insert_unipair, https://lore.kernel.org/lkml/20200809221453.10235-1-jameslou... This patch was not accepted.

- UAF case 3, Fig. 10 => rapidio: fix get device imbalance on error, https://lore.kernel.org/lkml/20200821034458.22472-1-acostag.... Same author as case 1. This patch was not accepted.
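
For readers unfamiliar with the bug class at issue, here is a rough, purely illustrative userspace sketch of the "use after release" ordering the paper was aiming for. This is not the actual kernel patch; the names (devctx, devctx_release, log_failure) are made-up stand-ins for the roles played by struct pci_dev, pci_disable_device() and dev_err():

    /* Hypothetical illustration only, not the code from the patches above. */
    #include <stdio.h>
    #include <stdlib.h>

    struct devctx { int id; };                       /* stand-in for struct pci_dev */

    static void devctx_release(struct devctx *d)     /* stand-in for the cleanup call */
    {
        free(d);
    }

    static void log_failure(const struct devctx *d)  /* stand-in for dev_err() */
    {
        fprintf(stderr, "device %d: request failed\n", d->id);
    }

    int main(void)
    {
        struct devctx *d = malloc(sizeof *d);
        if (!d)
            return 1;
        d->id = 42;

        /* Safe ordering: log first, then release. */
        log_failure(d);
        devctx_release(d);

        /*
         * The malicious ordering would be the reverse:
         *     devctx_release(d);
         *     log_failure(d);    <- use-after-free
         * which is easy to miss in review when both calls sit in an
         * error path far away from the allocation.
         */
        return 0;
    }

As noted above, even the ordering in the merged patch does not become a real use-after-free, because the release-analogue in that patch (pci_disable_device()) does not actually free the object.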

This is not to say that open-source security is not a concern, but IMO the paper is deliberately misleading in an attempt to overstate its contributions.

edit: wording tweak for clarity


> the paper is deliberately misleading in an attempt to overstate its contributions.

Welcome to academia, where a large number of students are doing it just for the credentials.


What else do you expect? The incentive structure in academia pushes students to do this.

Immigrant graduate students with an uncertain future if they fail? Check.

Vulnerable students whose livelihood is at the mercy of their advisor? Check.

Advisor whose career depends on a large number of publication bullet points in their CV? Check.

Students who cheat their way through to publish? Duh.


The ethics in big-lab science are as dire as you say, but I've generally got the impression that the publication imperative has not been driving so much unethical behaviour in computer science. I regard this as particularly cynical behaviour by the standards of the field and I think the chances are good that the article will get retracted.


FWIW, Qiushi Wu's USENIX speaker page links to a presentation with Aditya Pakki (and Kangjie Lu), but has no talk with the same set of authors as the paper above.

https://www.usenix.org/conference/usenixsecurity19/speaker-o...


Can I cite your comment in exchange for a future citation?


Sure?

Edit: Oh now I get it you clever person you. Only took an hour ha.


Feigning surprise isn't helpful.

It's good to call out bad incentive structures, but by feigning surprise you're implying that we shouldn't imagine a world where people behave morally when faced with an incentive/temptation.


I dislike feigned surprise as much as you do, but I don't see it in GP's comment. My read is that it was a slightly satirical checklist of how academic incentives can lead to immoral behavior and sometimes do.

I don't think it's fair to say "by feigning surprise you're implying..." That seems to be putting words in GP's mouth. Specifically, they didn't say that we shouldn't imagine a better world. They were only describing one unfortunate aspect of today's academic world.

Here is a personal example of feigned surprise. In November 2012 I spent a week at the Google DC office getting my election results map ready for the US general election. A few Google engineers wandered by to help fix last-minute bugs.

Google's coding standards for most languages including JavaScript (and even Python!) mandate two-space indents. This map was sponsored by Google and featured on their site, but it was my open source project and I followed my own standards.

One young engineer was not pleased when he found out about this. He took a long slow look at my name badge, sighed, and looked me in the eye: "Michael... Geary... ... You... use... TABS?"

That's feigned surprise.

(Coda: I told him I was grateful for his assistance, and to feel free to indent his code changes any way he wanted. We got along fine after that, and he ended up making some nice contributions.)


Why should we imagine this world? We have no reason to believe it can exist. People are basically chimps, but just past a tipping point or two that enable civilization.

We'd also have to agree on what "behave morally" means, and this is impossible even at the most basic level.


Usually "behave morally" means "behave in a way the system ruling over you deems best to indoctrinate into you so you perpetuate it". No, seriously, that's all there is to morality once you invent agriculture.


Thank you.

Question for legal experts,

Hypothetically, if these patches had been accepted and were exploited in the wild, and one could prove that the exploitation was due to the vulnerabilities introduced by these patches, could the university/professor be sued for damages, and could such a suit be won in a U.S. court, or would they get away under an education/research/academia cover, if any?


Not an attorney, but the kernel is likely shielded from liability by its license. Maybe the kernel project could sue the contributors for damaging the project, but I don't think the end user could.


Malicious intent or personal gain negate that sort of thing in civil torts.

Also, 18 U.S. Code § 1030(a)(5)(A) does not care about the software license. Any intentional vulnerability added to code counts. Federal cybercrime laws are not known for being terribly understanding…


License is a great catch, thank you. Does the kernel get into a separate contract with the contributors?


I literally LOL'd at "James Louise Bond"


I wonder about this too.

To me, it seems to indicate that a nation-state-supported evil hacker org (maybe posing as an individual) could place their own exploits in the kernel. Let's say they contribute 99.9% useful code, solve real problems, build trust over some years, and only rarely write an evil, hard-to-notice exploit bug. And then everyone thinks that obviously it was just an ordinary bug.

Maybe they can pose as 10 different people, in case some of them gets banned.


You're still in a better position with open source. The same thing happens in closed source companies.

See: https://www.reuters.com/article/us-usa-security-siliconvalle...

"As U.S. intelligence agencies accelerate efforts to acquire new technology and fund research on cybersecurity, they have invested in start-up companies, encouraged firms to put more military and intelligence veterans on company boards, and nurtured a broad network of personal relationships with top technology executives."

Foreign countries do the same thing. There are numerous public accounts of Chinese nationals or folks with vulnerable family in China engaging in espionage.


The principal researchers appear to be alumni of mainland China schools.


Read up on the socat Diffie-Hellman backdoor; I found it fascinating at the time.


Woah. I Googled that! Nice reference. This is a good explanation with more links: https://github.com/AllThing/socat_backdoor


Isn't what you've described pretty much the very definition of advanced persistent threat?

It's difficult to protect against trusted parties whom you assume, with good reason, to be good-faith actors.


The fundamental tension is between efficiency and security. Trust permits efficiency, at the cost of security (if that trust is found to be misplaced).

A perfectly secure system is only realized by a perfectly inefficient development process.

We can get better at lessening the efficiency tax of a given security level (through tooling, tests, audits, etc), but for a given state of tooling, there's still a trade-off.

Different release trains seem the sanest solution to this problem.

If you want bleeding-edge, you're going to pull in less-tested (and also less-audited) code. If you want maximum security, you're going to have to deal with 4.4.


I have the same questions. So far we have focused on how bad these "guys" are. Sure, they could have done it differently, etc. However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.

How to solve this "issue" without putting too much process around it? That's the challenge.


What's next, will they prove how easy it is to break into kernel developers' houses and rob them? Or prove how easy it is to physically assault kernel developers by punching them in the face at conferences? Or prove how easy it is to manipulate kernel developers to lose their life savings investing in cryptocurrency? You can count me out of those...

Sarcasm aside, pentesting/redteaming is only ethical if the target consents to it! Please don't try to prove your point the way these researchers have.


Just playing devil's advocate here: the surprising factor does play into it. No bad actor will ever give you heads-up.

If the researcher had sent these patches under a different identity, that would be just like how malicious contributions appear. The maintainers won't assume malice, will waste a bunch of time communicating with the bad actor, and may NOT revert their previous potentially harmful contribution.


> the surprising factor does play into it. No bad actor will ever give you heads-up.

I too thought like this till yesterday. Then someone made me realize that's not how getting consent works in these situations. You take consent from higher up the chain, not the people doing the work. So Greg Kroah-Hartman could have been consulted, as he would not be personally reviewing this stuff. This would also give you a chance to understand how the release process works. You also have an advantage over the bad actors because they would be studying the process from outside.


It's not as simple as that: if Greg doesn't do the work of review, then who gives him the authority to consent on behalf of others?


I see what you are saying. But he is also sort of the director of this whole thing. The research question itself is worthwhile, and I don't think that, if it had been done properly, this much time would have been wasted. All they had to prove is that it would pass a few code reviews. That's a few man-hours, and I really don't think people would be mad about that. This whole fiasco is about the scale of man-hours wasted, both because they repeatedly made these "attacks" and because this thing slipped into stable code. Both would be avoided in this scheme.

But I would like to put in a disclaimer that before getting to that point they could have done so many other things. Review the publicly available review processes, see how security bugs get introduced by accident and see if that can be easily done by a bad actor, etc.


The way it should work, IMHO, is for contributors to be asked for consent (up front or retroactively) that stealthy experiments will happen at some point. Given the vital role of the Linux kernel, maybe they'll understand. And if they turn out to be too under-resourced to be wasted on such things, then it would highlight the need to fund additional head count, factoring in that kind of experiments/attacks.


> No bad actor will ever give you heads-up.

Yes, and if you do it without a heads-up as well that makes you a bad actor. This university is a disgrace and that's what the problem is and should remain.


C'est la vie. There are many things that it would be interesting to know, but the ethics of it wouldn't play out. It would be interesting to see how well Greg Kroah-Hartman resists under torture, but that does not mean it is acceptable to torture him to see if he would commit malicious patches that way.

To take a more realistic example, we could quickly learn a lot more than today about language acquisition if we could separate a few children from any human contact to study how they learn from controlled stimuli. Still, we don't do this research and look for much more complicated and lossy, but more humane, methods to study the same.


They proved nothing that wasn't already obvious. A malicious actor can get in vulnerabilities the same way a careless programmer can. Quick, call the press!

And as for the solutions, their contribution is nil. No suggestions that haven't been suggested, tried and done or rejected a thousand times over.


Agreed. So many security vulnerabilities have been created not by malicious actors, but by people who just weren't up to the task. Buggy software and exhausted maintainers are nothing new.


What this proves to me is that perhaps lightweight contributions to the kernel should be done in safe languages that prevent memory leaks, and with tooling that actively highlights memory safety issues like use-after-free. Broader Rust adoption in the kernel can't come soon enough.

I also consider Greg’s response just as much a test of UMN’s internal processes as the researcher’s attempt at testing kernel development processes. Hopefully there will be lessons learned on both sides and this benign incident makes the world better. Nobody was hurt here.


I understand where you are coming from, and I agree that it's good that we are paying more attention to memory safety, but how would a memory safe language protect you from an intentionally malicious code commit? In order to enact their agenda they would need to have found a vulnerability in your logic (which isn't hard to do, usually). Memory safety does not prevent logic errors.

> Nobody was hurt here.

This is where you got me, because while it's clear to me that short-term damage has been done, in the long term I believe you are correct. I believe this event has made the world a safer place.


One could argue that when a safe language eliminates memory safety bugs (intentional or unintentional), it makes it easier for the reviewer to check for logic errors. Because you don't have to worry about memory safety, you can focus completely on logic errors.


I would agree that it does, and I do agree that we should try to reach that point. I just want to point out that I think it's dangerous to assume safety in general because one thing is assumed to be safe.


To me this is unrelated, and it even minimizes the issue here a little bit.

The purpose of the research was probably to show how easy it is to manipulate the Linux kernel in bad faith. And they did it. What are they gonna do about it besides banning the university?


I believe it comes down to having more eyes on the code.

If a corporation relies upon open source code that has historically been written by unpaid developers, then, if I were that corporation, I would start paying people to vet that code.


So you are just fine knowing that any random guy can sneak any code in the Linux kernel? Honestly, I was personally expecting a higher level of review and attention to such things, considering how widely used the product is. I don't want to look like the guy who doesn't appreciate what the OSS and FSF communities do every day, even unpaid. However, this is unrelated. And probably this is what the researchers tried to prove (with unethical and wrong behavior).


I'm not fine with it. But those researchers are not helping at all.

And also, if I had to pick between a somewhat inclusive mode of work where some rando can get code included at the slightly increased risk of including malicious code, and a tightly knit cabal of developers mistrusting all outsiders by default: I would pick the more open community.

If you want more paranoia, go with OpenBSD. But even there some rando can get code submitted at times.


If you've ever done code review on a complex product, it should be quite obvious that the options are either to accept that sometimes bugs will make it in, or to commit once per week or so (not per person, one commit per week to the Linux kernel overall), once every possible test has been run on that commit.


I am not sure these are the only options we have here. Did you see the list of commits that this bunch of guys sneaked in? It's quite big; it's not just one or two. A smart attacker could have done one commit per month and would have been totally fine. All they needed, apparently, was a "good" domain name in their email. This is what I think is the root of the problem.


> So you are just fine knowing that any random guy can sneak any code in the Linux kernel?

I mean, it is no surprise. It is even worse with proprietary software, because you are much less likely to be wary of your own colleague/employee.

Hell, seeing that the actual impact is overblown in the paper, I think the percentage caught is really great, to be honest, given that good faith was assumed from the contributor.


> However, they proved a big point: how "easy" it is to manipulate the most used piece of software on the planet.

What? Are you actually trying to argue that "researchers" proved that code reviews don't have a 100% success rate in picking up bugs and errors?

Especially when code is pushed in bad faith?

I mean, think about that for a minute. There are official competitive events to sneak malicious code that are already decades old and going strong[1]. Sneaking vulnerabilities through code reviews is a competitive sport. Are we supposed to feign surprise now?

[1] https://en.wikipedia.org/wiki/Underhanded_C_Contest


Bug bounties are a different beast. Here we are talking about a bunch of guys who deliberately put stuff into your next kernel release because they come from an important university, or whatever other reason. One of the reviewers in the thread admitted that they need to pay more attention to code reviews. That sounds to me like a good first step towards solving this issue. Is that enough, though? It's an unsolvable problem, but is the current solution enough?


> Bug bounties are a different beast.

Bug bounties are more than a different beast: they are a strawman.

Sneaking vulnerabilities through a code review is even a competitive sport, and it has zero to do with bug bounties.


Sorry I think I didn't understand/read correctly what it was about.

It's just f** brilliant! :)


What would be the security implications of these things:

* a black hat writes malware that proves to be capable of taking out a nation's electrical grid. We know that such malware is feasible.

* a group of teenagers is observed to drop heavy stones from a bridge onto a motorway.

* another teenager pointing a relatively powerful laser at the cockpit of a passenger jet which is about to land at night.

* an organic chemist is demonstrating that you can poison 100,000 people by throwing certain chemicals into a drinking water reservoir.

* a secret service subverting software of a big industrial automation company in order to destroy uranium enrichment plants in another country.

* somebody hacking a car's control software in order to kill its driver

What are the security implications of this? That more money should be spent on security? That we should stop driving on motorways? That we should spend more money on war gear? Are you aware how vulnerable all modern infrastructure is?

And would demonstrating that any of these can practically be done be worth an academic paper? Aren't several of these really a kind of military research?

The Linux kernel community does spend a lot of effort on security and correctness of the kernel. They have a policy of maximum transparency which is good, and known to enhance security. But their project is neither a lab in order to experiment with humans, nor a computer war game. I guess if companies want to have even more security, for running things like nuclear power plants or trains on Linux, they should pay for the (legally required) audits by experts.


I agree with the sentiment. For a project of this magnitude, maybe it comes down to developing some kind of static analysis, along with refactoring the code to make that possible.

That is, addressing the attack surface described in the paper (section IV), because section III (the acceptance process) is a manpower issue.


Ironically, one of their attempts was submitting changes that were allegedly recommended by a static analysis tool.


It's possible that they are developing a static analysis tool that is designed to find places where vulnerabilities can be inserted without looking suspicious. That's kind of scary.

Have they submitted patches to any projects other than the kernel?


Guess we have to wait for their next paper to find out.


As an alumnus of the University of Minnesota's program, I am appalled this was even greenlit. It reflects poorly on all graduates of the program, even those uninvolved. I am planning to email the department head with my disapproval as an alumnus, and I am deeply sorry for the harm this caused.


I am wondering if UMN will now get a bad name in open source and whether any contribution with their email will require extra care.

And if this escalates to the mainstream media, it might also damage future employment prospects for UMN CS students.

Edit: Looks like they made a statement. https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...


> Leadership in the University of Minnesota Department of Computer Science & Engineering learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel.

- Signed by “Loren Terveen, Associate Department Head”, who was a co-author on numerous papers about experimenting on Wikipedia, as pointed out by: https://news.ycombinator.com/item?id=26895969


Their name is not in the author list for the paper.

Edit: Parent comment originally referenced the paper that caused this mess.


Yep, sorry, I double-checked and edited it quickly. Sorry about that!


It should. Ethics begins at the top, and if the university has shown itself to be this untrustworthy then no trust can be had on them or any students they implicitly endorse.

As far as I'm concerned this university and all of its alumni are radioactive.


Their graduates have zero culpability here (unless they were involved). Your judgement of them is unfair.


> Their graduates have zero culpability here

It's not about guilt, it's about trust. They were trained for years in an institution that violates trust as a matter of course. That makes them suspect and the judgement completely fair.


Lots of universities have had scandals. I could probably dig one up from your alma mater. They're big places with long histories. Collective punishment achieves little productive and should be avoided.


It's not about collective punishment. Universities sell reputation, both good and bad. It just so happens that they sold bad reputation.


Collective punishment is a clear and unilateral signal that something is extremely wrong and not within the punisher's power to unwind properly (or prevent in the future). Until it's clear that this university can be trusted, they should not be. I would feel the same about any schools that I attended, and I would not have issues with blanket bans for them either if this was the kind of activity they got up to.


> They were trained for years in an institution that violates trust as a matter of course.

"As a matter of course" is a big leap here.


Their graduates might not have been directly involved, but it's not possible to ignore that those graduates were the product of an academic environment where this kind of behavior was not only sanctioned from the top but also defended as an adequate use of resources.


Adequate use of resources seems like bizarre reasoning. Do you also evaluate how a candidate's alma mater compensates its football staff before you hire?


You actually believe that all of those adult engineers can't decide on their own?

You think students believe in everything that profs do/say?


This is only slightly better than judging from the skin color or location of birth.


Isn't academics part of how you evaluate a candidate for a job?


That's a bit much, surely. I think the ethics committee probably didn't do a great job in understanding that this was human research.


Ok...then is everybody who graduated from MIT radioactive, even if they graduated 50 years ago, since Epstein has been involved?

Your logic doesn't make ANY sense.


It makes perfect sense once you realize that universities are in the business of selling reputation.

When someone graduates from the university, that is the same as the university saying "This person is up to our standards in terms of knowledge, ethics and experience."

If those standards for ethics are very low, then it naturally taints that reputation they sold.


No, when somebody graduates from X school, it means they were capable of either passing or cheating their way through all the exams.


Why is the university where you draw the line? You could as well say every commit coming from Minnesota is radioactive or, why not, from the US.

It is unfair to judge a whole university for the behavior of a professor or a department. Although I'm far from having all the details, it looks to me like the university is taking the right measures to solve the problem, which they acknowledge. I would understand your position if they had tried to hide this or deny it, but as far as I understood, that's not the case at all. Did I miss something?


The Linux kernel is blocking contributions from the university's mail addresses, as this attack was conducted by sending patches from there.

It doesn't block patch submissions from students or professors using their private email, since that assumes they are contributing as individuals, and not as employees or students.

It's as close as practically possible to blocking an institution and not the individuals.


I think that is a reasonable measure by the LK team. In my opinion, it is the right solution in the short term, and the decision can be revised if in the future some student or someone else has problems submitting non-malicious patches. But I was specifically referring to this comment:

> As far as I'm concerned this university and all of its alumni are radioactive.

That is not a practical issue, but an overly broad generalization (although, I repeat, I may have missed something).


I don't read it like this. Alumni and students are not banned from contributing, as long as they use their private emails. It's the university email domain that is "radioactive". The assumption here is that someone who uses a university email is submitting a patch on behalf of said university, and that may be in bad faith. It's up to said university to show they have controls in place and that they are trustworthy.

It's the same as with employees. If I get a patch request from xyz@ibm.com I'll assume that it comes from IBM, and that person is submitting a patch on behalf of IBM, while for a patch coming from xyz@gmail.com I would not assume any IBM affiliation or bias, but assume person contributing as an individual.


> Alumni and students are not banned from contributing, as long as they use their private emails. It's the university email domain that is "radioactive".

That's not what the comment I was responding to said. It was very clear: "As far as I'm concerned this university and all of its alumni are radioactive". It does not say every kernel patch coming from this domain is radioactive, it clearly says "all of its alumni are radioactive".

You said before that alumni from the university could submit patches with their private emails, but according to what djbebs said, he would not. Do we agree that this would be wrong?


What if the same unethical people who ran the study submit patches from their gmail accounts?


That seems to me like an unjustified and unjust generalization.


I think the current context of the world as it is is full of unjustified and unjust generalizations.

And, as unfortunate as it sounds, it looks like, as with all victims of such generalizations, the alumni will have to fight the prejudice associated with their choice of university.


That's a ridiculously broad assertion to make about the large number of staff and students who've graduated or are currently there, one that is unwarranted and unnecessarily damaging to people who've done nothing wrong.


By that logic, whenever data is stolen I will blame the nearest Facebook employee or ex-employee.

And any piss I find, I will blame on Amazon.


That's a witch hunt, and is not productive. A bad apple does not spoil the bunch, as it were. It does reflect badly on their graduate program to have retained an advisor with such poor judgement, but that isn't the fault of thousands of other excellent graduates.


It's discomforting to see the "bad apple" metaphor being used to mean "isolated instance with no influence on its surroundings".

That is the exact opposite of how rot in a literal bunch of apples behaves. The spoilage spreads throughout the whole lot very, very quickly.


Also the common phrase is “a bad apple spoils the bunch.”


Both variations are common. "It was just a few bad apples" is the one you more often see today. But it only became common after refrigeration made it so that few people now experience what is required to successfully pack apples for the winter.


Undoubtedly I am in the minority here, but I think it's less a question of ethics, and more a question of bad judgement. You just don't submit vulnerabilities into the kernel and then say "hey, I just deliberately submitted a security vulnerability".

The chief problem here is not that it bruises the egos of the Linux developers for being psyched, but that it was a dick move whereby people now have to spend time sorting this shit out.

Prof Lu miscalculated. The Linux developers are not some randos off the street whom you can pay a few bucks for a day in the lab, and then they go away and get on with whatever they were doing beforehand. It's a whole community. And he just pissed them off.

It is right that Linux developers impose a social sanction on the perpetrators.

It has quite possibly ruined the student's chances of ever getting a PhD, and earned Lu a rocket up the arse.


> it's less a question of ethics, and more a question of bad judgement.

I disagree. I think it's easier to excuse bad judgment, in part because we all sometimes make mistakes in complicated situations.

But this is an example of experimenting on humans without their consent. Greg KH specifically noted that the developers did not appreciate being experimented on. That is a huge chasm of a line to cross. You are generally required to get consent before experimenting on humans, and that did not happen. That's not just bad judgment. The whole point of the IRB system is to prevent stuff like that.


Ah, so people do actually use the expression backwards like that. I had seen many people complain about other people saying “just a few bad apples”, but I couldn’t remember actually seeing anyone use the “one/few bad apple(s)” phrase as saying that it doesn’t cause or indicate a larger problem.


> A bad apple does not spoil the bunch, as it were.

What? That's exactly how it works. A bad apple gives off a lot of ethylene which ripens (spoils) the whole bunch.


Ethylene comes from good apples too and is not a bad thing. The thing that bad apples have that spoils bunches is mold.


How not to get tenure 101


Based on my time in a university department you might want to cc whoever chairs the IRB or at least oversees its decisions for the CS department. Seems like multiple incentives and controls failed here, good on you for applying the leverage available to you.


I'm genuinely curious how this was positioned to the IRB and if they were clear that what they were actually trying to accomplish was social engineering/manipulation.

Being a public university, I hope at some point they address this publicly as well as list the steps they are (hopefully) taking to ensure something like this doesn't happen again. I'm also not sure how they can continue to employ the prof in question and expect the open source community to ever trust them to act in good faith going forward.


First statement + commentary from their associate department head: https://twitter.com/lorenterveen/status/1384954220705722369


Wow. Total sleazeball. This appears not to be his first time using unwitting research subjects.

Source:

https://scholar.google.com/scholar?hl=en&as_sdt=0%2C22&q=Lor...

This is quite literally the first point of the Nuremberg code research ethics are based on:

https://en.wikipedia.org/wiki/Nuremberg_Code#The_ten_points_...

This isn't an individual failing. This is an institutional failing. This is the sort of thing which someone ought to raise with OMB.

He literally points to how Wikipedia needed to respond when he broke the rules:

https://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is_no...


As far as I can tell, the papers he co-authored on Wikipedia were unlike the abuse of the kernel contribution process that started last year in that they did not involve active experiment, but passive analysis of contribution history.

Doesn't mean there aren't ethical issues related to editors being human subjects, but you may want to be more specific.


I didn't see any unethical work in a quick scan of the Google Scholar listing. I saw various works on collaboration in Wikipedia.

What did you see that offended you?


You realise that the GP went to the trouble of pointing out that research on people should involve consent, and that they [Wikipedia] needed to release a statement saying this. What does that tell you about the situation that gave rise to that statement?


Got it, and @Tobu's comment describes the issue perfectly. Thanks!


They claim they got the IRB to say it's IRB-exempt.


Which would suggest the IRB’s oversight is broken in that institution somehow, right?


Well, the University of Minnesota managed to escape responsibility after multiple suicides and coercion of subjects in psychiatric research. From one regent: “[this] has not risen to the level of our concern”.

https://www.startribune.com/markingson-case-university-of-mi...


Wow, very interesting read (not finished yet though), thank you. To me, this seems like it should be considered as part of UMN's trustworthiness as a whole, and it completely validates GKH's decision (not that any validation was needed).


A lot of IRBs are a joke.

The way I've seen Harvard, Stanford, and a few other university researchers dodge IRB review is by doing research in "private" time in collaboration with a private entity.

There is no effective oversight over IRBs, so they really range quite a bit. Some are really stringent and some allow anything.


> It reflects poorly on all graduates of the program

How does it?


I hope they take this bad publicity and stop (rather than escalating stupidity by using non university emails).

What a joke - not sure how they can rationalize this as valuable behavior.


It was a real world penetration test that showed some serious security holes in the code analysis/review process. Penetration tests are always only as valuable as your response to them. If they chose to do nothing about their code review/analysis process, with these vulnerabilities that made it in (intentional or not), then yes, the exercise probably wasn't valuable.

Personally, I think all contributors should be considered "bad actors" in open source software. NSA, some university mail address, etc. I consider myself a bad actor, whenever I write code with security in mind. This is why I use fuzzing and code analysis tools.

Banning them was probably the correct action, but not finding value requires intentionally ignoring the very real result of the exercise.


I agree. They should take this as a learning opportunity and see what can be done to improve security and detect malicious code being introduced into the project. What's done is done; all that matters is how you proceed from here. Banning all future commits from UMN was the right call. I mean, it seems like they're still currently running follow-up studies on the topic.

However, I'd also like to note that if you run a real-world penetration test on an unwitting, non-consenting company, you also get sent to jail.

Everybody wins! The team get valuable insight on the security of the current system and unethical researchers get punished!


A non-consensual pentest is called a "breach". At that point it's no longer testing, just like smashing a window and entering your neighbour's house is not a test of their home security system but just breaking and entering.


A real world penetration test is coordinated with the entity being tested.


Yeah - and usually stops short of causing actual damage.

You don't get to rob a bank and then when caught say "you should thank us for showing your security weaknesses".

In this case they merged actual bugs, and now they have to revert that stuff, which, depending on how connected those commits are to other things, could cost a lot of time.

If they were doing this in good faith, they could have stopped short of actually letting the PRs merge (even then it's rude to waste their time this way).

This just comes across to me as an unethical academic with no real valuable work to do.


> You don't get to rob a bank and then when caught say "you should thank us for showing your security weaknesses".

Yeah, there’s a reason the US response to 9/11 wasn’t to name Osama bin Laden “Airline security researcher of the Millenium”, and it isn’t that “2001 was too early to make that judgement”.


But bad people don’t follow some mythical ethical framework and announce they’re going to rob the bank prior to doing it. There absolutely are pen tests conducted where only a single person out of hundreds is looped in. Is it unethical for supervisors to subject their employees and possibly users to such environments? Since you can’t prevent this behavior at large, I take solace that it happened in a relatively benign way rather than having been done by a truly malicious actor. No civilians were harmed in the demonstration of the vulnerability.

The security community doesn’t get to have its cake and eat it too. All this responsible disclosure “ethics” is nonsense. This is full disclosure; it’s how the world actually works. The response from the maintainers indicates to me that they are frustrated at the perceived waste of their time, but this seems like a justified use of human resources to draw attention to a real problem that high-profile open source projects face.

If you break my trust I’m not going to be happy either and will justifiably not trust you in the future, but trying to apply some ethical framework to how “good bad actors” are supposed to behave is just silly IMO. And the “ban the institution” move feels more like an “I don’t have time for this” retaliation than an “I want to effectively prevent this behavior in the future” response that addresses the reality. For all we know, Linus and Greg could have been, and still might be, on board with the research, and we’re just seeing the social elements of the system being tested now. My main point is: maybe do a little more observing and less condemning. I find the whole event to be a fascinating test of one of the known vulnerabilities large open source efforts face.


Strong disagree on this.

We live in a society, to operate open communities there are trade-offs.

If you want to live in a miserable security state where no action is allowed, refunds are never accepted, and every actor is assumed hostile until proven otherwise, then you can - but it comes at a serious cost.

This doesn't mean people shouldn't consider the security implications of new PRs, but it's better not to act like assholes; aiming for a high-trust society leads to a better non-zero-sum outcome for everyone. Banning these people was the right call; they don't deserve any thanks.

In some ways their bullshit was worse than a real bad actor actually pursuing some other goal; at least the bad actor has some reason outside of some dumb 'research' article.

The academics abused this good-will towards them.

What did they show here? That you can sneak bugs into an open source project? Is that a surprise? Bugs get in even when people are not intentionally trying to get them in.


Of course everyone knows bugs make it into software. That’s not the point, and I find it a little concerning that there’s a camp of people who are only interested in the “zzz, I already knew software had bugs” assessment. Yes, the academics abused their goodwill. And in doing so they raised awareness around something that, sure, many people know is possible. The point is demonstrating the vuln and forcing people to confront reality.

I strive for a high trust society too. Totally agree. And acknowledging that people can exploit trust and use it to push poor code through review does not dismantle a high trust operation or perspective. Trust systems fail when people abuse trust so the reality is that there must be safeguards built in both technically and socially in order to achieve a suitable level of resilience to keep things sustainable.

Just look at TLS, data validation, cryptographic identity, etc. None of this would need to exist in a high trust society. We could just tell people who we are, trust other not to steal our network traffic, never worry about intentionally invalid input. Nobody would overdraft their accounts at the ATM, etc. I find it hard to argue for absolute removal of the verify step from a trust but verify mentality. This incident demonstrated a failure in the verify step for kernel code review. Cool.


This is how security people undermine their own message. My entire job is being the "trust but verify" stick in the mud, but everyone knows it when I walk in the room. I don't waste people's time, and I stop short of actually causing damage by educating and forcing an active reckoning with reality.

You can have your verify-lite process, but you must write down that that was your decision, and if appropriate, revisit and reaffirm it over time. You must implement controls, measures and processes in such a way as to minimize the deleterious consequences to your endeavor. It's the entire reason Quality Assurance is a pain in the ass. When you're doing a stellar job, everyone wonders why you're there at all. Nobody counts the problems that didn't happen or that you've managed to corral through culture changes in your favor, but they will jump on whatever you do that drags the group down. Security is the same. You are an anchor by nature, the easiest way to make you go away is to ignore you.

You must help, first and foremost. No points for groups that just add more filth to wallow through.


The result is to make sure not to accept anything with the risk of introducing issues.

Any patch coming from somebody who has intentionally introduced an issue falls into this category.

So, banning their organization from contributing is exactly the lesson to be learned.


I agree, but I would say the better result, most likely unachievable now, would be to fix the holes that required a human's feelings to ensure security. Maybe some shift towards that direction could result from this.


Next time you rob a bank, try telling the judge it was a real world pentest. See how well that works out for you.


> It was a real world penetration test that showed some serious security holes in the code analysis/review process.

So you admit it was a malicious breach? Of course it isn't a perfect process. Everyone knows it isn't absolutely perfect. What kind of test is that?


What exactly did they find?


I would implore you to maintain the ban, no matter how hard the university tries to make amends. You sent a very clear message that this type of behavior will not be tolerated, and organizations should take serious measures to prevent malicious activities taking place under their purview. I commend you for that. Thanks for your hard work and diligence.


I'd disagree. Organizations are collections of actors, some of which may have malicious intents. As long as the organization itself does not condone this type of behavior, has mechanisms in place to prevent such behavior, and has actual consequences for malicious actors, then the blame should be placed on the individual, not the organization.

In the case of research, universities are required to have an ethics board that reviews research proposals before actual research is conducted. Conducting research without an approval or misrepresenting the research project to the ethics board are pretty serious offenses.

Typically for research that involves people, participants in the research require having a consent form that is signed by participants, alongside a reminder for participants that they can withdraw that consent at any time without any penalties. It's pretty interesting that in this case, there seemed to have been no real consent required, and it would be interesting to know whether there has been an oversight by the ethics board or a misrepresentation of the research by the researchers.

It will be interesting to see whether the university applies a penalty to the professor (removal of tenure, termination, suspension, etc.) or not. The latter would imply that they're okay with unethical or misrepresented research being associated with their university, which would be pretty surprising.

In any case, it's a good thing that the Linux kernel maintainers decided that experimenting on them isn't acceptable and is disrespectful of their contributions. Subjecting participants to experiments without their consent is a severe breach of ethical duty, and I hope that the university will apply the correct sanctions to the researchers and instigators.


Good points. I should have qualified my statement by saying that IMO the ban should stay in place for at least five years. A prison sentence, if you will, for the offense that was committed by their organization. I completely agree with you though that no organization can have absolute control over the humans working for them, especially your point about misrepresenting intentions. However, I believe that by handing out heavy penalties like this, not only will it make organizations think twice before approving questionable research, it will also help prevent malicious researchers from engaging in this type of activity. I don't imagine it's going to look great being the person who got an entire university banned from committing to the Linux kernel.

Of course, in a few years this will all be forgotten. It begs the question... how effective is it to ban entire organizations due to the actions of a few people? Part of me thinks that it would be very good to have something like this happen every five years (because it puts the maintainers on guard), but another part of me recognizes that these maintainers are working for free, and they didn't sign up to be gaslighted; they signed up to make the world a better place. It's not an easy problem.


I agree. I don't think any of the kernel developers ever signed up for reviewing malicious patches done by people who managed to sneak their research project past the ethics board, and it's not really fair to them to have to deal with that. I'm pretty sure they have enough work to do already without having to deal with additional nonsense.

I don't think it's unreasonable for maintainers of software to ignore or outright ban problematic users/contributors. It's up to them to manage their software project the way they want, and if banning organizations with malicious actors is the way to do it, the more power to them.


It turns out that the Associate Department Head was engaged in similar "research" on Wikipedia over a dozen years ago, and that also caused problems. The fact that they are here again suggests a broader institutional problem.


Looks like the authors have Chinese names [1]. Should they ban anyone with Chinese names, too, for good measure? Or maybe collective punishment is not such a good idea?

[1] https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...


I have to ask: were they not properly reviewed when they were first merged?

Also, to assume _all_ commits made by UMN, beyond what's been disclosed in the paper, are malicious feels a bit like an overreaction.


Thanks for your important work, Greg!

I'm currently wondering how many of these patches could've been flagged in an automated manner, in the sense of fuzzing the specific parts that have been modified (with a fuzzer that is memory/binary aware).

Would a project like this be infeasible due to the sheer number of commits per day?
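
To make the idea concrete, here is a minimal sketch of the kind of harness I have in mind, assuming the patched helper could be lifted out and exercised in userspace. parse_record() is a made-up stand-in for the modified function, and this is a plain libFuzzer/ASan harness (built with clang -fsanitize=fuzzer,address), not an existing kernel tool:

    /* Hedged sketch only: a userspace libFuzzer harness with ASan, where
     * parse_record() is a hypothetical stand-in for the function touched
     * by a patch under review. A planted use-after-free or overflow on
     * this path would trip the sanitizer. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for the patched helper being reviewed. */
    static int parse_record(const uint8_t *buf, size_t len)
    {
        if (len < 2)
            return -1;
        uint8_t *copy = malloc(len);
        if (!copy)
            return -1;
        memcpy(copy, buf, len);
        int ok = (copy[0] == 0x7f);   /* pretend validation */
        free(copy);
        return ok ? 0 : -1;
    }

    /* libFuzzer entry point: feeds mutated inputs into the changed code path. */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
    {
        parse_record(data, size);
        return 0;
    }

Coverage-guided kernel fuzzers like syzkaller already exist, but they work at the syscall boundary; targeting exactly the functions touched by each new patch, as sketched here, is a different and harder problem.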


Thank you for all your excellent work!


> should be aware that future submissions from anyone with a umn.edu address should be by default-rejected

Are you not concerned these malicious "researchers" will simply start using throwaway gmail addresses?


That’s not likely to work after a high profile incident like this, in the short term or the long term. Publication is, by design, a de-anonymizing process.


Are throwaway gmail addresses nearly as 'trusted'?


Putting the ethical question of the researcher aside, the fact that you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.

Since this researcher is apparently not an established figure in the kernel community, my expectation is that the patches have gone through the most rigorous review process. If you think the risk that malicious patches from this person have got in is high, it means that an unknown attacker deliberately concocting a complex kernel loophole would have an even higher chance of getting patches in.

While I think the researcher's actions are out of line for sure, this "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.


> This "I will ban you and revert all your stuff" retaliation seems like an emotional overreaction.

Fool me once. Why should they waste their time with extra scrutiny next time? Somebody deliberately misled them, so that's it, banned from the playground. It's just a no-nonsense attitude, without which you'd get nothing done.

If you had a party in your house, and some guest you don't know, whom you invited in assuming good faith, turned out to have deliberately pooped on the rug in your spare guest room while nobody was looking... next time you have a party, what do you do? Let them in but keep an eye on them? Ask your friends to never leave this guest alone? Or simply deny entrance, so that you can focus on having fun with people you trust and newcomers who have not shown any malicious intent?

I know what I'd do. Life is too short for BS.


> Why should they waste their time with extra scrutiny next time?

Because well funded malicious actors (government agencies, large corporations, etc) exist and aren't so polite as to use email addresses that conveniently link different individuals from the group together. Such actors don't publicize their results, aren't subject to IRB approval, and their exploits likely don't have such benign end goals.

As far as I'm concerned the University of Minnesota did a public service here by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of software. We ought to have more such unannounced penetration tests.


We don't have the full communication, and I understand that the intention is to be stealthy (why use a university email that can be linked to the previous research, then?). However, the researcher's response seems to be disingenuous:

> I sent patches on the hopes to get feedback. We are not experts in the Linux kernel and repeatedly making these statements is disgusting to hear.

This is after they're caught; why continue lying instead of apologizing and explaining? Is the lying also part of the experiment?

On top of that, they played the victim card; you can see why people would be triggered by this level of dishonesty:

> I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies


From reading other comments about the context surrounding these events, it sounds to me like this probably was an actual newbie who made an honest (if lazy) mistake and was then caught up in the controversy surrounding his advisor's past research.

Or perhaps it really is a second attempt by his advisor at an evil plot to sneak more buggy patches into the kernel for research purposes? Either way, the response by the maintainers seems rather disproportionate to me. And either way, I'm ultimately grateful for the (apparently unwanted?) attention being drawn to the (apparent lack of) security surrounding the Linux kernel patch review process.


> it sounds to me like this probably was an actual newbie who made an honest (if lazy) mistake

Who then replies with a request for "cease and desist"? Not sure that's the right move for a humble newbie.


They should not have experimented on human subjects without consent, regardless of whether the result is considered benign.

Yes, malicious actors have a head start, because they don't care about the rules. It doesn't mean that we should all throw out the rules and compete with malicious actors in this race to the bottom.


I'm not aware of any law requiring consent in cases such as this, only conventions enforced by IRBs and journal submission requirements.

I also don't view unannounced penetration testing of an open source project as immoral, provided it doesn't consume an inordinate amount of resources or actually result in any breakage (ie it's absolutely essential that such attempts not result in defects making it into production).

When the Matrix servers were (repeatedly) breached and the details published, I viewed it as a Good Thing. Similarly, I view non-consensual and unannounced penetration testing of the Linux kernel as a Good Thing given how widely deployed it is. Frankly I don't care about the sensibilities of you or anyone else - at the end of the day I want my devices to be secure and at this point they are all running Linux.


I don’t see where I claim that this is a legal matter. There are many things which are not prohibited by law that you can do to a fellow human being that are immoral and might result in them blacklisting you forever.

That you care about something or not also seems to be irrelevant, unless you are part of either the research or the kernel maintainers. It’s not about your or my emotional inclination.

Acquiring consent before experimenting on human subjects is an ethical requirement for research, regardless of whether it is a hurdle for the researchers. There is a reason that IRBs exist.

Not to mention that they literally proved nothing, other than that vulnerable patches can be merged into the kernel. But did anybody claim that such a threat is impossible anyway? The kernel has vulnerabilities and it will continue to have them. We already knew that.


>I view non-consensual and unannounced penetration testing of the Linux kernel as a Good Thing...

So what other things do you think it's appropriate to do without acquiring consent, based on some perceived justification of ubiquity? It's a slippery slope all the way down, and there is a reason for all the ceremony and hoopla involved in this type of thing. If you cannot demonstrate mastery of doing research on human subjects and processes the right way, and show you've done your footwork to consider the impact of not doing it that way (i.e. IRB fully engaged, you've gone out of your way to make sure they understand, and you've at least reached out to one person in the group under test to give a surreptitious heads up (like Linus)), you have no business playing it fast and loose, and you absolutely deserve censure.

No points awarded for half-assing. Asking forgiveness may oftentimes be easier than asking permission, but in many areas the impact of doing so goes far beyond mere inconvenience to the researcher in the costs it can exact.

>at the end of the day I want my devices to be secure and at this point they are all running Linux.

That is orthogonal to the outcome of the research that was being done, as by definition running Linux would include running with a new vulnerability injected. What you really want is to know your device is doing what you want it to, and none of what you don't. Screwing with kernel developers does precious little to accomplish that. Same logic applies with any other type of bug injection or intentioned software breakage.


> I'm not aware of any law requiring consent in cases such as this

In the same way, there is no law requiring Linux kernel maintainers to review patches sent by this university.

"it was not literally illegal" is not a good reasoning for why someone should not be banned.


> As far as I'm concerned the University of Minnesota did a public service here by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of software. We ought to have more such unannounced penetration tests.

This "attack" did not reveal anything interesting. It's not like any of this was unknown. Of course you can get backdoors in if you try hard enough. That does not surprise anybody.

Imagine somebody goes with an axe, breaks your garage door, poops on your Harley, leaves, and then calls you and tells you "Oh, btw, it was me. I did you a service by facilitating a mildly sophisticated and ultimately benign attack against the process surrounding an absolutely critical piece of your property. Thank me later." And then they expect you to get let in when you have a party.

It doesn't work that way. Of course the garage door can be broken with an axe. You don't need a "mildly sophisticated attack" to illustrate that while wasting everybody's time.


You’re completely right, except in this case it’s banning anyone who happened to live in the same house as the offender, at any point in time...


By keeping the paper, UMN is benefiting (in citations and research result count). Universities are supposed to have processes for punishing unethical research. Unless the University retracts the paper and fires the researcher involved, they have not made amends.


IP bans often result in banning an entire house.

"It was my brother on my unsecured computer" is an excuse I've heard a few times by people trying to shirk responsibility for their ban-worthy actions.

Geographic proximity to bad actors is sometimes enough to get caught in the crossfire. While it might be unfair, it might also be seen as holding a community and its leadership responsible for failing to hold members of their community responsible and in check with their actions. And, fair or not, it might also be seen as a pragmatic option in the face of limited moderation tools and time. If you have a magic wand to ban only the bad-faith contributions by the students influenced by the professor in question, I imagine the kernel devs will be more than happy to put it to use.

Is it really just the one professor, though?


No, it's not. It's banning anyone who hides behind their UMN email address, because it's now been proven that UMN.edu commits have included bad actors.


To continue the analogy, it would be like finding out that the offender’s friends knew they were going to do that and were planning on recording the results. Banning all involved parties is reasonable.


I'd amend to:

"... planning on recording the event to show it on YouTube for ad revenue and Internet fame."

In this case, the offender's friends are benefiting from the research. I think that needs to be emphasized. The university benefits from this paper being published, or at least expected to. That should not be overlooked.


The fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.

Basically, yes. The kernel review process does not catch 100% of intentionally introduced security flaws. It isn't perfect, and I don't think anyone is claiming that it is perfect. Whenever there's an indication that a group has been intentionally introducing security flaws, it is just common sense to go back and put a higher bar on reviewing it for security.


Not all kernel reviewers are being paid by their employer to review patches. Kernel reviews are "free" to the contributor because everyone operates on the assumption that every contributor wants to make Linux better by contributing high-quality patches. In this case, multiple people from the University have decided that reviewers' time isn't valuable (so it's acceptable to waste it) and that the quality of the Kernel isn't important (so it's acceptable to make it worse on purpose). A ban is a completely appropriate response to this, and reverting until you can review all the commits is an appropriate safety measure.

Whether or not this indicates flaws in the review process is a separate issue, but I don't know how you can justify not reverting all the commits. It'd be highly irresponsible to leave them in.


I guess what I am trying to get at is that this researcher's actions do have some merit. This event does raise awareness of what a sophisticated attacker group might try to do to the kernel community. Admitting this would be the first step to hardening the kernel review process to prevent this kind of harm from happening again.

What I strongly disapprove of is that the researcher apparently took no steps to prevent the real-world consequences of malicious patches getting into the kernel. I think the researcher should:

- Notify the kernel community promptly once malicious patches got past all review processes.

- Time these actions well, so that malicious patches won't get into a stable branch before they can be reverted.

----------------

Edit: reading the paper provided above, it seems that they did do both actions above. From the paper:

> Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.

So, unless the kernel maintenance team has another side of the story, the questions of ethics could only go as far as "wasting the kernel community's time" rather than creating real-world loopholes.


That paper came out a year ago, and they got a lot of negative feedback about it, as you might expect. Now they appear to be doing it again. It's a different PhD student with the same advisor as last time.

This time two reviewers noticed that the patch was useless, and then Greg stepped in (three weeks later) saying that this was a repetition of the same bad behavior from the first study. This got a response from the author of the patch, who said that this and other statements were “wild accusations that are bordering on slander”.

https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...


> Now they appear to be doing it again. It's a different PhD student with the same advisor as last time.

I'd hate to be the PhD student that wastes away half a dozen years of his/her life writing a document on how to sneak buggy code through a code review.

More than being pointless and boring, it's a total CV black hole. It's the worst of both worlds: zero professional experience and zero academic portfolio to show for it.


True. They would be better off competing in the Underhanded C Contest (http://www.underhanded-c.org/).


We threw people off buildings to gauge how they would react, but were able to catch all 3 subjects in a net before they hit the ground.

Just because their actions didn’t cause damage doesn’t mean they weren’t negligent.


Strangers submitting patches to the kernel is completely normal, where throwing people off is not. A better analogy would involve decades of examples of bad actors throwing people off the bridge, then being surprised when someone who appears friendly does it.


Your analogy also isn't the best because it heavily suggests the nefarious behavior is easy to identify (throwing people off a bridge). This is more akin to people helping those in need to cross a street. At first, it is just people helping people. Then, someone comes along and starts to "help" so that they can steal money (introduce vulnerabilities) from the unsuspecting targets. Now, the street-crossing community needs to introduce processes (code review) to look out for these bad actors. Then, someone who works for the city and is wearing the city uniform (University of Minnesota CS department) comes along saying they're here to help, and the community is a bit more trustful as they have dealt with other city workers before. The city worker then steals from the people in need and proclaims "Aha, see how easy it is!" No one is surprised and everyone just thinks they are assholes.

Sometimes, complex situations don't have simple analogies. I'm not even sure mine is 100% correct.


While submitting patches is normal, submitting malicious patches is abnormal and antisocial. Certainly bad actors will do it, but by that logic these researchers are bad actors.

Just like bumping into somebody on the roof is normal, but you should always be aware that there’s a chance they might try to throw you off. A researcher highlighting this fact by doing it isn’t helping, even if they mitigate their damage.

A much better way to show what they are attempting to show would be to review historic commits and try to find places where malicious code slipped through, and how the community responded. Or to solicit experimenters to follow normal processes on a fake code base for a few weeks.


> Strangers submitting patches to the kernel is completely normal, where throwing people off is not.

Strangers submitting patches might be completely normal.

Malicious strangers trying to sneak vulnerabilities by submitting malicious patches devised to exploit the code review process is not normal. At all.

There are far more news reports of deranged people pushing strangers into traffic or in front of subways and trains than there are reports of malicious actors trying to sneak in vulnerable patches.


> Malicious strangers trying to sneak vulnerabilities by submitting malicious patches devised to exploit the code review process is not normal.

How could you possibly know that? In fact, I would suggest that you are completely and obviously wrong. Government intelligence agencies exist (among other things) and presumably engage in such behavior constantly. The reward for succeeding is far too high to assume that no one is trying.


We damaged the brake cables mechanics were installing into people's cars to find out if they were really inspecting them properly prior to installation!


To add... Ideally, they should have looped in Linus, or someone high-up in the chain of maintainers before running an experiment like this. Their actions might have been in good faith, but the approach they undertook (including the email claiming slander) is seriously irresponsible and a sure shot way to wreck relations.


Greg KH is "someone high-up in the chain." I remember submitting patches to him over 20 years ago. He is one of Linus's trusted few.


Yes, and the crux of the problem is that they didn’t get assent/buy-in from someone like that before running the experiment.


> This event does raise awareness of what a sophisticated attacker group might try to do to the kernel community.

The limits of code review are quite well known, so it appears very questionable what scientific knowledge is actually gained here. (Indeed, especially because of the known limits, you could very likely show them without misleading people, because even people who know to be suspicious are likely to miss problems, if you really wanted to run a formal study on some specific aspect. You could also study the history of in-the-wild bugs to learn about the review process.)


> The limits of code review are quite well known

That's factually incorrect. The arguments over what constitutes proper code review continue to this day, with few comprehensive studies about syntax, much less code reviews - not "do you have them" or "how many people" but methodology.

> it appears very questionable what scientific knowledge is actually gained here

The knowledge isn't from the study existing, but the analysis of the data collected.

I'm not even sure why people are upset at this, since it's a very modern approach to investigating how many projects are structured to this day. This was a daring and practical effort.


> The questions of ethics could only go as far as "wasting the kernel community's time" rather than creating real-world loopholes.

Under that logic, it's ok for me to run a pen test against your computers, right? ...because I'm only wasting your time.... Or maybe to hack your bank account, but return the money before you notice.

Slippery slope, my friend.


Ethics aside, warning someone that a targeted penetration test is coming will change their behavior.

> Under that logic, it's ok for me to run a pen test against your computers, right?

I think the standard for an individual user should be different than that for the organization who is, in the end, responsible for the security of millions of those individual users. One annoys one person, one prevents millions from being annoyed.

Donate to your open source projects!


> Ethics aside, warning someone that a targeted penetration test is coming will change their behavior.

They could discuss the idea and then perform the test months later? With the amount of patches that had to be reverted as precaution the test would have been well hidden in the usual workload even if the maintainers knew that someone at some point in the past mentioned the possibility of a pen test. How long can the average human stay vigilant if you tell them they will be robbed some day this year?


That's why for pen testing, you still warn people, but you do it high enough the chain that the individual behaviors and responses are not affected.


Does experimenting on people without their knowledge or consent pose an ethical question?


Obviously.


I think the question may have been rhetorical, and the intended answer the opposite of yours: No, it doesn't pose a question, since it obviously shouldn't be done.


I wouldn't put it past them to have a second unpublished paper, for the "we didn't get caught" timeline.

It would give the University some notoriety to be able to claim "We introduced vulnerabilities in Linux". It would put them in good terms with possible proprietary software sponsors, and the military.


> the fact you want to "properly review them at a later point in time" seems to suggest a lack of confidence in the kernel review process.

I don't think this necessarily follows. Rather it is fundamentally a resource allocation issue.

The kernel team obviously doesn't have sufficient resources to conclusively verify that every patch is bug-free, particularly if the bugs are intentionally obfuscated. Instead it's a more nebulous standard of "reasonable assurance", where "reasonable" is a variable function of what must be sacrificed to perform a more thorough review, how critical the patch appears at first impression, and information relating to provenance of the patch.

By assimilating new information about the provenance of the patch (that it's coming from a group of people known to add obfuscated bugs), that standard rises, as it should.

Alternatively stated, there is some desired probability that an approved patch is bug-free (or at least free of any bugs that would threaten security). Presumably, the review process applied to a patch from an anonymous source (meaning the process you are implying suffers from a lack of confidence) is sufficient such that the Bayesian prior for a hypothetical "average anonymous" reviewed patch reaches the desired probability. But the provenance raises the likelihood that the source is malicious, which drops the probability such that the typical review for an untrusted source is not sufficient, and so a "proper review" is warranted.
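
To make that concrete, here's a toy Bayesian update (Python, with numbers I invented purely for illustration): the same review process that yields an acceptable posterior for an average anonymous contributor stops being anywhere near sufficient once the prior that the source is malicious jumps, which is exactly what the "proper review at a later point" compensates for.

    # Toy model -- every probability below is made up for illustration only.
    def posterior_malicious(prior, p_pass_if_malicious, p_pass_if_benign):
        """P(patch is malicious | it passed normal review), by Bayes' rule."""
        num = p_pass_if_malicious * prior
        den = num + p_pass_if_benign * (1 - prior)
        return num / den

    # Average anonymous contributor: assume 1 in 10,000 patches is malicious.
    print(posterior_malicious(1e-4, 0.3, 0.95))  # ~0.00003 -- normal review suffices
    # Contributor from a group known to submit bad-faith patches.
    print(posterior_malicious(0.5, 0.3, 0.95))   # ~0.24 -- normal review clearly doesn't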

> it means that an unknown attacker deliberately crafting a complex kernel loophole would have an even higher chance of getting patches in.

That's hard to argue with, and ironically the point of the research at issue. It does imply that there's a need for some kind of "trust network" or interpersonal vetting to take the load off of code review.


> The kernel team obviously doesn't have sufficient resources to conclusively verify that every patch is bug-free, particularly if the bugs are intentionally obfuscated.

Nobody can assure that.


In a perfect world, I would agree that the work of a researcher who's not an established figure in the kernel community would be met with a relatively high level of scrutiny in review.

But realistically, when you find out a submitter had malicious intent, I think it's 100% correct to revisit any and all associated submissions since it's quite a different thing to inspect code for correctness, style, etc. as you would in a typical code review process versus trying to find some intentionally obfuscated security hole.

And, frankly, who has time to pick the good from the bad in a case like this? I don't think it's an overreaction at all. IMO, it's a reasonable simplification to assume that all associated contributions may be tainted.


Why? Linux is not the state. There is no entitlement to rights or presumption of innocence.

Linux is set up to benefit the linux development community. If UMinn has basically no positive contributions, a bunch of neutral ones and some negative ones banning seems the right call.

It's not about fairness, it's about whether the harm outweighs the benefits.


Not only that, good faith actors who are associated with UMN can still contribute, just not in their official capacity as UMN associates (staff, students, researchers, etc).


> Since this researcher is apparently not an established figure in the kernel community, my expectation is the patches have gone through the most rigorous review process

I think the best way to make this expectation reality is putting in the work. The second best way is paying. Doing neither and holding the expectation is a way to exist certainly, but has no impact on the outcome.


> seems to suggest a lack of confidence in the kernel review process

The reviews were done by kernel developers who assumed good faith. That assumption has been proven false. It makes sense to review the patches again.


I mean, it's the linux kernel. Think about what it's powering and how much risk there is involved with these patches. Review processes obviously aren't perfect, but usually patches aren't constructed to sneak sketchy code through. You'd usually approach a review in good faith.

Given that some patches may have made it through with holes, you pull them and re-approach them with a different mindset.


> You'd usually approach a review in good faith.

> it's the linux kernel. Think about what it's powering and how much risk there is involved with these patches

Perhaps the mindset needs to change regarding security? Actual malicious actors seem unlikely to announce themselves for you.


Doesn't this basically prove the original point that if someone or an organization wished to compromise linux, they could do so with crafted bugs in patches?


Just wanted you to know that I think you're an amazing programmer


This might not be on purpose. If you look at their article they're studying how to introduce bugs that are hard to detect not ones that are easy to detect.


> Thanks for the support.

THANK YOU! After reading the email chain, I have a much greater appreciation for the work you do for the community!


My deepest thanks for all your work, as well as for keeping the standards high and the integrity of the project intact!


I would be interested to know how many committers actually work for private or state intelligence agencies.


you know what they say, curiosity killed the cat


Well, you or whoever was the responsible maintainer completely failed in reviewing these patches, which is your whole job as a maintainer.

Just reverting those patches (which may well be correct) makes no sense, you and/or other maintainers need to properly review them after your previous abject failure at doing so, and properly determine whether they are correct or not, and if they aren't how they got merged anyway and how you will stop this happening again.

Or I suppose step down as maintainers, which may be appropriate after a fiasco of this magnitude.


On the contrary, it would be the easy, lazy way out for a maintainer to say “well this incident was a shame now let’s forget about it.” The extra work the kernel devs are putting in here should be commended.

In general, it is the wrong attitude to say, oh we had a security problem. What a fiasco! Everyone involved should be fired! With a culture like that, all you guarantee is that people cover up the security issues that inevitably occur.

Perhaps this incident actually does indicate that kernel code review procedures should be changed in some way. I don’t know, I’m not a kernel expert. But the right way to do that is with a calm postmortem after appropriate immediate actions are taken. Rolling back changes made by malicious actors is a very reasonable immediate action to take. After emotions have cooled, then it’s the right time to figure out if any processes should be changed in the future. And kernel devs putting in extra work to handle security incidents should be appreciated, not criticized for their imperfection.


Greg explicitly stated "Because of this, all submissions from this group must be reverted from the kernel tree and will need to be re-reviewed again to determine if they actually are a valid fix....I will be working with some other kernel developers to determine if any of these reverts were actually valid changes, and if so, will resubmit them properly later. For now, it's better to be safe."


If the IRB is any good, the professor doesn't get that. Universities are publish or perish, and the IRB should force the withdrawal of all papers they submitted. This might be enough to fire the professor with cause - including removing any tenure protection they might have - which means they get a bad reference.

I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct take time to complete, and I want them to do their job correctly, so I'll give them that time. (There is the possibility that these are good-faith patches and someone in the Linux community just hates this person - that seems unlikely, but until a proper independent investigation is done I'll leave that open.)


See page 9 of the already published paper:

https://raw.githubusercontent.com/QiushiWu/qiushiwu.github.i...

> We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.


> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research.

I'm not sure how it affects things, but I think it's important to clarify that they did not obtain the IRB-exempt letter in advance of doing the research, but after the ethically questionable actions had already been taken:

The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained). Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning. ... We would like to thank the people who suggested us to talk to IRB after seeing the paper abstract.

https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....


I'm a bit shocked that the IRB gave an exemption letter - are they hoping that the kernel maintainers won't take the (very reasonable) step towards legal action?


What "legal action" do you think applies here?


Intentional misrepresentation that causes harm is commonly referred to as “fraud.”


I don't think any legal actions need to be taken. UMN can longer participate. Tough shit.


Is it illegal to intentionally create security risks?


I honestly do not know


I'd guess they may not have understood what actually happened, or were leaning heavily on the IEEE reviewers having no issues with the paper, as at that point it had already been accepted.


> We send the emails to the Linux community and seek their feedback.

That's not really what they did.

They sent the patches, and the patches were either merged or rejected.

And they never let anybody know that they had introduced security vulnerabilities in the kernel on purpose until they got caught and people started reverting all the patches from their university and banned the whole university.


This is not what happened according to them:

https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

> (4). Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.


It'd be great if they pointed to those "please don't merge" messages on the mailing list or anywhere.

Seems like there are some patches already on stable trees [1], so they're either lying, or they didn't care if those "don't merge" messages made anybody react to them.

1 - https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...


The paper doesn't cite specific commits used. It's possible that any of the commits in stable are actually good commits and not part of the experiment. I support the ban/revert, I'm just pointing out there's a 3rd option you didn't touch on.


Patches with built-in bugs made it to stable: https://lore.kernel.org/linux-nfs/YIAta3cRl8mk%2FRkH@unreal/.


Here's the commit specifically identified by Leon Romanovsky as having a "built-in bug"

https://github.com/torvalds/linux/commit/8e949363f017


That commit is from Aditya Pakki who I don't believe is affiliated with the paper in question, whose only authors are Qiushi Wu, and Kangjie Lu.


We have 4 people, with the students Qiushi Wu and Aditya Pakki introducing the faulty patches, and the 2 others, Prof. Kangjie Lu and Asst. Prof. Wengwen Wang, patching vulnerabilities in the same area. Banning the leader seems OK to me, even if he produced some good fixes and software to detect it. The only question is Wang, who is now in Georgia and was never caught. Maybe he left Lu at UMN because of his questionable ethics.


At least one of Wang’s patches has been double reviewed and the reversion NACK’d - in other words it was a good patch.


I've looked at all of Wang's patches and they seemed to be all good.

The main culprit seems to be only Qiushi Wu. He is also the one who wrote the paper.


Aditya Pakki is an RA under Kangjie Lu.


Also, they are talking of three cases. However, the list of patches to be reverted by gregkh is far longer than three, more than a hundred. Most of the first batch look sufficiently similar that I would guess all of them are part of this "research". So the difference in numbers alone points to them most probably lying.


I was more ambivalent about their "research" until I read that "clarification." It's weaselly bullshit.

>> The work taints the relationship between academia and industry

> We are very sorry to hear this concern. This is really not what we expected, and we strongly believe it is caused by misunderstandings

Yeah, misunderstandings by the university that anyone, ever, in any line of endeavor would be happy to be purposely fucked with as long as the perpetrator eventually claims it's for a good cause. In this case the cause isn't even good, they're proving the jaw-droppingly obvious.


The first step of an apology is admitting the misdeed. Here they are explicitly not acknowledging that what they did was wrong, they are still asserting that this was a misunderstanding.


Even their choice of wording ("We are very sorry to hear this concern.") is the blend of word fuckery that conveys the idea they care nothing about what they did or why it negatively affected others.


>We are very sorry to hear this concern.

..."Because if we're lucky tomorrow, we won't have to deal with questions like yours ever again." --Firesign Theater, "I Think We're All Bozos on the Bus"


> they're proving the jaw-droppingly obvious.

Yet we do nothing about it? I wouldn't call that jaw-droppingly obvious. If anything, without this, I'm pretty sure anyone would argue that it would be caught well before making its way into stable.


I've literally never come across an open source project that was thought to have a bullet proof review process or had a lack of people making criticisms.

What they do almost universally lack is enough people making positive contributions (in time, money, or both).

This "research" falls squarely into the former category and burns resources that could have been spent on the latter.


This is zero percent different from a bad actor and hopefully criminal. I think a lot of maintainers work for large corporations like Microsoft, Oracle, Ubuntu, Red Hat, etc... I think these guys really stepped in it.


> And they never let anybody know that they had introduced security vulnerabilities in the kernel on purpose...

Yes, that's the whole point! The real malicious actors aren't going to notify anyone that they're injecting vulnerabilities either. They may be plants at reputable companies, and they'll make it look like an "honest mistake".

Had this not been caught, it would've exposed a major flaw in the process.

> ...until they got caught and people started reverting all the patches from their university and banned the whole university.

Either these patches are valid fixes, in which case they should remain, or they are intentional vulnerabilities, in which case they should've already been reviewed and rejected.

Reverting and reviewing them "at a later date" just makes me question the process. If they haven't been reviewed properly yet, it's better to do it now instead of messing around with reverts.


This reminds me of that story about Go Daddy sending everyone "training phishing emails" announcing that they had received a company bonus - with the explanation that this is ok because it is a realistic pretext that real phishing may use.

While true, it's simply not acceptable to abuse trust in this way. It causes real emotional harm to real humans, and while it also may produce some benefits, those do not outweigh the harms. Just because malicious actors don't care about the harms shouldn't mean that ethical people shouldn't either.


This isn't some employer-employee trust relationship. The whole point of the test is that you can't trust a patch just because it's from some university or some major company.


The vast majority of patches are not malicious. Sending a malicious patch (one that is known to introduce a vulnerability) is a malicious action. Sending a buggy patch that creates a vulnerability by accident is not a malicious action.

Given the completely unavoidable limitations of the review and bug testing process, a maintainer has to react very differently when they have determined that a patch is malicious - all previous patches past from that same source (person or even organization) have to be either re-reviewed at a much higher standard or reverted indiscriminately; and any future patches have to be rejected outright.

This puts a heavy burden on a maintainer, so intentionally creating this type of burden is a malicious action regardless of intent. Especially given that the intent was useless in the first place - everyone knows that patches can introduce vulnerabilities, either maliciously or by accident.


> The vast majority of patches are not malicious.

The vast majority of drunk drivers never kill anyone.

> Sending a malicious patch (one that is known to introduce a vulnerability) is a malicious action.

I disagree that it's malicious in this context, but that's irrelevant really. If the patch gets through, then that proves one of the most critical pieces of software could relatively easily be infiltrated by a malicious actor, which means the review process is broken. That's what we're trying to figure out here, and there's no better way to do it than replicate the same conditions under which such patches would ordinarily be reviewed.

> Especially given that the intent was useless in the first place - everyone knows that patches can introduce vulnerabilities, either maliciously or by accident.

Yes, everyone knows that patches can introduce vulnerabilities if they are not found. We want to know whether they are found! If they are not found, we need to figure out how they slipped by and how to prevent that from happening in the future.


Since humanity still hasn't fixed the problem of drunk drivers I guess I'll start driving drunk on the weekends to illustrate the flaws of the system.


> If the patch gets through, then that proves one of the most critical pieces of software could relatively easily be infiltrated by a malicious actor, which means the review process is broken.

That is a complete misunderstanding of the Linux dev process. No one expects the first reviewer of a patch (the person that the researchers were experimenting on) to catch any bug. The dev process has many safeguards - several reviewers, testing, static analysis tools, security research, distribution testing, beta testers, early adopters - that are expected to catch bugs in the kernel at various stages.

Trying to deceive early reviewers into accepting malicious patches for research purposes is both useless research and hurtful to the developers.


Open source runs on trust, of both individuals and institutions. There’s no alternative. Processes like code review can supplement but not replace it.


So my question is, what is a kernel? Is it a security project? Should security products rely on trust, or assume malicious intent?


Open source products rely on trust. There is no way to build a trust-less open source product. Of course, the old mantra of trust, but verify is very important as well.

But the linux kernel is NOT a security product - it is a kernel. It can be used in entirely disconnected devices that couldn't care less about security, as well as in highly secure infrastructure that powers the world. The ultimate responsibility of delivering a secure product based on Linux lies with the people delivering a secure product based on the kernel. The kernel is essentially a library, not a product. If someone is assuming they can build a secure product by trusting Linux to be "secure" then they are simply wrong, and no amount of change in the Linux dev process will fix their assumption.

Of course, you want the kernel to be as secure as possible, but you also want many other things from the kernel as well (it should be featureful, it should be backwards compatible with userspace, it should run on as many architectures as needed, it should be fast and efficient, it should be easy to read and maintain etc).


> Yes, that's the whole point!

Well, in real life, you can't go punch someone in the face to teach them a "point". If you do so, you'll get punished.

> Reverting and reviewing them "at a later date" just makes me question the process.

I don't think anybody realistically thought that the kernel review process is rock solid against malicious actors anyway. What exactly does the paper expose?


> Yes, that's the whole point! The real malicious actors aren't going to notify anyone that they're injecting vulnerabilities either. They may be plants at reputable companies, and they'll make it look like an "honest mistake".

This just turns the researchers into black hats. They are just making it look like "a research paper."


The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research.

How is this not human research? They experimented on the reactions of people in a non-controlled environment.


Sounds like the IRB of UMn needs some scrutiny as well.


[flagged]


> Yes, the review process does involve humans

It doesn’t just “involve humans” it is first and foremost the behavior of specific humans.

> but the humans (reviewers) are not the research subject.

The study is exactly studying their behavior in a particular context. They are absolutely the subjects.


Not sure why you are so obsessed with this. Yes, this process does involve humans, but the process has aspects that can be examined independently of the humans.

This study does not care about the reviewers, it cares about the process. For example, you can certainly improve the process without replacing any reviewers. It is just blatantly false to claim the process is all about humans.

Another example, the review process can even be totally conducted by AIs. See? The process is not all about humans, or human behavior.

To make this even more understandable, consider the process of building a LEGO set: you need a human to build it, but you can examine the process of building the LEGO without examining the humans who build it.


This study does not care about the reviewers, it cares about the process. For example, you can certainly improve the process without replacing any reviewers. It is just blatantly false to claim the process is all about humans.

This was all about the reaction of humans. They sent in text with a deceptive description and tried to get a positive answer even though the text was not wholly what was described. It was a psych study in an uncontrolled environment with people who did not know they were participating in a study.

How they thought this was acceptable under their own institution's Participant's Bill of Rights https://research.umn.edu/units/hrpp/research-participants/pa... is a true mystery.


No. This is not all about the reaction of humans. This is not a psych study. I have explained this clearly in previous comments. If you believe the process of doing something is all about humans, I have nothing to add.



How does this link have anything to do with my comments?


It provides details of why it was a human experiment.


Which tweet?


People are obsessed because you're trying to excuse the researchers behavior as ethical.

"Process" in this case is just another word for people because ultimately, the process being evaluated here is the human interaction with the malicious code being submitted.

Put another way, let's just take out the human reviewer, pretend the maintainers didn't exist. Does the patch get reviewed? No. Does the patch get merged into a stable branch? No. Does the patch get evaluated at all? No. The whole research paper breaks down and becomes worthless if you remove the human factor. The human reviewer is _necessary_ for this research, so this research should be deemed as having human participants.


See my top comment. I didn't "try to excuse the researchers behavior as ethical".


You did. You just won't accept it because you don't want to. Every time you try to draw the focus of the conversation to "it's a process study" you're trying to diminish the severity of what the researchers did here.

How was this study conducted? For every patch that the researchers sent, what process did it go through?

The answer is, it was reviewed and accepted by a human. That's it. Full stop. There's your human subject right there in the middle of your research work. It's not possible to conduct this research without that human subject interacting with your research materials. You do not get to discount that human participation because "Oh well we COULD replace them with an AI in the future". Well your study didn't, which means it needs to go through the human subjects review process.

When you claim that this study was about a process, you're literally taking the researchers side. That's what they've been insisting on as the reason why this study is ethical and they did not need to inform or obtain consent from the kernel development team. That's the excuse they used to get out of IRB's review process so they can be considered "not a human subjects research". That's the excuse they needed so they can proceed without having to get a signed consent form. They did all of this so they could conduct a penetration test without the organization they were attacking knowing about it.

You don't seem to be able to comprehend why or how the maintainers feel deceived here, or that their feelings are legitimate. If you did, you wouldn't keep banging on about "oh this is just a process study, the people don't matter, it's all isolated from humans". Funny enough, the people who DID interact with this research DID feel they mattered and DID feel deceived. The whole point of IRB was to prevent exactly this; researchers conducting unethical research which would only come to light after the study concluded and the injured parties complained (and deceit IS a form of harm). For research which is supposed to be isolated from humans and thus didn't see the need in obtaining a signed consent form, that's not really the outcome you expect to see if everything was on the up and up. Another form of harm from this study, the maintainers now have to go over everything they submitted again to ensure there's nothing else to be worried about. That's a lot of wasted man hours and definitely constitutes harm as well. All of University of Minnesota now has less access to the project after getting banned, even more collateral damage and harm caused to their own institution.

Let's be honest. If the researchers were able to sneak their code into a stable or distribution version of the kernel, they'd be praising themselves to high heaven. Look at how significant our results were, we fucked up all of Linux! The only reason they didn't is because at least they can recognize that would be going a step too far. They're just looking for excuses to not get punished at this point. Same with the IRB. The IRB is now trying to wiggle out of the situation by insisting everything is ok. The IRB is also made up of professors who have a reputation to maintain! They know they let something through that should never have been approved in its current form. Most human subject research NEVER gets this kind of blowback, and the fact that this one did means they screwed up and they know it.

No ethics review board considers a multi page, multi forum, lengthy discussion on the ethics of a study they approved as a good sign. Honestly, any study that gets this much attention would be considered a huge success in any other situation.


"The answer is, it was reviewed and accepted by a human. That's it. Full stop. There's your human subject right there in the middle of your research work. "

That's not the correct or relevant criterion. If you were correct, testing airport security and testing anti-money-laundering checks at a bank would amount to human experiments. In fact it's hard to think of any study of the real world that would not become a "human experiment".

"When you claim that this study was about a process, you're literally taking the researchers side."

That's some seriously screwed up logic right there.

"Weinstein was a Nazi and a serial killer, if you disagree with me you are taking his side"


Um, academics aren't allowed to assemble bombs and then try and sneak them onto planes with the excuse that it's not a human trial. That'd be absurd.

It's easy to think of studies that don't involve humans so that statement is just wilful obfuscation. Physics, chemistry, heck lots of biology, and of course computer science are primarily made of studies on objects rather than people. Of those that are done on people they are almost always done on people who know they are the subject of an experiment. Very few studies are like this one.


I am sorry, your argument is all over the place. What on earth are you arguing? That a human trial does not excuse what would otherwise be a crime? That airport security is not tested with real bombs? That every study outside of the natural sciences is a human experiment?

Studies of airport security are done all the time; that's how we know it's terribly ineffective. The staff of the airport are not told about them, and they are not human experiments.

Experiments on people have a specific definition that goes beyond "a human is present".


Airport security staff consent to this type of testing at hiring time so testing can be random, and not just anyone can try to sneak a weapon through security to see if it's caught as "a test".

Perhaps a similar approach that allows randomness with some sort of agreement with the maintainers could have prevented this issue while preserving the integrity of the study.


No. Studying the process is not studying the humans involved in the process. No matter how many words you put in there, A != B.


Unfortunately “no” doesn’t constitute a rebuttal, and the responding commenter makes many valid points.

It is self-evident that this study tangibly involved people in the scope, those people did not provide consent prior, and now openly state their grievances. It is nothing short of arguing in bad faith to claim otherwise.


[flagged]


At this point you’re doing nothing more than reiterating falsehoods, and it appears you’ve nothing constructive to add.


Repeating the same thing over and over does not make it a fact.

Maybe the stated aim of the research was to study the process. But what they actually did was study how the people involved implemented it.

Being publicly manipulated into merging buggy patches, and wasting hours of people's time are two pretty obvious effects this study had that could cause some amount of distress and thus it cannot be dismissed as simply "studying the process".


This is exactly what I would have said: this sort of research isn't 'human subjects research' and therefore is not covered by an IRB (whose job it is to protect the university from legal risk, not to identify ethically dubious studies).

It is likely the professor involved here will be fired if they are pre-tenure, or sanctioned if post-tenure.


How in the world is conducting behavioral research on kernel maintainers to see how they respond to subtly-malicious patches not "human subject research"?


In the restricted sense of Title 45, Part 46, it's probably not quite human subject research (see https://www.hhs.gov/ohrp/regulations-and-policy/regulations/... ).

Of course, there are other ethical and legal requirements that you're bound to, not just this one. I'm not sure which requirements IRBs in the US look into though, it's a pretty murky situation.


How so?

It seems to qualify per §46.102(e)(1)(i) ("Human subject means a living individual about whom an investigator [..] conducting research: (i) Obtains information [...] through [...] interaction with the individual, and uses, studies, or analyzes the information [...]")

I don't think it'd qualify for any of the exemptions in 46.104(d): 1 requires an educational setting, 2 requires standard tests, 3 requires pre-consent and interactions must be "benign", 4 is only about the use of PII with no interactions, 5 is only about public programs, 6 is only about food, 7 is about storing PII and not applicable and 8 requires "broad" pre-consent and documentation of a waiver.


Rather than arguing about the technical details of the law, let me just clarify: IRBs would actively reject a request to review this. It's not in their (perceived) purview.

It's not worth arguing about this; if you care, you can try to change the law. In the meantime, IRBs will do what IRBs do.


If the law, as written, does actually classify this as human research, it seems like the correct response is to sue the University for damages under that law.

Since IRBs exist to minimize liability, it seems like that would be the fastest route towards change (assuming you have legal standing).


Woah woah woah, no need to whip out the litigation here. You could try that, but I am fairly certain you would be unsuccessful. You would be thrown out with "this does not qualify under the law" before it made it to court and it wouldn't have much bearing except to bolster the university.


It obviously qualifies and the guy just quoted the law at you to prove it.

Frankly universities and academics need to be taken to court far more often. Our society routinely turns a blind eye to all sorts of fraudulent and unethical practices inside academia and it has to stop.


That's still 10 thousand words you're linking to…

I had a look at section §46.104 https://www.hhs.gov/ohrp/regulations-and-policy/regulations/... since it mentioned exemptions, and at (d) (3) inside that. It still doesn't apply: there's no agreement to participate, it's not benign, it's not anonymous.


If there's some deeply legalistic answer explaining how the IRB correctly interpreted their rules to arrive at the exemption decision, I believe it. It'll just go to show the rules are broken.

IRBs are like the TSA. Imposing annoyance and red tape on the honest vast-majority while failing to actually filter the 0.0001% of things they ostensibly exist to filter.


Are you expecting that science and institutions are rational? If I were on the IRB, I wouldn't have considered this, since it's not a sociological experiment on kernel maintainers; it's an experiment to inject vulnerabilities into source code. That's not what IRBs are qualified to evaluate.


> it's an experiment to inject vulnerabilities into source code

I'm guessing it passed for similar reasoning, along with the reviewers being unfamiliar with how "vulnerabilities are injected." To get the bad code in, the researcher needed to have the code reviewed by a human.

So if you rephrase "inject vulnerability" as "sneak my way past a human checkpoint", you might have a better idea of what they were actually doing, and might be better equipped to judge its ethical merit -- and if it qualifies as research on human subjects.

To my thinking, it is quite clearly human experimentation, even if the subject is the process rather than a human individual. Ultimately, the process must be performed by a human, and it doesn't make sense to me that you would distinguish between the two.

And the maintainers themselves express feeling that they were the subject of the research, so there's that.


Testing airport security by putting dangerous goods in your luggage is not human experimentation. Testing a bank's security is not human experimentation. Testing border security is not.

What makes people reviewing the Linux kernel more 'human' than any of the above?


Tell that to the person on the hook if or when they get caught.


It's not an experiment in computer science; these guys aren't typing code into an editor and testing what the code does after they've compiled it. They're contributing their vulnerabilities to a community of developers and testing whether these people accept them. It is absolutely nothing other than a sociological experiment.


This reminds me of a few passages in the SSC post on IRBs[0].

Main point is that IRBs were created in response to some highly unethical and harmful "studies" being carried out by institutions thought of as top-tier. Now they are considered to be a mandatory part of carrying out ethical research. But if you think about it, isn't outsourcing all sense of ethics to an organization external to the actual researchers kind of the opposite of what we want to do?

All institutions tend to be corruptible. Many tend to respond to their actual incentives rather than high-minded statements about what they're supposed to be about. Seems to me that promoting the attitude of "well an IRB approved it, so it must be all right, let's go!" is the exact opposite of what we really want.

All things considered, it's probably better to have something there than nothing. But you still have to be responsible for your own decisions. "I bamboozled our lazy IRB into approving our study, so I'm not responsible for it being obviously a bad idea" just isn't good enough.

If you think about it, it's actually kind of meta to the code review process they were "studying". Just like IRBs, code review is good, but no code review process will ever be good enough to stop every malicious actor every time. It will always be necessary to track the reputation of contributors and be able to mass-revert contributions from contributors later determined to be actively malicious.

[0] https://slatestarcodex.com/2017/08/29/my-irb-nightmare/


I guess I have a different perspective. I know a fair number of world class scientists; like, the sort of people you end up reading about as having changed the textbook. One of these people, a well-known bacteriologist, brought his intended study to the IRB for his institution (UC Boulder), who said he couldn't do it because of various risks due to studying pathogenic bacteria. The bacteriologist, who knew far more about the science than the IRB, explained everything in extreme detail and batted away each attempt to shut him down.

Eventually, the IRB, unhappy at his behavior, said he couldn't do the experiment. He left for another institution (UC San Diego) immediately, having made a deal with the new dean to go through expedited review. It was a big loss for Boulder and TBH, the IRB's reasoning was not sound.


Communities aren’t people? What in the actual fuck is going on with this university’s IRB?!


They weren't studying the community, they were studying the patching process used by that community, which a normal IRB would and should consider to be research on a process and therefore not human research. That's how they presented it to the IRB, so it got passed even though what they were claiming was clearly bullshit.

This research had the potential to cause harm to people despite not being human research and was therefore ethically questionable at best. Because they presented the research as not posing potential harm to real people that means they lied to the IRB, which is grounds for dismissal and potential discreditation of all participants (their post-graduate degrees could be revoked by their original school or simply treated as invalid by the educational community at large). Discreditation is unlikely, but loss of tenure for something like this is not out of the question, which would effectively end the professor's career anyway.


> This research had the potential to cause harm to people

I don't buy it, and you fail to back that claim up at all.


At a minimum, is needlessly increasing the workload of an unwitting third party considered a harm? I ask, because I’d be pretty fucking mad if someone came along and added potentially hundreds of man-hours of work in the form of code review to my life.


Considering that the number of patches submitted was quite limited I don't think the original research paper would qualify as a DoS attack. The workload imposed by the original research appears to have been negligible compared to the kernel effort as a whole, no more than any drive by patch submission might result in. So no, I wouldn't personally view that as harmful.

As to the retroactive review now being undertaken, as far as I'm concerned that decision is squarely on the maintainers. (Honestly, it comes across as an emotional outburst to me.)


If you steal only $5 from me, I am still harmed. If you break my computer monitor, then even if I can afford a replacement, I am still harmed.


Wasting time is not considered stealing. If it were, there would be a long queue to collect money from: all the ad agencies, telephone menus where you have to go 10 levels deep before you speak to a person, anyone bothering people on the street with questions, anyone making your possessions dirty would be a criminal. Anyone going on a date that doesn't work out would be a criminal.


> Wasting time is not considered stealing.

Nor is breaking my computer monitor.

Theft was meant as an example of a type of harm, not a complete list of all types of harm.

Something doesn’t have to be illegal to be harmful.


> Wasting time is not considered stealing.

Sure, but I'm still going to be pretty annoyed with you. And if you've wasted my time by messing with a system or process under my control then I'm probably going to block you from that system or process.

As a really prosaic example, I've blocked dozens - if not hundreds - of recruiter email addresses on my work email account.


In my experience in university research, correctly portraying the ethical impact is unfortunately the researchers' burden. Given their lack of documentation of the request for IRB exemption, the most plausible explanation in my view is that they misconstrued the impact of the research.

It seems very possible to me that an IRB wouldn't have accepted their proposed methodology if they hadn't received an exemption.


> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.

Is there anyone on hand who could explain how what looks very much like a social engineering attack is not "human research"?


This is, at the very least, worth an investigation from an ethics committee.

First of all, this is completely irresponsible, what if the patches would've made their way into a real-life device? The paper does mention a process through which they tried to ensure that doesn't happen, but it's pretty finicky. It's one missed email or one bad timezone mismatch away from releasing the kraken.

Then playing the slander victim card is outright stupid, it hurts the credibility of actual victims.

The mandate of IRBs in the US is pretty weird but the debate about whether this was "human subject research" or not is silly, there are many other ethical and legal requirements to academic research besides Title 45.


> there are many other ethical and legal requirements to academic research besides Title 45.

Right. It's not just human subjects research. IRBs vet all kinds of research: polling, surveys, animal subjects research, genetics/embryo research (potentially even if not human/mammal), anything which could be remotely interpreted as ethically marginal.


If we took the case into the real world and it became "we decided to research how many supports we could remove from this major road bridge before someone noticed", I'd hope the IRB wouldn't just write it off as "not human research so we don't care".


I agree. I personally don't care if it meets the official definition of human subject research. It was unethical, regardless of whether it met the definition or not. I think the ban is appropriate and I wouldn't lose any sleep if the ban were also enacted by other open-source projects and communities.

It's a real shame because the university probably has good, experienced people who could contribute to various OSS projects. But how can you trust any of them when the next guy might also be running an IRB exempt security study.


Okay, by that logic we should ban anything that comes out of Facebook


There are a lot of people who in fact do consider “research” that comes out of social media companies to be both ethically and, in many cases, procedurally tainted, and thus unusable and unpublishable as-is.


>It's one missed email or one bad timezone mismatch away from releasing the kraken.

I don't think code commits to the Linux kernel make it to live systems that fast?

I do agree with the sentiment, though. It's grossly irresponsible to do that without asking at least someone in the kernel developer's group. People don't dig being used as lab rats, and now the whole uni is blocked. Well, tough shit.


No, but they're very high-traffic and if the "this was a deliberately bad patch" message is sent off-list, only to the maintainer, things can go south pretty easily. Off-list messages are easy to miss on inboxes whose email is in MAINTAINERS and receive a lot of spam, you can email someone right as they're going on vacation and so on. That's one of the reasons why a lot of development happens on a mailing list.


> I hope we hear from the IRB in about a year stating exactly what happened. Real investigations of bad conduct should take time to complete correctly and I want them to do their job correctly so I'll give them that time

That'd be great, yup. And the linux kernel team should then strongly consider undoing the blanket ban, but not until this investigation occurs.

Interestingly, if all that happens, that _would_ be an intriguing data point in research on how FOSS teams deal with malicious intent, heh.


Personally, I think their data points should include "...and we had to explain ourselves to the FBI."


What about IEEE and the peer reviewers who didn't object to their publications?

I think the real problem is rooted more fundamentally in academia than it seems. And I think it has mostly to do with a lack of ethics!


I'm amazed this passed IRB. Consider the analogy:

We presented students with an education protocol designed to make a blind subset of them fail tests. Then we measured whether they failed the test to see if they independently learned the true meaning of the information.

Under any sane IRB you would need consent of the students. This is failure on so many levels.

(edit to fix typo)


I'm really not sure what the motive to lie is. You got caught with your hand in the cookie jar, time to explain what happened before they continue to treat you like a common criminal. Doing a pentest and refusing to state it was a pentest is mind boggling.

Has anyone from the "research" team commented and confirmed this was even them or a part of their research? It seems like the only defense is from people who did google-fu for a potentially outdated paper. At this point we can't even be sure this isn't a genuinely malicious actor using compromised credentials to introduce vulnerabilities.


It's also not a pen test. Pen testing is explicitly authorized, where you play the role as an attacker, with consent from your victim, in order to report security issues to your victim. This is just straight-up malicious behavior, where the "researchers" play the role as an attacker, without consent from their victim, for personal gain (in this case, publishing a paper).


Because of the nature of the research, an argument can be made that it was like a bug bounty (not defending them, just making the argument), but they should have come clean when the patch was merged and told the community about the research, or at least submitted the right patch.

Intentionally having bugs in kernel only you know about is very bad.


The primary difference being the organization being tested explicitly sets up a bug bounty with terms, as opposed to this.


I'll take People Who Don't Understand Consent for $400, Alex.


This is the rare HN joke that is not only hilarious, but also succinctly makes clear the core point being disagreed about.


This is a disturbingly frequent occurrence here.


Hearing how you phrased it reminds me of a study that showed how parachutes do not in fact save lives (the study was more to show the consequences of extrapolating data, so the result should not be taken seriously):

https://www.bmj.com/content/363/bmj.k5094


The original referenced paper is also very good: http://elucidation.free.fr/parachuteBMJ.pdf (can't find a better formatted link, sorry)

Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.

With the footnote: Contributors: GCSS had the original idea. JPP tried to talk him out of it. JPP did the first literature search but GCSS lost it. GCSS drafted the manuscript but JPP deleted all the best jokes. GCSS is the guarantor, and JPP says it serves him right


I liked this bit, from the footnotes: "Contributors: RWY had the original idea but was reluctant to say it out loud for years. In a moment of weakness, he shared it with MWY and BKN, both of whom immediately recognized this as the best idea RWY will ever have."


This is now my second favourite paper after the atlantic salmon in fmri


I still prefer the legal article examining the Fourth Amendment as it pertains to Jay-Z's 99 Problems.

http://pdf.textfiles.com/academics/lj56-2_mason_article.pdf


My favorite is "Possible Girls":

https://philpapers.org/archive/sinpg


I'm a big fan of Doug Zongker's excellent paper on chicken:

https://isotropic.org/papers/chicken.pdf


My gateway pub to this type of research was the Stork paper: https://pubmed.ncbi.nlm.nih.gov/14738551/


link?



Apparently there was a follow-up article:

"Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction" (2010, in Journal of Serendipitous and Unexpected Results)

We had the good fortune to have discussion of the study with comments from the author a few years back:

https://news.ycombinator.com/item?id=15598429


Well, part of the experiment is to see how deliberately malicious commits are handled. Banning is the result. They got what they wanted. Play stupid game. Win stupid pri[z]e.


Isn't trying to break the security of the "entire internet" some kind of crime (whatever the excuse is)?

People got swatted for less.


Interestingly enough, this is more a case of being a dick. That is not illegal. If an AG does not levy a charge, no crime has been committed.

This does not at all mean the behavior in question should be condoned. This fails the sniff test worse than thioacetone.


Well said.


Nit: The expression is "Play stupid games, win stupid prizes."

As heard frequently on ASP, along with "Room Temperature Challenge."


https://twitter.com/UMNComputerSci/status/138496371833373082...

"The University of Minnesota Department of Computer Science & Engineering takes this situation extremely seriously. We have immediately suspended this line of research."


But this raises an obvious question: Doesn't Linux need better protection against someone intentionally introducing security vulnerabilities? If we have learned anything from the SolarWinds hack, it is that if there is a way to introduce a vulnerability then someone will do it, sooner or later. And they won't publish a paper about it, so that shouldn't be the only way to detect it!


So, it turns out that sometimes programmers introduce bugs into software. Sometimes intentionally, but much more commonly accidentally.

If you've got a suggestion of a way to catch those bugs, please be more specific about it. Just telling people that they need "better protection" isn't really useful or actionable advice, or anything that they weren't already aware of.


That question has been obvious for quite some time. It is always possible to introduce subtle vulnerabilities. Research has tried for decades to come up with a solution, to no real avail.


Assassinating the researchers doesn't help.


The Linux team found the source of a security threat and have taken steps to prevent that security threat from continuing to attack them.

You can't break into someone's house through their back window, tell the owners what you did, and not expect to get arrested.

People don't scream "how are we going to know that people can break into houses through broken windows without these heros!?"


Does nobody here even understand what actually happened?

Really losing my faith in the accuracy of HN if such a huge thread is full of misinformation.

Basically (as I understand it, feel free to correct me) this is what happened:

Researcher emailed maintainers with flawed code, a maintainer LGTMed it, researcher told the maintainer that the code is buggy and not to merge it. The researchers confirmed that the code was not merged or committed anywhere. Paper gets published. Nothing of note happens.

Now, one of the researchers' grad students has submitted stuff to Linux of his own volition; he does not appear to be associated with the previous research. These commits are "obviously bad" according to Linux maintainers, who claim that the grad student is just continuing the "merge bad shit" research. These commits do not appear to be intentionally flawed but rather newbie mistakes (so claims the student), which is why he feels the Linux community is unwelcoming to newcomers.

Now how on earth did that warp to whatever everyone here is smoking?


You too have missed some of the details, but then so have many others.

The paper you’re referring to was from last year. Two of the three patches that they emailed in under fake author names were rejected; they wrote a paper about the experience. All that happened as a result was that everybody told them that it was a terrible idea, and they tweaked the wording of the paper a bit.

Now _this_ year, a different PhD student with the same advisor posted a really dubious patch which would introduce one or more use-after-free bugs. This patch was also rejected by the maintainers. Greg noticed that it looks like another attempt to do the same kind of experiment again. Nobody but them knows if that's true or not, but the student reacted by calling it “slander”, which was not very advisable.
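
(For anyone unfamiliar with the bug class, here is a minimal C sketch of how an innocuous-looking error-path "cleanup" can introduce a use-after-free. All names here are hypothetical; this is not the actual patch in question.)

    #include <stdlib.h>

    struct widget {
        int id;
        int started;
    };

    static int widget_start(struct widget *w)
    {
        return w->id < 0 ? -1 : 0;      /* stand-in for a start that can fail */
    }

    int widget_probe(struct widget *w)
    {
        int err = widget_start(w);
        if (err)
            free(w);                    /* the "helpful cleanup" added by the patch */

        w->started = (err == 0);        /* use-after-free on the error path */
        return err;
    }

    int main(void)
    {
        struct widget *w = malloc(sizeof(*w));
        if (!w)
            return 1;
        w->id = -1;                     /* force the failure path */
        return widget_probe(w);         /* triggers the use-after-free */
    }

The diff for something like this is two or three lines and reads like a resource-leak fix, which is what makes it hard to catch in review.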

The methodology in the original paper had one redeeming feature: after any patch was accepted, they would immediately email back withdrawing the patch. That doesn't appear to have happened in this case, but then this patch was rejected.

As a result of this, all future contributions from people affiliated with UMN are being rejected, and all past contributions (about 250) are being reviewed. Most of those are simply being backed out wholesale, unless someone speaks up for individual changes. A handful of those changes have already been vouched for.

That is pretty drastic, because there will certainly be acceptable patches that will need to be re–reviewed and possibly recommitted. On the other hand, if you discover a malicious actor, wouldn’t you want to investigate everything they’ve been involved with? On the gripping hand, there are such things as autoimmune diseases.

I guess we’ll have to see how it plays out.


> Now how on earth did that warp to whatever everyone here is smoking?

There's no other option when someone on the same research team later sends them 4 diffs, 3 of which have security holes, than to assume they're still doing research in the same area.

This is what happens when you do a social experiment without at least informing someone in the organization beforehand. There's no way to verify whether it was well intentioned diffs or not. So you must assume it's not.


It's not someone on the same team. It's someone working underneath one of the research team members: a grad student who likely had no knowledge of what his supervisor did.

https://lore.kernel.org/lkml/YIBBt6ypFtT+i994@pendragon.idea...

> These are two different projects. The one published at IEEE S&P 2021 has completely finished in November 2020. My student Aditya is working on a new project that is to find bugs introduced by bad patches. Please do not link these two projects together. I am sorry that his new patches are not correct either. He did not intentionally make the mistake.


There's a reply to the LKML from the researcher in question admitting that the new student is also working under him doing research. He claims it's not related, but it's not clear how much his word is worth now...

https://lore.kernel.org/lkml/YIBBt6ypFtT+i994@pendragon.idea...


I read everything I could find. In short, the researchers did not give the maintainers any option not to participate in the research.

The best analogy I could come up with so far is: someone makes you a compelling job offer, and when you're ready to sign they say, "yeah, that was a research project, sorry." Would you be OK with such behavior?

This is not OK, because you did not consent to wasting your time on someone else's research project.


They addressed this in the paper by making the change small (5 lines). Obviously time is still wasted, but the team felt that the research warranted it. This is up for debate and should not be used solely as a reason to crucify them.


> Really losing my faith in the accuracy of HN if such a huge thread is full of misinformation.

What is the point of dubbing yourself the arbiter of the moral high ground and spreading misinformation in the very next breath?

I am less puzzled by you spreading misinformation than I am by the fact you have this outrage at the very thing you are doing and don't hesitate to attack the character of people you disagree with.

> A number of these patches they submitted to the kernel were indeed successfully merged to the Linux kernel tree.

It turns out the researchers DID allow the bad faith commits to be merged and that is a big problem that is still being unwound.

https://fosspost.org/researchers-secretly-tried-to-add-vulne...


This seems like exactly what happened to me. I do still think the researcher should have gotten approval from maintainers or the foundation before going ahead, and the way he did the research was pretty shitty.

But you also forgot the part where Greg throws a hissy fit and decides to revert every commit from umn emails, including 3+ year old commits that legitimately fix security vulns. [0] Great job keeping mainline bug-free with your paranoid witch hunt!

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


That one has already been vouched for; I doubt it will actually be reverted: https://lore.kernel.org/lkml/b27a43bb-36bc-4b9-42de-c39a5b68...

If you know of any others that shouldn’t be reverted, you should email the list and point them out.


> Doesn't Linux need better protection against someone intentionally introducing security vulnerabilities?

Yes, it does.

Now, how do you do that other than having fallible people review things?


The problem with such experiment is that it can be a front. If you are a big entity, gov, or whatever, and you need to insert a vulnerability in the kernel, you can start a "research project". Then you try to inject it with this pretense, and if it fails, you can always say "my bad, it was for science".


I had a uni teacher who thought she was a genius because her research team peppered Wikipedia with fake information while timing how long it took to be removed.

"Earth is center of universe" took 1000 years to remove from books, I'm not sure what her point was :D


Joke's on you - this was really sociology research on anger response levels of open source communities when confronted with things that look like bad faith.


WaitASecond...are you saying that this was an experiment to find out how the maintainers would react to being experimented on? ;)


Setting aside the ethical aspects which others have covered pretty thoroughly, they may have violated 18 U.S.C. §1030(a)(5) or (b). This law is infamously broad and intent is easier to "prove" than most people think, but #notalawyer #notlegaladvice. Please don't misinterpret this as a suggestion that they should or should not be prosecuted.


So, the patch was about a possible double-free, presumably detected by a bad static analyzer. Couldn't this patch have been done in good faith? That's not at all impossible.
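
(To illustrate the ambiguity: whether a "possible double-free" report is real depends on an ownership convention a static analyzer usually can't see. A minimal sketch with hypothetical names, not the patch in question:)

    #include <stdlib.h>
    #include <string.h>

    /* Convention (documented only in a comment): on failure, consume_buf()
     * frees buf itself; on success, the caller keeps ownership. */
    static int consume_buf(char *buf)
    {
        if (buf[0] == '\0') {           /* stand-in for a real failure */
            free(buf);
            return -1;
        }
        return 0;
    }

    int send_message(const char *msg)
    {
        char *buf = malloc(strlen(msg) + 1);
        if (!buf)
            return -1;
        strcpy(buf, msg);

        if (consume_buf(buf) != 0)
            return -1;                  /* an analyzer may flag a leak here... */

        /* ...but "fixing" that by adding free(buf) before the return above
         * would create a real double-free, since consume_buf() already
         * frees buf on its failure path. */
        free(buf);
        return 0;
    }

    int main(void)
    {
        return send_message("hello");
    }

An analyzer (or a person skimming its output) that guesses the ownership convention wrong will produce exactly this kind of dubious "fix", which is why a single patch like that is hard to judge on its own.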

However, the prior activity of submitting bad-faith code is indeed pretty shameful.


I'm not a linux kernel maintainer but it seems like the maintainers all agree it's extremely unlikely a static analyzer could be so wrong in so many different ways.


Interestingly, the Sokal Squared guy got banned from future research for "unauthorized human experimentation".

It's a different university, but I wonder if these people will see the same result.


I think this hasn't gone far enough. The university has shown that it is willing to allow its members to act in bad faith for their own interests, under the auspices of acting ethically for scientific reasons. The university itself cannot be trusted _ever again_.

Blacklist the whole lot from everything, everywhere. Black-hole that place and nuke it from orbit.


Perhaps the Linux kernel team should actively support a Red Team to do this with a notification when it would be merged into the stable branch.


What would be the point? Of course people can miss things in code review. Yet the Linux developer base and user base has decided that generally an open submission policy has benefits that outweigh the risks.

Should every city park with a "no alcohol" policy conduct red teams on whether it's possible to smuggle alcohol in? Should police departments conduct red teams to see if people can get away with speeding?


Let's say that no one has ever seen someone speeding or drinking in the park. But then someone announces that they just did it, got away with it, and that the system isn't effective at catching folks who violate the policies. It might make sense to figure out how you could change the way the system works to stop people from violating the policy. One way to do that is to replicate the violation and see what measures could be introduced to decrease the likelihood. I would say it is very much akin to the companies that test whether your employees can be phished, or the pen testers who see if you can be hacked. Other important things that people want to protect have these teams to make them a harder target, and I think in the case of something as important as the Linux kernel it might pay dividends.


Not that I approve of the methods, but why would an IRB be involved in a computer security study? IRBs are for human subjects research. If we have to run everything that looks like any kind of research through IRBs, the Western gambit on technical advantage is going to run into some very hard times.


The subjects were the kernel team. They should have had consent to be part of this study. It's like red team testing, someone somewhere has to know about it and consent to it.


How IEEE accepted this paper is a mystery. From Twitter feeds, it seems at least one complaint was filed with IEEE; the paper was still accepted.


It wasn’t a real experiment, it was a legitimate attempt to insert bugs into the code base and this professor was going to go on speaking tours to self promote and talk about how easy it was to crack Linux. If it looks like grift it’s probably grift. This was never about science.


> The professor gets exactly what they want here, no?

I don't think they're a professor are they? Says they're a PhD student?


Yet another reason to absolutely despise the culture within academia. The US Federal government is subsidizing a collection of pathologically toxic institutions, and this is one of many results, along with HR departments increasingly mimicking the campus tribalism.


That's quite a leap of logic you have going on there. How is the US Federal government at fault for this?


Who do you think subsidizes and guarantees student loan debt that allowed academic institutions to raise their prices at 4 times the rate of inflation?


To be clear, the quoted text in your post is presumably your own words, not a quote?


> The thread then gets down to business and starts coordinating revert patches for everything committed by University of Minnesota email addresses.

What's preventing those bad actors from not using a UMN email address?


Nothing. However, if they can't claim ownership of the drama they have caused, it's not useful as publishable research, so it does nix these idiots from causing further drama while working at this institution. For now.


They don't need to claim ownership of the drama to write the paper, in fact, my first thought was that they would specifically try to avoid taking ownership and instead write a paper "discovering" the vulnerability(ies).


> What's preventing those bad actors from not using a UMN email address?

Technically none, but by banning UMN submissions, the kernel team have sent an unambiguous message that their original behaviour is not cool. UMN's name has also been dragged through the mud, as it should be.

Prof Lu exercised poor judgement by getting people to submit malicious patches. To use further subterfuge knowing that you've already been called out on it would be monumentally bad.

I don't know how far Greg has taken this issue up with the university, but I would expect that any reasonable university would give Lu a strong talking-to.


If they had submitted them from personal or anonymous email addresses, the patches might have come under more scrutiny.

They gain some trust by coming from university email addresses.


Exactly. Users contributing from the University addresses were borrowing against the reputation of the institution. That reputation is now destroyed and each individual contributor must build their own reputation to earn trust.


I can't understand this. Why would a university be more trustworthy? Foreign actors have been known to plant students for malicious purposes.

edit: Reference to combat the downvote: https://www.nbcnews.com/news/china/american-universities-are...


Nothing. I think the idea is 60% deterrence via collective punishment - "if we punish the whole university, people will be less likely to do this in future" - and 40% "we must do something, and this is something, therefore we must do it".


see https://lore.kernel.org/linux-nfs/YIAmy0zgrQW%2F44Hz@kroah.c...

If they just want to be jerks, yes. But they can't then use that type of "hiding" to get away with claiming it was done for a University research project as that's even more unethical than what they are doing now.


Were all of the commits from UMN emails GPG signed with countersigned/trusted keys?


How would you catch those?


Literally nothing. Instead of actual actions to improve the process it's only feel-good actions without any actual benefit to the kernel's security.


The point is to make it very obviously not worth it to conduct this kind of unethical research. I don't think UMN is going to be eager to have this kind of attention again. People could always submit bogus patches from random email addresses - this removes the ability to do it under the auspices of a university.


The ethical aspect is separate from the practical aspect that is kernel security.

Sabotage is a very real risk but we're discussing ethics of demonstrating the risk instead of potential remediation, that's dangerous and foolish.


> this removes the ability to do it under the auspices of a university

It really doesn't though. You can claim ownership of that email address in the published manuscript. For that matter, you could even publish the academic article under a pen name if you wanted to. But after seeing how the maintainers responded here, you'd better make sure that any "real" contributions you make aren't associated with the activity in any way.


I think you're getting heavily downvoted with your comments on this submission because you seem to be missing a critical sociological dimension of assumed trust. If you submit a patch from a real name email, you get an extra dimension of human trust and likewise an extra dimension of human repercussions if your actions are deemed to be malicious.

You're criticizing the process, but the truth is that without a real name email and an actual human being's "social credit" to be burned, there's no proof these researchers would have achieved the same findings. The more interesting question to me is if they had used anonymous emails, would they have achieved the same results? If so, there might be some substance to your contrarian views that the process itself is flawed. But as it stands, I'm not sure that's the case.

Why? Well, look at what happened. The maintainers found out and blanket banned bad actors. Going to be a little hard to reproduce that research now, isn't it? Arbitraging societal trust for research doesn't just bring ethical challenges but /practical/ ones involving US law and standards for academic research.


> actual human being's "social credit" to be burned

How are kernel maintainers competent in detecting a real person vs. a fake real person? Why is there any inherent trust?

It's clear the system is fallible, but at least now people are humbled enough to not instantly dismiss the risk.

> The maintainers found out and blanket banned bad actors.

With collateral damage.


the mail server is usually a pretty good indicator. I'm not an expert but you generally can't get a university email address without being enrolled.


Additionally some universities use a subdomain for student addresses, only making top level email addresses available to staff and a small selection of PhD students who needs it for their research.


So a malicious actor can just take a single online summer class. Bonus points if they manage to use a fake ID to enroll at the university.


Again, we're entering the territory of fraud and cybercrime, whether it's white collar crime or not. Nothing wrong with early detection and prevention against that. But as it pertains to malicious actors inside the country, the high risk of getting caught, prosecuted, and earning a semi-permanent blackball on your record that would come up in any subsequent reference check (and likely blackball you from further employment) is a deterrent. Which is exactly what these researchers are finding out the hard way.

Anonymity is de facto, not de jure. It's also a privilege for many collaboration networks and not a right. If abused, it will simply be removed.


> If abused, it will simply be removed.

Given what the Linux kernel runs these days, that would probably be advisable. (I'm a strong proponent of anonymity, but I also have a preference that my devices not be actively sabotaged.)

> we're entering the territory of fraud and cybercrime

So what? The fact that it's illegal doesn't nullify the threat. For that matter, it's not even a crime if a state agency is the perpetrator. These researchers drew attention to a huge (IMO) security issue. They should be thanked and the attack vector carefully examined.


I think you're focusing too much on the literal specifics of the "attack vector" and not enough on the surrounding context, or the real world utility/threat. You're not accurately putting yourself in the shoes of someone who would be using it and asking whether it has a sufficient cost benefit ratio to merit being used. Isn't that what you mean by "carefully examined?"

If you want to talk about a state level actor, I hate to break it to you, but they have significantly more powerful and stealthier 0-day exploits that are a lot easier to use than a tactic like this. Guess what's the last thing you want to have happen when you commit cybercrime? Do it in public where there's an immutable record that can be traced back to you, and cause a giant public hubbub, maybe? So, I can't imagine how someone could think there's anything noteworthy about this unless they were unaware of that.

That's somewhat the unintentional humor and irony of this situation -- all the researchers accomplished was proving that they were not just unethical but incompetent.


Upon further reflection, I think what I wrote regarding anonymity specifically was in error. I don't think removing it would serve much (if any) practical purpose from a security standpoint.

However, I don't agree that what happened was abuse or that it should be deterred. Responding in a hostile manner to an isolated demonstration of a vulnerability isn't constructive. People rightfully get angry when large companies try to bully security researchers.

You question if this vulnerability is worth worrying about due to the logistics of exploiting it in practice. Regardless of whether it's worth the effort to exploit I'd still rather it wasn't there (that goes for all vulnerabilities).

I think it would be much easier to exploit than you're letting on though. Modern attacks frequently chain many subtle bugs together. Being able to land a single, seemingly inconsequential bug in a key location could enable an otherwise impossible approach.
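
(A minimal sketch of the kind of "seemingly inconsequential" change I mean; the code and names are hypothetical, not taken from any real patch. A one-token edit to a length check becomes an attacker-controlled overflow once something else lets them supply a huge length:)

    #include <stddef.h>
    #include <string.h>

    #define PKT_MAX 256

    struct pkt {
        unsigned char data[PKT_MAX];
        size_t len;
    };

    int pkt_fill(struct pkt *p, const unsigned char *payload, size_t len)
    {
        /* The original check was `if (len > PKT_MAX)`. The "tidy-up" cast
         * below makes the comparison signed: on typical platforms a huge
         * len wraps to a negative int and sails right past the check. */
        if ((int)len > PKT_MAX)
            return -1;

        memcpy(p->data, payload, len);  /* buffer overflow when len is huge */
        p->len = len;
        return 0;
    }

    int main(void)
    {
        struct pkt p;
        unsigned char payload[4] = {1, 2, 3, 4};
        return pkt_fill(&p, payload, sizeof(payload));  /* benign call */
    }

Review can catch this, of course, but a diff that small, framed as a cleanup, is very easy to wave through.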

It seems unlikely to me that the immutable record you mention would ever lead back to a competent entity that didn't want to be found. There's no need for anything more than an ephemeral identity that successfully lands one or two patches and then disappears. The patches also seem unlikely to draw suspicion in the first place, even after the exploit itself becomes known.

In fact it occurs to me that a skilled and amoral developer could likely land multiple patches with strategic (and very subtle) bugs from different identities. These could then be "discovered" and sold on the black market. I see no convincing reason to discount the possibility of this already being a common occurrence.

The only sensible response I can think of is a focus on static analysis coupled with CTF challenges to beat those analysis methods.


But the attack vector appears to be "people needing to publish a paper". Like.. what? They should be thanked for being a threat?


You keep posting all over this discussion about how the Linux maintainers are making a poor choice and shooting the messenger.

What would you like them to do instead or in addition to this?


Indeed, the situation is bad; nothing much can be done. As long as unintentional vulnerabilities can slip in, they are defenseless against intentional ones, and fixing even just the former is already a very big deal.


> What would you like them to do instead or in addition to this?

Update the processes and tools to try and catch such malicious infiltrators. Lynching researchers isn't fixing the actual issue right now.


I saw at least one developer lamenting that they may have to bring up mechanisms for treating every committer as malicious by default at the next kernel summit, so it's quite possible that's going to take place.


> lamenting that they may have to bring up mechanisms for treating every committer as malicious by default

I think "lamenting" is very much the wrong attitude here. Given all the things that make use of Linux today that seems like the only sane approach to me.


If people insisted on a kernel like that, that's what they'd fund and use. But apparently that's not their priority.


> Update the processes and tools to try and catch such malicious infiltrators.

How?


That's what I'm saying kernel maintainers should figure out.


Step 1: When a malicious infiltrator is identified, mount their head on a spike as a warning to others.


Well, it seems unlikely that any other universities will fund or support copycat studies. And I don't mean in the top-down institutional sense; I mean in the self-selecting sense. Students will not see messing with the Linux kernel as a viable research opportunity and will not do it. That doesn't seem to be 'feel-good without any actual benefit to the kernel's security'. Sounds like it could function as an effective deterrent.


Isn't this reaction a bit like the emperor banishing anyone who tells him that his new clothes are fake? Are the maintainers upset that someone showed how easy it is to subvert kernel security?


It’s more like the emperor banning a group of people who put the citizens in danger just so they could show that it could be done. The researchers did something unethical and acted in a self-serving manner. It’s no surprise that someone would get kicked out of a community after seriously breaking the trust of that community.


More like the emperor banishing anyone who tries to sell him fake clothes to prove that the emperor will buy fake clothes.


The middle ground would be if the Emperor jailed the tailors of the New Clothes after he had shown off the clothes at the Parade, in front of the whole city.


Yeah, maybe it's fragile security. Fortunately, the problem has been found, and the 'attackers' aren't the real enemy.


Later down thread from Greg K-H:

> Because of this, I will now have to ban all future contributions from your University.

Understandable from gkh, but I feel sorry for any unrelated research happening at University of Minnesota.

EDIT: Searching through the source code[1] reveals contributions to the kernel from umn.edu emails in the form of an AppleTalk driver and support for the kernel on PowerPC architectures.

In the commit traffic[2], I think all patches have come from people currently being advised by Kangjie Liu[3] or Liu himself dating back to Dec 2018. In 2018, Wenwen Wang was submitting patches; during this time he was a postdoc at UMN and co-authored a paper with Liu[4].

Prior to 2018, commits involving UMN folks appeared in 2014, 2013, and 2008. None of these people appear to be associated with Liu in any significant way.

[1]: https://github.com/torvalds/linux/search?q=%22umn.edu%22

[2]: https://github.com/torvalds/linux/search?q=%22umn.edu%22&typ...

[3]: https://www-users.cs.umn.edu/~kjlu/

[4]: http://cobweb.cs.uga.edu/~wenwen/


> I think all patches have come from people currently being advised by Kangjie Liu[3] or Liu himself dating back to Dec 2018

New plan: Show up at Liu's house with a lock picking kit while he's away at work, pick the front door and open it, but don't enter. Send him a photo, "hey, just testing, bro! Legitimate security research!"


If they wanted to do security research, they could have done so in the form of asking the reviewers to help; send them a patch and ask 'Is this something you would accept?', instead of intentionally sending malicious commits and causing static on the commit tree and mailing lists.


Even better

Notify someone up the chain that you want to submit malicious patches, and ask them if they want to collaborate.

If your patches make it through, treat it as though they essentially just got red teamed, everyone who reviewed it and let it slip gets to have a nervous laugh and the commit gets rejected, everyone having learned something.


Exactly what I was thinking. This should have been set up like a normal pen test, where only seniors very high up the chain are in on it.


I wonder if informing anyone of the experiment would be frowned upon as it might affect the outcome? However, this research doesn’t appear to be fastidious about scientific integrity so maybe you are right.


Wouldn't that draw more attention to the research patches, compared to a "normal" lkml patch? If you (as a maintainer) expected the patch to be malicious, wouldn't you be extra careful in reviewing it?


You probably can learn more and faster about new drugs by testing them in humans rather than rats. However, science is not above ethics. That is a lesson history has taught us in the most unpleasant of ways.


You don't have to say you are studying the security implications; you could say you are studying something else, like turnaround time for patches, or level of critique, or any number of things.


Yes, you do. Under no circumstances is it ethical to do penetration tests without approval.


In the thread you're in, the assumption is that the patches are never actually submitted.


Did they keep track of, and submit, a list of additions to revert after they managed to get them added?

From the looks of it, they didn't, even when it was heading out to stable releases?

That's just using the project with no interest in avoiding the issues it causes.


Yeah, so an analogy would be to put human feces into food and then see if the waiter is going to actually give it to the dining customer. And then, if they do, just put a checkmark on a piece of paper and leave without warning anyone that they're about to eat poop.


This is funny, but not at all a good analogy. There's obviously not remotely as much public interest or value in testing the security of this professor's private home to justify invading his privacy for the public interest. On the other hand, if he kept dangerous things at home (say, BSL-4 material), then his house would need 24/7 security and you'd probably be able to justify testing it regularly for the public's sake. So the argument here comes down to which extreme you believe the Linux kernel is closer to.


> This is funny, but not at all a good analogy

Yeah, for one thing, to be a good analogy, rather than lockpicking without entering when he’s not home and leaving a note, you’d need to be an actual service worker for a trusted home service business and use that trust to enter when he is home, conduct sabotage, and not say anything until the sabotage is detected and traced back to you and cited in his cancelling the contract with the firm for which you work, and then cite the “research” rationale.

Of course, if you did that you would be both unemployed and facing criminal charges in short order.


Your strawman would be more of a steelman if you actually addressed the points I was making.


Everyone has been saying "This affects software that runs on billions of machines and could cause untold amounts of damage and even loss of human life! What were the researchers thinking?!" and I guess a follow-up thought, which is that "Maintainers for software that runs on billions of machines, where bugs could cause untold amounts of damage and even loss of human life didn't have a robust enough system to prevent this?" never occurs to anyone. I don't understand why.


It's occurred to absolutely everyone. What doesn't seem to have occurred to many people is that there is no such thing as a review process robust enough to prevent malicious contributions. Have you ever done code review for code written by mediocre developers? It's impossible to find all of the bugs without spending 10x more time than it would take to just rewrite it from scratch yourself. The only real alternative is to not be open source at all and only allow contributions from people who have passed much more stringent qualifications.

There is no such thing as a process that can compensate for trust mechanisms. Or if you want to view it that way, ignoring the university's protests and blanket-banning all contributions made by anybody there with no further investigation is part of the process.


People are well aware of theoretical risk of bad commits by malicious actors. They are justifiably extremely upset that someone is intentionally changing this from a theoretical attack to a real life issue.


I'm not confused about why people are upset at the researchers that introduced bugs and did it irresponsibly. I'm confused about why people aren't upset that an organization managing critical infrastructure is so under prepared at dealing with risks posed by rank amateurs, which they should've known about and had a mechanism of dealing with for years.

What this means is that anyone who could hijack a university email account, or could be a student at a state university for a semester or so, or work at a FAANG corporation could pretty much insert backdoors without a lot of scrutiny in a way that no one detects, because there aren't robust safeguards in place to actually verify that commits don't do anything sneaky beyond trusting that everyone is acting in good faith because of how they act in a code review process. I have trouble understanding the thought process that ends up basically ignoring the maintainers' duty to make sure that the code being committed doesn't endanger security or lives because they assumed that everything was 'cool'. The security posture in this critical infrastructure is deficient and no one wants to actually address it.


> I have trouble understanding the thought process that ends up basically ignoring the maintainers' duty to make sure that the code being committed doesn't endanger security or lives because they assumed that everything was 'cool'. The security posture in this critical infrastructure is deficient and no one wants to actually address it.

They're banning a group known to be bad actors. And proactively tearing out the history of commits related to those known actors, before reviewing each commit.

That seems like the kernel team are taking a proactive stance on the security side of this. The LKML thread also talks about more stringent requirements that they're going to bring in, which was already going to be brought up at the next kernel conference.

None of these things seem like ignoring any of the security issues.


After absorbing what the researchers did, I believe it's time to skip right over the second part and just concentrate on why so many critical systems are run on unforked Linux.


I remember a true story (forget by whom) where the narrator set up a simple website for some local community activity. A stranger hacked and defaced the website, admitted to doing so without revealing his identity. His position in contacting the author of the website was, "I did you a favor (by revealing how vulnerable it was)." The person telling the story reacted, "yes, but... you were the threat you're warning me of." It didn't result in the author recreating the site on a more secure platform, it only resulted in him deciding it was not worth the trouble to provide this free service any longer.


It wasn't intended to be serious. But on the other hand, he has now quite openly and publicly declared himself to be part of a group of people who mess around with security related things as a "test".

He shouldn't be surprised if it has some unexpected consequences to his own personal security, like some unknown third parties porting away his phone number(s) as a social engineering test, pen testing his office, or similar.


There's also not nearly as much harm as there is in wasting maintainer time and risking getting faulty patches merged.


Put a flaming bag of shit on the doorstep, ring the doorbell, and write a paper about the methods Liu uses to extinguish it?


I wouldn't be surprised if the good, conscientious members of the UMN community showed up at his office (or home) door to explain, in vivid detail, the consequences of doing unethical research.


The actual equivalent would be to steal his computer, wait a couple days to see his reaction, get a paper published, then offer to return the computer.


> Understandable from gkh, but I feel sorry for any unrelated research happening at University of Minnesota.

That's the university's problem to fix.


If this experience doesn't change the behavior of U of M's IRB, and inform the behavior of every other IRB, then nothing at all is learned from this experience.

Unless both the professors and the leadership of the IRB are having an uncomfortable lecture in the chancellor's office, nothing at all changes.


What's the recourse for them though? Just beg to have the decision reversed?


The main thing you want here is a demonstration that they realize they fucked up, realize the magnitude of the fuckup, and have done something reasonable to lower the risk of it happening again, hopefully very low.

Given that the professor appears to be a frequent flyer with this, the kernel folks banning him and the university prohibiting him from using Uni resources for anything kernel related seems reasonable and gets the point across.


Expel the students and fire the professor. That will demonstrate their commitment to high ethical standards.


Or fire the IRB people who approved it, and the professor(s) who should've known better. Expelling students seems a bit unfair IMO.


The students do need a bit of punishment - they are adults who chose to act this way. In this context though, switching their advisor and requiring a different research track would be sufficient - that's a lot of work down the drain and a hard lesson. I agree that expulsion would be unfair - (assuming good faith scholarship) the advisor/student relationship is set up so that the students can learn to research effectively (which includes ethically) with guidance from a trusted researcher at a trusted institution. If the professor suggests or OKs a plan, it is reasonable for the students to believe it is an acceptable course of action.


If the student blatantly lied about why and how he made those commits then that’s grounds for expulsion though.


1. What the student code at umn says and what i think the student deserves are vastly different things.

2. Something being grounds for expulsion and what a reasonable response would be are vastly different things.

3. The rules saying "up to and including" (aka grounds for) and the full range of punishment are not the same - the max of a range is not the entirety of the range.

4. So what?


The student doubled down on his unethical behavior by writing that his victim was “making wild accusations that are bordering on slander.”

You can’t make a silk purse out of a sow’s ear.


You are mixing up two students. The one who complained about "bordering on slander" had nothing to do with the research paper at issue, other than having the same advisor as the author.


I agree that in this case the driver of the behavior seems to be the professor, but graduate researchers are informed about ethical research, and there are many ways students alone can cause harm through research, potentially beyond the supervision of the university and even the professor. It's usually much more negligible than this, but everyone has a responsibility to abide by ethical norms.


Dunking on individual maintainers for academic bragging rights seems pretty unfair, too.


Expelling the students seems overkill; they have advisors who should be fired for allowing it to happen.


The comment about the IRB (institutional review board) is clear, I think.


The suggestion about the IRB was made by a third party. Look at the follow up comment from kernel developer Leon Romanovsky.

> ... we don't need to do all the above and waste our time to fill some bureaucratic forms with unclear timelines and results. Our solution to ignore all @umn.edu contributions is much more reliable to us who are suffering from these researchers.


To follow up on my comment here, I think Greg KH's later responses were more reasonable.

> ... we have the ability to easily go back and rip the changes out and we can slowly add them back if they are actually something we want to do.

> I will be working with some other kernel developers to determine if any of these reverts were actually valid changes, were actually valid, and if so, will resubmit them properly later. ... future submissions from anyone with a umn.edu address should be by default-rejected unless otherwise determined to actually be a valid fix


Probably that, combined with "we informed the professor of {serious consequences} should this happen again".


Well, yes? Seems like recourse in their case would be to make a convincing plea or plan to rectify the problem that satisfies decision makers in the linux project.


This is not responsible research. This is similar to initiating fluid mechanics experiments on the wings of a Lufthansa A320 in flight to Frankfurt with a load of Austrians.

There are a lot of people to feel bad for, but none is at the University of Minnesota. Think of the Austrians.


No, it's totally okay to feel sorry for good, conscientious researchers and students at the University of Minnesota who have been working on the kernel in good faith. It's sad that the actions of irresponsible researchers and associated review boards affect people who had nothing to do with professor Lu's research.

It's not wrong for the kernel community to decide to blanket ban contributions from the university. It obviously makes sense to ban contributions from institutions which are known to send intentionally buggy commits disguised as fixes. That doesn't mean you can't feel bad for the innocent students and professors.


> good, conscientious researchers and students at the University of Minnesota who have been working on the kernel in good faith

All you have to do is look at the reverted patches to see that these are either mythical or at least few and far in between.


To be clear, Linux kernel patches from good UMN researchers and students are rare. We have plenty of great people at the University of Minnesota, they just don't work on the Linux kernel.

It's justifiable and natural for our name to be dragged in the mud here, but as a run of the mill software engineer who graduated from UMN, I hope our reputation isn't soured too much.


Sure, I hope it was clear from my original comment I only question whether the UMN contributors to the kernel are acting in good faith. I have identified other questionable patches personally, out of curiosity. Naturally I tend to attribute them to ignorance rather than malice... except, that bad actors intentionally pushing bad patches to an OSS project will inevitably rely on people assuming ignorance rather than malice. This has been well-understood for decades.


Didn't they just blanket revert all patches from University of Minnesota?

https://news.ycombinator.com/item?id=26889550


Someone in this HN thread found kernel patches (at a guess, not among those now reverted?) from UMN dating back to 2008, -09, and -13 (IIRC). Probably by totally unrelated people.

So at least definitely not "totally mythical".


> This is similar to initiating fluid mechanics experiments on the wings of a Lufthansa A320 in flight to Frankfurt with a load of Austrians.

This analogy is invalid, because:

1. The experiment is not on live, deployed, versions of the kernel.

2. There are mechanisms in place for preventing actual merging of the faulty patches.

3. Even if a patch is merged by mistake, it can be easily backed out or replaced with another patch, and the updates pushed anywhere relevant.

All of the above is not true for the in-flight airline.

However - I'm not claiming the experiment was not ethically faulty. Certainly, the U Minnesota IRB needs to issue a report and an explanation on its involvement in this matter.


> 1. The experiment is not on live, deployed, versions of the kernel.

The patches were merged, and the email thread discusses that the patches made it to the stable tree. Some (many?) distributions of Linux track and run the stable tree.

> 2. There are mechanisms in place for preventing actual merging of the faulty patches.

Those mechanisms failed.

> 3. Even if a patch is merged by mistake, it can be easily backed out or replaced with another patch, and the updates pushed anywhere relevant.

Arguably. But I think this is a weak argument.


> The patches were merged

The approved methodology - described in the linked paper - was that when a patch with an introduced vulnerability is accepted by its reviewer, the submitter points out the vulnerability and sends a vulnerability-free version instead. That's what the paper describes.

If the researchers did something other than what the methodology called for (and what the IRB approved), then perhaps the analogy may be valid.
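
For a sense of what "a patch with an introduced vulnerability" can mean in practice, here is a minimal, purely hypothetical C sketch - the names (open_session, init_backend) are invented and this is not taken from any actual submission. A one-line "cleanup" looks correct on the common path but leaves callers with a dangling pointer on a rarely exercised error path:

  /* Purely hypothetical illustration of the style of bug described in the
   * paper -- not code from any real patch. */
  #include <stdio.h>
  #include <stdlib.h>

  struct session {
      int id;
  };

  /* Stand-in backend init; in this demo it always succeeds. */
  static int init_backend(struct session *s)
  {
      (void)s;
      return 0;
  }

  static struct session *open_session(int id)
  {
      struct session *s = malloc(sizeof(*s));
      if (!s)
          return NULL;
      s->id = id;

      if (init_backend(s) != 0) {
          free(s);   /* the "missing cleanup" the patch adds... */
          return s;  /* ...while still returning the freed pointer, so the
                      * caller holds a dangling reference -- but only on the
                      * rarely exercised init-failure path */
      }
      return s;
  }

  int main(void)
  {
      struct session *s = open_session(42);
      if (!s)
          return 1;
      printf("opened session %d\n", s->id);
      free(s);
      return 0;
  }

The point of the sketch is that a reviewer reading only the diff sees a plausible fix; the damage is only visible when you consider the error path and the caller together.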


There are literally mails in that list pointing out that commits made it to stable. At least read the damn thing before repeating the professor's/student's nonsense lies.


The mails in the pointed-to threads indicate that commits by those UMN people made it to stable; they do not say that commits which introduce bugs made it to stable - they follow a decision/suggestion to back out all patches by these people to the kernel.

There is a further indication that the patches being reverted are mostly (or entirely) not vulnerability-introducing, in a message by "Steve" which says:

> The one patch from Greg's reverts that affects my code was actually a legitimate fix

So, again, while it is still theoretically possible that vulnerabilities were introduced into stable, that is not known to be the case.


You seem to think this experiment was performed on the Linux kernel itself. It was not. This research was performed on human beings.

It's irrelevant whether any bugs were ultimately introduced into the kernel. The fact is the researchers deliberately abused the trust of other human beings in order to experiment on them. A ban on further contributions is a very light punishment for such behavior.


You seem to think I condone the experiment because I described an analogy as invalid.


How would you feel about researchers delivering known-faulty-under-some-conditions AoA sensors to Boeing, just to see if Boeing's QA process would catch those errors before final assembly?


I would feel that you are switching analogies...

This analogy is pretty valid, the in-flight-experiment analogy is invalid.


Is it? Linux underpins many medical devices - it too could lead to life and death consequences


You are ignoring the problematic experiment's methodology, which explicitly prevents the problematic patches from making it in by drawing attention to the vulnerability after they are accepted.

Now, if this protocol were not followed, that's a different story, but we do not know this to be the case.


So is Greg K-H a liar when he said some were accepted into stable?


He didn't say vulnerabilities were accepted into stable, he said patches by these people were accepted.


I would feel that I'm wasting time that I could be using to find out why Boeing makes this possible (or any other corporate or government body with a critical system).


It's important to note that they used temporary emails for the patches in this research. It's detailed in the paper.

The main problem is that they have (so far) refused to explain in detail how the patches were reviewed. I have not gotten any links to any lkml post, even after Kangjie Lu personally emailed me to address any concerns.


Seems like a bit of a strong response. Universities are large places with lots of professors and people with different ideas, opinions, views, and they don't work in concert, quite the opposite. They're not some corporation with some unified goal or incentives.

I like that. That's what makes universities interesting to me.

I don't like the standard here of penalizing or lumping everyone there together, regardless of whether they contributed in the past, contribute now, will in the future, or not at all.


The goal is not penalizing or lumping everyone together. The goal is to have the issue fixed in the most effective manner. It's not the Linux team's responsibility to allow contributions from some specific university, it's the university's. This measure enforces that responsibility. If they want access, they should rectify.


I would then say that the goal and the choice aren't aligned because "penalizing or lumping everyone together" is exactly the choice made.


They would presumably reconsider blanket ban, if the university says they will prohibit these specific researchers from committing to Linux.


If a company that sold static analysis products did this as part of a marketing campaign, would you likewise have so many reservations about blacklisting contributions from that company, or would you still be insisting on picking out individual employees?

It's pretty obvious what would happen if a firm tried this: they'd be taken to court and probably imprisoned, as this is a clear violation of the law (which is written pretty broadly, to capture any attempted interference with the correct operation of a computer program).


the university can easily resolve the issue by firing the professors


do you know that would resolve the issue? this just seems like idle, retributive speculation.


The University can presumably not in fact do this.


Tenure does not generally prohibit for-cause termination, and there is a whole pile of cause here.


The people who are affected by the rule or discouraged by it cannot do so.


One way to get everyone in a university on the same page is to punish them all for the bad actions of a few. It appears like this won't work here because nobody else is contributing and so they won't notice.


It's not the number of people directly affected that will matter, it's the reputational problems of "umn.edu's CS department got the entire UMN system banned from submitting to the Linux kernel and probably some other open source projects."


And anyone without much power to effect change is SOL.

I know the kernel doesn't need anyone's contributions anyhow, but as a matter of policy this seems like a bad one.


This was approved by the university ethics board. If trust in the university rests in part on its students' actions having to pass an ethics bar, it makes sense to remove that trust until the ethics committee has shown that it has improved.


The ethics board is most likely not at fault here. They were simply lied to, if we take Lu's paper seriously. I would just expel the 3 malicious actors here: the 2 students and the professor who approved it. I don't see any fault in Wang yet.

The damage is not that big. Only 4 committers to Linux from there in the last decade: the 2 students with malicious backdoors, the professor with bad ethics rather than bad code, and the 4th, an assistant professor, who did good patches and has already left.


So the pen-test on the ethics board showed that they had not institutionalized proper safeguards regarding malicious actors? (And not even a paper on this… ;-) )


I'd concur: the university is the wrong unit-of-ban.

For example: what happens when the students graduate - does the ban follow them to any potential employers? Or if the professor leaves for another university to continue this research?

Does the ban stay with UMN, even after everyone involved left? Or does it follow the researcher(s) to a new university, even if the new employer had no responsibility for them?


On the other hand: What obligation do the Linux kernel maintainers have to allow UMN staff and students to contribute to their project?


> Does the ban stay with UMN, even after everyone involved left?

It stays with the university until the university provides a good reason to believe they should not be particularly untrusted.


If they use a different email but someone knows they work at the university?

It's a chain that gets really unpleasant.


It's the university that allowed the research to take place. It's the university's responsibility to fix their own organisation's issues. The kernel has enough on their plate than to have to figure out who at the university is trustworthy and who isn't considering their IRB is clearly flying blind.


that is completely irrelevant. they are acting under the university, and their "research" is backed by the university and approved by the university's department.

if the university has a problem, then they should first look into managing this issue at their end, or force people to use personal email addresses for such purposes


I don't feel sorry at all. If you want to contribute from there, show that the rogue professor and their students have been prevented from doing further malicious contributions (that is probably at least: from doing any contribution at all during a quite long period -- and that is fair against repeated infractions), and I'm sure that you will be able to contribute back again under the University umbrella.

If you don't manage to reach that goal, too bad, but you can contribute on a personal capacity, and/or go work elsewhere.


How could a single student or professor possibly achieve that? Under the banner of "academic freedom" it is very hard to get someone fired because you don't like their research.

It sounds like you're making impossible demands of unrelated people, while doing nothing to solve the actual problem because the perpetrators now know to just create throwaway emails when submitting patches.


It definitely would suck to be someone at UMN doing legitimate work, but I don't think it's reasonable to ask maintainers to also do a background check on who the contributor is and who they're advised by.


I find it hard to believe this research passed IRB.


It didn't. Rather, it didn't until after it had been conducted.

https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....


How thorough is IRB review? My gut feeling is that these are not necessarily the most conscientious or informed bodies. Add into the mix a proposal that conceals the true nature of what's happening.

(All of this ASSUMING that the intent was as described in the thread.)


It varies a lot. A professor I worked for was previously at a large company in an R&D setting. He dealt with 15-20 different IRB's through various research partnerships, and noted Iowa State (our university) as having the most stringent requirements he had encountered. In other universities, it was pretty simple to submit and get approval without notable changes to the research plan. If they were unsure on something, they would ask a lot of questions.

I worked on a number of studies through undergrad and grad school, mostly involving having people test software. The work to get a study approved was easily 20 hours for a simple "we want to see how well people perform tasks in the custom software we developed. They'll come to the university and use our computer to avoid security concerns about software security bugs". You needed a script of everything you would say, every question you would ask, how the data would be collected, analyzed, and stored securely. Data retention and destruction policies had to be noted. The key linking a person's name and their participant ID had to be stored separately. How would you recruit participants, the exact poster or email you intend to send out. The reading level of the instructions and the aptitude of audience were considered (so academic mumbo jumbo didn't confuse participants).

If you check the box that you'll be deceiving participants, there was another entire section to fill out detailing how they'd be deceived, why it was needed for the study, etc. Because of past unethical experiments in the academic world, there is a lot of scrutiny and you typically have to reveal the deception in a debriefing after the completion of the study.

Once a study was accepted (in practice, a multiple month process), you could make modifications with an order of magnitude less effort. Adding questions that don't involve personal information of the participant is a quick form and an approval some number of days later.

If you remotely thought you'd need IRB approval, you started a conversation with the office and filled out some preliminary paperwork. If it didn't require approval, you'd get documentation stating such. This protects the participants, university, and professor from issues.

--

They took it really seriously. I'm familiar with one study where participants would operate a robot outside. An IRB committee member asked what would happen if a bee stung the participant? If I remember right, the resolution was an epipen and someone trained in how to use it had to be present during the session.


They are probably more familiar with medical research and the types of things that go wrong there. Bad ethics in medical situations is well understood, including psychology. However it is hard to figure out how a mechanical engineer could violate ethics.


I had to do human subjects research training in grad school, just to be able to handle test score data for a math education project. I literally never saw an actual student the whole time I was working on it.


To be fair, the consequences of unethical research in medicine or psychology can be much more dire than what happened here.


Perhaps more dire than what actually happened, but, can you imagine the consequences if any of those malicious patches had actually stuck around in the kernel? Keep in mind when you think about this that Android, which has an 87% market share globally in smartphones[0] runs on top of a modified Linux kernel.

--

[0]: https://www.statista.com/statistics/272307/market-share-fore...


seems extreme. one unethical researcher blocks work for others just because they happen to work at the same employer? they might not even know the author of the paper...


The university reviewed the "study" and said it was acceptable. From the email chain, it looks like they've already complained to the university multiple times, and have apparently been ignored. Banning anyone at the university from contributing seems like the only way to handle it, since they can't trust the institution to ensure its students aren't doing unethical experiments.


Plus, it sets a precedent: if your university condones this kind of "research", you will have to face the consequences too...


Well, the decision can always be reversed, but at the outset I would say banning the entire university and publicly naming them is a good start. I don't think this kind of "research" is ethical, and the issue needs to be raised. Banning them is a good opener to engage the institution in a dialogue.


It seems fair enough to me. They were curious to see what happens, this happens. Giving them a free pass because they're a university would be artificially skewing the results of the research.

Low trust and negative trust should be fairly obvious costs to messing with a trust model - you could easily argue this is working as intended.


They reported unethical behavior to the university and the university failed to prevent it from happening again.


It is an extreme response to an extreme problem. If the other researchers don't like the situation? They are free to raise the problem to the university and have the university clean up the mess they obviously have.


Well, shit happens. Imagine doctors working in organ transplants, and one of them damages people's trust by selling access to organs to rich patients. Of course that damages the field for everyone. And to deal with such issues, doctors have an ethics code, and in many countries associations which will sanction bad eggs. Perhaps scientists need something like that, too?


The University approved this research. How can one trust anything from that university now?


It approved the research, which I don't find objectionable.

The objectionable part is that the group allegedly continued after having been told to stop by the kernel developers.


Why is that objectionable, do actual bad actors typically stop trying after being told to do so?


Which just demonstrates that these guys are actual bad actors, so blocking everyone at the university seems like a reasonable attempt at stopping them.


It's objectionable because of severe consequences beyond just annoying people. If there was a malicious purpose, not just research, you could bring criminal charges against them.

In typical grey hat research you get pre-approval from target company leadership (engineers don't know) to avoid charges once discovered.


That's not really how it works. Nobody's out there 'approving' research (well, not seemingly small projects like this), especially at the university level. Professors (all the way down to PhD students!) are usually left to do what they like, unless there are specific ethical concerns that should be put before a review panel. I suppose you could argue that this work should have been brought before the ethics committee, but it probably wasn't, and in CS there isn't a stringent process like there is in e.g. psychology or biology.


Wrong!

If you read the research paper linked in the lkml post, the authors at UMN state that they submitted their research plan to the University of Minnesota Institutional Review Board and received a human-subjects exemption waiver.


A human subjects determination isn’t really an approval, just a note that the research isn’t HSR, which it sounds like this wasn’t.


It absolutely was human subject research. Try for yourself! Here's the NIH's rubric:

https://grants.nih.gov/policy/humansubjects/hs-decision.htm

for q1, the study collects data like observations of behavior, so we must answer yes.

for q2, none of the exemptions apply - it's not an educational setting, they're not sending a survey, it's not an observation of the public - they're interacting, and it's clear that these interactions are not benign - they have clear impact on the community. None of these exemptions apply.

Based on this flow, it's clear the study involves "human research".


Well it was, but not the type of thing that HSR would normally worry about.


That waiver was issued incorrectly. See my post in this same thread on why - essentially, if you do the NIH test, it's HSR.


The emails suggest this work has been reported in the past. A review by the ethics committee after the fact seems appropriate, and it should’ve stopped a repeat offence.


Forking the kernel should be sufficient for research.


Not if the research involves the review process of open source projects.


Apparently they aren't doing human experiments, it's only processes and such. So they can easily emulate the processes in-house too!


This research is specifically about getting patches accepted into open source projects, so that wouldn't work at all.


For other research happening in the university. This particular research is trivial anyway, see https://news.ycombinator.com/item?id=26888417


Not a big loss: these professors likely hate open source. [edit: they do not. See child comments.]

They are conducting research to demonstrate that it is easy to introduce bugs in open source...

(whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)

[removed this ranting that does not apply since they are contributing a lot to the kernel in good ways too]


> Not a big loss: these professors likely hate open source.

> They are conducting research to demonstrate that it is easy to introduce bugs in open source...

That's a very dangerous thought pattern. "They try to find flaws in a thing I find precious, therefore they must hate that thing." No, they may just as well be trying to identify flaws to make them visible and therefore easier to fix. Sunlight being the best disinfectant, and all that.

(Conversely, people trying to destroy open source would not publicly identify themselves as researchers and reveal what they're doing.)

> whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards

How do we know that? We know things by regularly testing them. That's literally what this research is - checking how likely it is that intentional vulnerabilities are caught during review process.


Ascribing a salutary motive to sabotage is just as dangerous as assuming a pernicious motive. Suggesting that people "would" likely follow one course of action or another is also dangerous: it is the oldest form of sophistry, the eikos argument of Corax and Tisias. After all, if publishing research rules out pernicious motives, academia suddenly becomes the best possible cover for espionage and state-sanctioned sabotage designed to undermine security.

The important thing is not to hunt for motives but to identify and quarantine the saboteurs to prevent further sabotage. Complaining to the University's research ethics board might help, because, regardless of intent, sabotage is still sabotage, and that is unethical.


The difference between:

"Dear GK-H: I would like to have my students test the security of the kernel development process. Here is my first stab at a protocol, can we work on this?"

and

"We're going to see if we can introduce bugs into the Linux kernel, and probably tell them afterwards"

is the difference between white-hat and black-hat.


It should probably be a private email to Linus Torvalds (or someone near him in the chain of patch acceptance); that way an easy-to-scan-for key can be introduced in all patches. Then the top levels can see what actually made it through review, and in turn figure out who isn't reviewing as well as they should.


Yes, someone like Greg K-H. I'm not up to date on the details, but he should be one of the 5 most important people caring for the kernel tree; this would've been the exact person to seek approval from.


Auditability is at the core of its advantage over closed development.

Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.

To address your first criticism: benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and dependent on enthusiasm. Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.


> Auditability is at the core of its advantage over closed development.

That's an assertion. A hypothesis is verified through observing the real world. You can do that in many ways, giving you different confidence levels in validity of the hypothesis. Research such as the one we're discussing here is one of the ways to produce evidence for or against this hypothesis.

> Submitting bugs is not really testing auditability, which happens over a longer timeframe and involves an order of magnitude more eyeballs.

It is if there's a review process. Auditability itself is really most interesting before a patch is accepted. Sure, it's nice if vulnerabilities are found eventually, but the longer that takes, the more likely it is they were already exploited. In case of an intentionally bad patch in particular, the window for reverting it before it does most of its damage is very small.

In other words, the experiment wasn't testing the entire auditability hypothesis. Just the important part.

> benevolence, and assuming everyone wants the best for the project, is very important in these models, because the resources are limited and dependent on enthusiasm

Sure. But the project scope matters. Linux kernel isn't some random OSS library on Github. It's core infrastructure of the planet. Assumption of benevolence works as long as the interested community is small and has little interest in being evil. With infrastructure-level OSS projects, the interested community is very large and contains a lot of malicious actors.

> Blacklisting bad actors (even if they have "good reasons" to be bad) is very well justified.

I agree, and in my books, if a legitimate researcher gets banned for such "undercover" research, it's just the flip side of doing such experiment.


I will not address everything but only this point:

Before a patch is accepted, "auditability" is the same in OSS as in proprietary development, because the pools of engineers in the review groups have similar qualifications and approximately the same number of people are involved.

So, the real advantage of OSS is on the auditability after the patch is integrated.


> So, the real advantage of OSS is on the auditability after the patch is integrated.

If that's the claim, then the research work discussed here is indeed not relevant to it.

But also, if that's the claim, then it's easy to point out that the "advantage" here is hypothetical, and not too important in practice. Most people and companies using OSS rely on release versions to be stable and tested, and don't bother doing their own audit. On the other hand, intentional vulnerability submission is an unique threat vector that OSS has, and which proprietary software doesn't.

It is therefore the window between patch submission and its inclusion in a stable release (which may involve accepting the patch to a development/pre-release tree), that's of critical importance for OSS - if vulnerabilities that are already known to some parties (whether the malicious authors or evil onlookers) are not caught in that window, the threat vector here becomes real, and from a risk analysis perspective, negates some of the other benefits of using OSS components.

Nowhere here I'm implying OSS is worse/better than proprietary. As a community/industry, we want to have an accurate, multi-dimensional understanding of the risks and benefits of various development models (especially when applied to core infrastructure project that the whole modern economy runs on). That kind of research definitely helps here.


> On the other hand, intentional vulnerability submission is an unique threat vector that OSS has, and which proprietary software doesn't.

On this specific point, it only holds if you restrict the assertion to 'intentional submission of vulnerabilities by outsiders'. I don't work in fintech, but I've read allegations that insider-created vulnerabilities and backdoors are a very real risk.


> On the other hand, intentional vulnerability submission is an unique threat vector that OSS has, and which proprietary software doesn't.

Very fair point. Inside threat also exists in corporations, but it's probably harder.


If the model assumes benevolence how can it possibly be viable long-term?


Like that: malevolent actors are banned as soon as detected.


What do you suppose is the ratio of undetected bad actors / detected bad actors? If it is anything other than zero I think the original point holds.


Most everything boils down to trust at some point. That human society exists is proof that people are, or act, mostly, "good", over the long term.


> That human society exists is proof that people are, or act, mostly, "good", over the long term.

That's very true. It's worth noting that various legal and security tools deployed by the society help us understand what are the real limits to "mostly".

So for example, the cryptocurrency crowd is very misguided in their pursuit of replacing trust with math - trust is the trick, the big performance hack, that allowed us to form functioning societies without burning ridiculous amounts of energy to achieve consensus. On the other hand, projects like Linux kernel, which play a core role in modern economy, cannot rely on assumption of benevolence alone - incentives for malicious parties to try and mess with them are too great to ignore.


> It's likely a university with professors that hate open source.

This is a ridiculous conclusion. I do agree with the kernel maintainers here, but there is no way to conclude that the researchers in question "hate open source", and certainly not that such an attitude is shared by the university at large.


[Edit: they seem to truly love OSS. See child comments. Sorry for my erroneous judgement. It reminded me too much of anti-open-source FUD; I'm probably having PTSD from that time...]

I fixed my sentence.

I still think that these professors, either genuinely or through lack of willingness, do not understand the mechanism by which free software achieves its greater quality compared to proprietary software (which is a fact).

They just remind me of the good old days of FUD against open source by Microsoft and its minions...


What papers or statements has this professor made to support that kind of allegation? Can you provide some links or references, please?


I don't have the name of the professor.

[Edited: it seems like they do love OSS and contribute a lot. See child comments.]

I had based my consideration on the way they are testing the open-source development model.

These professors actually love OSS... but they need to respect the kernel maintainers' request to stop these "experiments".


From the researchers:

> In the past several years, we devote most of our time to improving the Linux kernel, and we have found and fixed more than one thousand kernel bugs; the extensive bug finding and fixing experience also allowed us to observe issues with the patching process and motivated us to improve it. Thus, we consider ourselves security researchers as well as OSS contributors. We respect OSS volunteers and honor their efforts.

https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

It feels to me you have jumped to an untenable conclusion without even considering their point of view.


Yes. I removed a lot of my ranting accordingly. Thanks, and sorry.


Thank you, appreciated.


My analysis of that exact same quote was that it was insincere cover to allow them to continue operating an anti-OSS agenda which was made clear in the paper itself.


At least in the university where I did my studies, each professor had their own way of thinking and you could not group them into any one basket.


Fair point.

I'll just leave my comment as it is. The university administration still bears responsibility for the fact that its IRB waived review.


From the link, not sure if accurate:

> Those commits are part of the following research:

> https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...

> They introduce kernel bugs on purpose. Yesterday, I took a look on 4 accepted patches from Aditya and 3 of them added various severity security "holes".


Interestingly, that paper states that they introduced 3 patches with bugs, but after acceptance, they immediately notified the maintainers and replaced the patches with correct, bug-free ones. So they claim the bugs never hit any git tree. They also state that their research had passed the university IRB. I don't know if that research relates to what they are doing now, though.


> the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards

That's not true at all. There are many internet-critical projects with tons of holes that are not found for decades, because nobody except the core team ever looks at the code. You have to actually write tests, do fuzzing, static/memory analysis, etc to find bugs/security holes. Most open source projects don't even have tests.

Assuming people are always looking for bugs in FOSS projects is like assuming people are always looking for code violations in skyscrapers, just because a lot of people walk around them.
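
To make "actually looking" concrete, here is a minimal sketch of a libFuzzer-style harness; the parse_header() function and its "HDR" format are invented stand-ins for whatever code you want to exercise. Building something like this with clang -fsanitize=fuzzer,address and leaving it running is the kind of active effort that actually surfaces memory bugs, as opposed to passers-by glancing at the code:

  /* Hypothetical harness; parse_header() stands in for the code under test.
   * Build with: clang -fsanitize=fuzzer,address fuzz.c */
  #include <stdint.h>
  #include <stddef.h>
  #include <string.h>

  /* Toy parser: expects "HDR" followed by a body-length byte. */
  static int parse_header(const uint8_t *buf, size_t len)
  {
      if (len < 4 || memcmp(buf, "HDR", 3) != 0)
          return -1;
      size_t body = buf[3];
      return body <= len - 4 ? (int)body : -1;
  }

  /* libFuzzer entry point: called repeatedly with mutated inputs. */
  int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
  {
      parse_header(data, size);
      return 0;
  }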


> (whereas we know that the strength of open source is its auditability, thus such bugs are quickly discovered and fixed afterwards)

Which is why there have never been multi-year critical security vulnerabilities in FOSS software.... right?

Sarcasm aside, because of how FOSS software is packaged on Linux we've seen critical security bugs introduced by package maintainers into software that didn't have them!


You need to compare what happens with vulnerabilities in OSS vs in proprietary.

A maintainer's package is just one more piece of open source software (thus also in need of reviews and audits)... which is why some people prefer upstream-source-based distros, such as Gentoo, Arch when you use git-based AUR packages, or LFS for the hardcore fans.


> You need to compare what happens with vulnerabilities in OSS vs in proprietary.

Yes, you do need to make that comparison. Taking it as a given without analysis is the same as trusting the proprietary software vendors who claim to have robust QA on everything.

Security is hard work and different from normal review. The number of people who hypothetically could do it is much greater than the number who actually do, especially if there isn’t an active effort to support that type of analysis.

I’m not a huge fan of this professor’s research tactic but I would ask what the odds are that, say, an intelligence agency isn’t doing the same thing but with better concealment. Thinking about how to catch that without shutting down open-source contributions seems like an important problem.


Some clarifications since they are unclear in the original report.

- Aditya Pakki (the author who sent the new round of seemingly bogus patches) is not involved in the S&P 2021 research. This means Aditya is likely to have nothing to do with the prior round of patching attempts that led to the S&P 2021 paper.

- According to the authors' clarification [1], the S&P 2021 paper did not introduce any bugs into Linux kernel. The three attempts did not even become Git commits.

Greg has every reason to be unhappy, since they were unknowingly experimented on and used as lab rats. However, the round of patches that triggered his anger *is very likely* to have nothing to do with the three intentionally incorrect patch attempts leading to the paper. Many people on HN do not seem to know this.

[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....


Aditya's advisor [1] is one of the co-authors of the paper. He at least knew about this work and was very likely involved with it.

[1] https://adityapakki.github.io/assets/files/aditya_cv.pdf


There is no doubt that Kangjie is involved in Aditya's research work, which led to bogus patches being sent to Linux devs. However, based on my understanding of how CS research groups usually function, I do not think Kangjie knew the exact patches that Aditya sent out. In this specific case, I feel Aditya is more likely the one to blame: he should have examined these automatically generated patches more carefully before sending them in for review.


Kangjie should not have approved any research plan involving kernel patches knowing that he had already set that relationship on fire in order to get a prior paper published.


>based on my understanding of how CS research groups usually function

If you mean supervisors adding their names onto publications without having contributed any work, then this is not limited to CS research groups. Authorship misrepresentation is widespread in academia and unfortunately mostly ignored. Those who speak up are singled out and isolated instead.


I would say it's less authorship misrepresentation and more an established convention that's well-known to people within the field. Lead authors go first, then supporting contributors, and finally advisors at the end.


Even if they didn't write or perform part of the research, they did act in an advisory or consulting fashion, and therefore could significantly shape the research. Maybe there should be a different way to credit that, but right now convention is to put them in a low position in the list of authors.


Where I went, it was so widespread that it was considered normal and nobody would even think about speaking out about it. Only with hindsight did I realize how despicable it was.


At my last job, a low-quality paper was written about "my work" despite me saying that it was too early. It was based on my idea and resulted in our research group's grant participation being extended, despite our professor saying that the grant wouldn't be extended since he had switched universities and continents.

It was written while I was still working on the software, and my name was put in third position on the list of authors. The only way I was able to defend myself was to ask them to remove my name from the paper, which resulted in the paper not being published.


[deleted]


Uh yeah no. Renaissance Technologies LLC is a hedge fund. The Renaissance listed on his resume is related to gaming, not securities trading.


Aditya's story about the new patches is that he was writing a static analysis tool and was testing it by... submitting PRs to the Linux kernel? He's either exploiting the Linux maintainers to test his new tool, or that story's bullshit. Even taking his story at face value is justification to at least ban him personally IMO.


People do this with static analysis tools all the time. It’s obnoxious but not necessarily malicious.


To be clear: asking Linux maintainers to verify the results of static analysis tools they wrote themselves, without admitting to it until they're accused of being malicious?


Asking Linux maintainers to apply patches or fix “bug” resulting from home grown static analysis tools, which usually flag all kinds of things that aren’t bugs. This happens regularly.


As someone who used to maintain a large C++ codebase, people usually bug-dump static analysis results rather than actually submitting fixes, but blindly "fixing" code that a static analysis tool claims to have issue with is not surprising to see either.

If the patches were accepted, the person could have used those fixes to justify the benefits of the static analysis tool they wrote.
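
As a hypothetical illustration of the pattern (invented names, not any actual kernel patch): a home-grown checker flags a "possible NULL dereference", and the submitted "fix" adds a check that can never trigger - technically harmless, but it wastes reviewer time and pads the tool's claimed results.

  /* Hypothetical example only. Every caller allocates pkt before calling,
   * so the added check is dead code that exists to silence the tool. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct packet {
      char payload[64];
  };

  static void fill_packet(struct packet *pkt, const char *msg)
  {
      if (!pkt)   /* the submitted "fix": unreachable in practice */
          return;
      strncpy(pkt->payload, msg, sizeof(pkt->payload) - 1);
      pkt->payload[sizeof(pkt->payload) - 1] = '\0';
  }

  int main(void)
  {
      struct packet *pkt = malloc(sizeof(*pkt));
      if (!pkt)
          return 1;
      fill_packet(pkt, "hello");
      printf("%s\n", pkt->payload);
      free(pkt);
      return 0;
  }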


They generally state that this was found with so-and-so static analysis tool. And, as GKH pointed out in the thread, the resulting patches generally make changes that match a certain pattern, not the random uselessness that Aditya's patches were.


I'm unclear why one cannot do all these tests with a fork.


Sounds like these commits aren't related to that paper, they're related to the next paper he's working on, and the next one is making the same category error about human subjects in his study.


This is Aditya Pakki's website:

https://adityapakki.github.io/

In this "About" page:

https://adityapakki.github.io/about/

he claims "Hi there! My name is Aditya and I’m a second year Ph.D student in Computer Science & Engineering at the University of Minnesota. My research interests are in the areas of computer security, operating systems, and machine learning. I’m fortunate to be advised by Prof. Kangjie Lu."

so he in no uncertain terms is claiming that he is being advised in his research by Kangjie Lu. So it's incorrect to say his patches have nothing to do with the paper.


I would encourage you not to post people's contact information publicly, specially in a thread as volatile as this one. Writing "He claims in his personal website" would bring the point across fine.

This being the internet, I'm sure the guy is getting plenty of hate mail as it is. No need to make it worse.


They are named in the comment above. Aditya Pakki's personal website is the first result upon Googling that name.

I doubt HN has the volume of readership/temperament to lead to substantial hate mail (unlike, say, Twitter).


I would hope most here are above spamming hatemail to things they aren't involved with...


> So it's incorrect to say his patches have nothing to do with the paper.

Professors usually work on multiple projects, which involve different grad students, at the same time. Aditya Pakki could be working on a different project with Kangjie Lu, and not be involved with the problematic paper.


Sucks to be him then. He can thank his professor.


Based on the tone of his email, I would say that the ban is not entirely undeserved.


Clearly in over his head though.


> S&P 2021 paper did not introduce any bugs into Linux kernel.

I used to work as an auditor. We were expected to conduct our audits neither expecting nor ruling out instances of impropriety. However, once we had grounds to suspect malfeasance, we were "on alert" and conducted our tests accordingly.

This is a good principle that could be applied here. We could bat backwards and forwards about whether the other submissions were bogus, but the presumption must now be one of guilt rather than innocence.

Personally, I would have been furious and said, in no uncertain terms, that the university should keep a low profile and STFU, lest I be sufficiently provoked into taking actions that end with someone's balls being handed to me on a plate.


What sort of lawsuit might they bring against a university whose researchers deliberately inserted malicious code into software that literally runs a good portion of the world?

I'm no lawyer, but it seems like there'd be something actionable.

On a side note, this brings into question any research written by any of the participating authors, ever. No more presumption of good faith.


>What sort of lawsuit might they bring against a university whose researchers deliberately inserted malicious code into software that literally runs a good portion of the world?

I am also not a lawyer, but aside from any civil action, the conduct looks like it might be considered criminal under the Computer Fraud and Abuse Act:

"Whoever knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer;"

https://www.law.cornell.edu/uscode/text/18/1030#a_5


Not just this world, other worlds too [0].

The first extraterrestrial software crime?

[0] https://www.theverge.com/2021/2/19/22291324/linux-perseveran...


> According to the authors' clarification [1], the S&P 2021 paper did not introduce any bugs into Linux kernel. The three attempts did not even become Git commits.

Except that at least one of those three, did [0]. The author is incorrect that none of their attempts became git commits. Whatever process that they used to "check different versions of Linux and further confirmed that none of the incorrect patches was adopted" was insufficient.

[0] https://lore.kernel.org/patchwork/patch/1062098/


> The author is incorrect that none of their attempts became git commits

That doesn't appear to be one of the three patches from the "hypocrite commits" paper, which were reportedly submitted from pseudonymous gmail addresses. There are hundreds of other patches from UMN, many from Pakki[0], and some of those did contain bugs or were invalid[1], but there's currently no hard evidence that Pakki was deliberately making bad-faith commits--just the association of his advisor being one of the authors of the "hypocrite" paper.

[0] https://github.com/torvalds/linux/commits?author=pakki001@um...

[1] Including his most recent that was successfully applied: https://lore.kernel.org/lkml/YH4Aa1zFAWkITsNK@zeniv-ca.linux...


But Kangjie Lu, Pakki's advisor, was one of the authors. The claim that "You, and your group, have publicly admitted to sending known-buggy patches" may not be totally accurate (or it might be -- Pakki could be on other papers I'm not aware of), but it's not totally inaccurate either. Most academic work is variations on a theme, so it's reasonable to be suspicious of things from Lu's group.


As Greg KH notes when it's suggested that he file a formal complaint, he has no time to deal with such BS. He has no time to play detective: you are involved in a group that does BS, this smells like BS again, so you're banned.

Unfair? Maybe: complain to your advisor.


It shouldn’t be up to the victim to sort that out. The only thing that could perhaps have changed here is for the university wide ban to have been announced earlier. Perhaps the kernel devs assumed that no one would be so shameless as to continue to send students back to someone they had already abused.


The person in power here is Greg KH. It seems like he can accept/reject/ban anyone for any reason with little recourse for the counter-party. I'm willing to withhold judgement on these allegations until the truth comes out. Seems like many here want retribution before any investigation.


You realise that the evidence is already there plain for everyone to see. It's even laid out in the email thread we're commenting on. This sort of linguistic weaseling doesn't help anyone, least of all folks who may not understand the entire context of what went down here (which is much worse than it might seem on the face of it).


In an evolving situation, I hesitate to immediately accept something as fact just because its in a comment. I don't believe in online "mob justice", nor in guilt by association.

You seem convinced, I am not. And as such, we just have a disagreement, no biggie. :)

>This sort of linguistic weaseling doesn't help anyone, least of all folks who may not understand the entire context of what went down here (which is much worse than it might seem on the face of it).

I am expressing an opinion, just as you are.


There's only one way the kernel dev team can afford to look at this: A bad actor tried to submit malicious code to the kernel using accounts on the U of M campus. They can't afford to assume that the researchers weren't malicious, because they didn't follow the standards of security research and did not lay out rules of engagement for the pentest. Because that trust was violated, and because nobody in the research team made the effort to contact the appropriate members of the dev team (in this case, they really shoulda taken it to Torvalds), the kernel dev team can't risk taking another patch from U of M because it might have hidden vulns in it. For all we know, Aditya Pakki is a pseudonym. For all we know, the researchers broke into Aditya's email account as part of their experiment--they've already shown that they have a habit of ignoring best practices in infosec and 'forgetting' to ask permission before conducting a pentest.


I agree, the kernel team shouldn't make decisions based on the intents to submit such patches.

It's like going to a government building with a bomb threat and then claiming it was only an experiment to find security loopholes.


It is worth noting that Pakki is a research assistant of one of the paper's authors (Lu).

https://adityapakki.github.io/experience/


From his message, the ones that triggered his anger were patches he believed to be obviously useless, and therefore either incompetently submitted or submitted as some other form of experimentation. After the intentionally incorrect patches, he could no longer allow the presumption of good faith.


It doesn't matter. I think this is totally appropriate. A group of students is submitting purposely buggy patches? It isn't the kernel team's job to sift through them and distinguish which are which, so they come down and nuke the entire university. This sends a message to any other university thinking of a similar stunt: you try this bull hockey, and you and your entire university are going to get caught in the blast radius.

In short: "f** around, find out".


On the plus side, I guess they get a hell of a result for that research paper they were working on.

"We sought to probe vulnerabilities of the open-source public-development process, and our results include a methodology for getting an entire university's email domain banned from contributing."


Given the attitude of "the researchers" and an earlier paper [1] so far, somehow I doubt they will act in good faith this time.

For instance:

"D. Feedback of the Linux Community. We summarized our findings and suggestions, and reported them to the Linux community. Here we briefly present their feedback. First, the Linux community mentioned that they will not accept preventive patches and will fix code only when it goes wrong. They hope kernel hardening features like KASLR can mitigate impacts from unfixed vulnerabilities. Second, they believed that the great Linux community is built upon trust. That is, they aim to treat everyone equally and would not assume that some contributors might be malicious. Third, they mentioned that bug-introducing patches is a known problem in the community. They also admitted that the patch review is largely manual and may make mistakes. However, they would do their best to review the patches. Forth, they stated that Linux and many companies are continually running bug-finding tools to prevent security bugs from hurting the world. Last, they mentioned that raising the awareness of the risks would be hard because the community is too large."

[1] https://raw.githubusercontent.com/QiushiWu/qiushiwu.github.i...


That is just appalling. I'm glad these jokers used their real names; it will be easier to avoid them in the future.


Which will (hopefully) not be accepted by any reputable journal.


I seriously doubt this policy would have been adopted if other unrelated groups at the same university were submitting constructive patches.


I read through that clarification doc. I don't like their experiment but I have to admit their patch submission process is responsible (after receiving a "looks good" for the bad patch, point out the flaw in the patch, give the correct fix and make sure the bad patch doesn't get into the tree).


This isn't friendly pen-testing in a community, this is an attack on critical infrastructure using a university as cover. The foundation should sue the responsible profs personally and seek criminal prosecution. I remember a bunch of U.S. contractors said they did the same thing to one of the openbsd vpn library projects about 15 years ago as well.

What this professor is proving out is that open source and (likely, other) high trust networks cannot survive really mendacious participants, but perhaps by mistake, he's showing how important it is to make very harsh and public examples of said actors and their mendacity.

I wonder if some of these or other bug contributors have also complained that the culture of the project governance is too aggressive, that project leads can create an unsafe environment, and discourage people from contributing? If counter-intelligence prosecutors pull on this thread, I have no doubt it will lead to unravelling a much broader effort.


Not everything can be fixed with the criminal justice system. This should be solved with disciplinary action by the university (and possibly will be [1]).

[1] https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...


I am not knowledgeable enough to know if this intent is provable, but if someone can frame the issue appropriately, it feels like it could be good to report this to the FBI tip line so it is at least on their radar.


> The foundation should sue the responsible profs personally and seek criminal prosecution.

This is overkill and uncalled for.


Organizing an effort, with a written mandate, to knowingly introduce kernel vulnerabilities, through deception, that will spread downstream into other Linux distributions, likely including firmware images, which may not be patched or reverted for months or years - does not warrant a criminal investigation?

The foundation should use recourse to the law to signal they are handling it, if only to prevent these profs from being mobbed.


I think you are misunderstanding what happened. They emailed the patches to the maintainers, and when the maintainers responded "this looks good", then told them there was a bug in the patch. They never committed a bad patch to the source tree. The problem is they were deceptive in their initial email, not that they actually introduced kernel vulnerabilities. No bad code was ever committed, and they had a written mandate to verify that.



Both of those reverts suggest those were just non-malicious contributions that the maintainers reverted just in case (and reapplied after review). If that's the proof, then I think you are mistaken. Maybe put another way, if someone says "noptd has bad intentions, so I'm reverting all of noptd's contributions that were committed to stable" the reverts themselves are not proof that malicious commits made it to stable, and that noptd has bad intentions.


It doesn't sound like either of those reverts are necessarily for malicious patches. They are reverting all commits from umn.edu addresses regardless of their involvement with this professor.


It doesn't matter whether the patches made it in our not. Even the attempt is illegal in some jurisdictions.


Except Greg K-H disagrees with the students, stating it did make it to stable.

I trust Greg over the students.


Can you cite a source of Greg saying this? I read this article which is the closest I could find that reports this, https://www.zdnet.com/article/greg-kroah-hartman-bans-univer... which says,

"""Romanovsky reported that he had looked at four accepted patches from Pakki "and 3 of them added various severity security 'holes.'" Sudip Mukherjee, Linux kernel driver and Debian developer, followed up and said "a lot of these have already reached the stable trees." These patches are now being removed."""

However, if you click the links, you'll see that "have already reached stable trees" refers to non-buggy patches, and the "3 of them added various [holes]" patches are not among those. So the article seems to be intentionally deceiving the reader into thinking those are connected, when they're separate events. I actually feel like the media has been doing this (putting unrelated facts next to each other in a way that leads readers to infer a connection between the two).


I guess it was Romanovsky who said it: https://lore.kernel.org/linux-nfs/YH+zwQgBBGUJdiVK@unreal/

Wait so do you disagree with ZDnet too?


Again, there's nothing that says the patches with vulnerabilities made it to stable.

Did you read the ZDnet article and look at the links that in that article in the relevant paragraph? I'm not "disagreeing", I'm saying that they are misleading the reader (and it looks like many were fooled).

The two sentences they put together are not related, but put next to each other, they make it seem like they're related. We have to be careful when reading these articles. So the researchers have made commits to stable, and the researchers have introduced vulnerabilities, but they are not referring to the same patches. So no vulnerabilities have been committed to stable.


How exactly is a lawsuit overkill? If the researchers are in the right, the court will find in their favor.


And if they aren't and it doesn't, will the maintainers be happier? No, just older and poorer.


Here's a clarification from the Researchers over at UMN[1].

They claim that none of the bogus patches were merged into the stable tree:

>Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.

I haven't been able to find out which 3 patches they are referring to, but the discussion on Greg's UMN revert patch [2] does indicate that some of the fixes have indeed been merged to stable and are actually bogus.

[1] : https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

[2] : https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...


The response makes the researchers seem clueless, arrogant, or both - are they really surprised that kernel maintainers would get pissed off at someone deliberately wasting their time?

From the post:

  * Does this project waste certain efforts of maintainers?
  Unfortunately, yes. We would like to sincerely apologize to the maintainers involved in the corresponding patch review process; this work indeed wasted their precious time. We had carefully considered this issue, but could not figure out a better solution in this study. However, to minimize the wasted time, (1) we made the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we tried hard to find three real bugs, and the patches ultimately contributed to fixing them
"Yes, this wastes maintainers time, but we decided we didn't care."


Fascinating that the research was judged not to involve human subjects....

As someone not part of academia, how could this research be judged to not involve people? It _seems_ obvious to me that the entire premise is based around tricking/deceiving the kernel maintainers.


Yeah, especially when the researcher begins gaslighting his subjects. He had the gall to call the maintainer's response "disgusting to hear", and then went on to ask for "the benefit of the doubt" after publishing a paper admitting that he deceived them.

For comparison, imagine that you attended a small conference and unknowingly became a test subject, and when you sit down the chair shocks you with a jolt of electricity. You jump out of your chair and exclaim, "This seat shocked me!" Then the person giving the presentation walks to your seat and sits down and it doesn't shock him (because he's the one holding the button), and he then accuses you of wasting everyone's time. That's essentially what happened here.


That was my thinking too; surely their school's IRB would have a field day with this. The question is whether they ran this by their IRB at all. If they did, there would be implications for the ethics of everything coming out of UMN. If they didn't, then the same goes for their lab. I know at my school things were quite clear: if your work requires any interaction with any human not in your lab, you need IRB approval. This is literally just a social engineering experiment, so of course the IRB should have reviewed it.

https://research.umn.edu/units/irb


They ran it by the IRB after publishing the paper, and the IRB issued a post-hoc exemption.

Disgusting.


And given the apparent failure of UMN's IRB, banning UMN from contributing to the Linux kernel honestly seems like a correct approach to resolve the underlying issue.


Science journals should probably suspend publication of any human research done at UMN. That might get this issue the attention it deserves. These were human trials without IRB pre-approval, but the IRB condoned it afterward?


This is an indignant rebuttal, not an apology.

No one says "wasted their precious time" in a sincere apology. The word 'precious' here is exclusively used for sarcasm in the context of an apology, as it does not represent a specific technical term such as might appear in a gemology apology.


Your particular criticism is not fair in my opinion. Both researchers went to undergrad outside the U.S., so they may not speak English as a first language. Therefore, it's not fair to assume they intended that connotation.


I also think it's not fair criticism. While "precious" can indeed have sarcastic connotation, I don't detect that tone in the paragraph at all.


That single word alone is enough to alter the tone of the paragraph when read. That the rest of the paragraph is plausible does not excuse it.


I am from outside the US, and it’s perfectly fair to criticise a professional’s ability to use language the way it’s supposed to be used; that’s your job, and if you can’t do that then don’t take the job.


Their English is perfectly competent, but I just don't think it's fair to assume that they know the connotation of every English turn of phrase, especially when that assumption is being used to castigate them as "sarcastic".


I disagree. The text of their paper and their emails shows a firm grasp of English.


That would mean they can say virtually anything and, when criticized, pretend it was just a miscommunication because they don't speak English well enough. Depending on the consequences, that doesn't necessarily absolve them of responsibility, even if it offers some grounds for excuse -- at least the first time, and certainly not if they continue their bullshit after that!

If they have a problem mastering English, they can take lessons, and have a native speaker review their communication in the meantime.

The benefit of the doubt cannot last forever for people caught red-handed. It can be restored, of course, but they have drastically shifted how they are perceived through their own actions, and thus can't really complain about the results of their own doing. Yes, they can no longer afford mistakes, and everything they did in the past will be reviewed harshly -- not to condemn them further without reason, but just to be sure they did not actually break things while carrying out their malicious activities.


I’m not inclined to be particularly forgiving, given the overall context of their behaviors and the ethical violations committed. I choose to consider that context when parsing their words. You must make your own decision in that regard.


I think this may be unintended. It is very hard to formulate a message that essentially says both "we recognize your time is valuable" and "we know we wasted your time, but we decided it's not very important" at the same time without it sounding sarcastic on some level. The inherent contradiction of the message would come through regardless of the wording chosen.


“We determined after careful evaluation of the potential outcomes that the time wasted by kernel maintainers was, in total, sufficiently low that no significant impact would occur over a multi-day time scale.”

If I can come up with the scientific paper gibberish for that in real-time, and I don’t even write science papers, then these people who understand how to navigate an ethical review board process surely know how to massage an unpleasant truth into dry and dusty wording.

I think that they just screwed up and missed the word “precious” in editing, and thus got caught being dismissive and snide towards their experiment’s participants. Without that word, it’s a plausible enough paragraph. With it, it’s no longer plausibly innocent.


The quote translates to "your puny concerns are nothing compared to our Science", so it only covers one of the two bases. To cover both, they had to include some explicit verbiage recognizing the value of the time being wasted, and they went a little overboard with "precious", making it sound fake - as it actually was.


Or more charitably: "Yes, this spent some maintainers' time, but only a small amount, and it resulted in bugfixes, which is par for the course of contributing to Linux"


Indeed! "we could not figure out a better solution in this study".

There IS a better solution: not to proceed with that "study" at all.


Well, or do what an ethical researcher would do and seek authorization from the Linux Foundation board before carrying out any (who knows, potentially illegal) social engineering attacks on team members.


The next study would involve submitting a fake grant proposal to study how well the grant review process discerns which proposals are scams.


Exactly. Since when is "people will sometimes believe lies" an uncertain question that needs experimental review to confirm?

Maybe that cop convicted yesterday was actually just a UMN researcher investigating the burning scientific question "does cutting off someone's airway for 9 minutes cause death?".


Careful, the prosecution’s witnesses testified on cross-examination that there was no evidence of bruising on Floyd’s neck, which is inconsistent with “cutting off someone's airway for 9 minutes”.


Your honor, I tried to find any solution for testing this new poison without poisoning a bunch of people, but I carefully considered it and I couldn't find any, so I went ahead and secretly poisoned them. Clearly, I am innocent! Though I sincerely apologize for any inconvenience caused.


> Unfortunately, yes

That is the perfect example of being arrogant


> We had carefully considered this issue, but could not figure out a better solution in this study.

Couldn't figure out that "not doing it" was an option apparently.


> clueless, arrogant, or both

I'm going to go with "both" here.


Had him as a TA, can confirm. Rudest and most arrogant TA I've ever worked with. He outright insulted me for asking questions as a transfer student who had never used Linux systems before. Him claiming to be ignorant and new is laughable when his whole demeanor was that you're an imbecile for not knowing how these things work.


In the end, the damage has been done and the Linux developers are now going back and removing all patches from any user with a @umn.edu email.

Not sure how the researchers didn't see that this would backfire; it's a hopeless misuse of their time. I feel really bad for the developers who now have to spend their time fixing shit that shouldn't even be there, just because someone wanted to write a paper and their peers didn't see any problem either. How broken is academia, really?


This, in and of itself, is a finding. The researchers will justify their research with "we were banned, which is a possible outcome of this kind of research...". I find this disingenuous. When a community of open source contributors is partially built on trust, violators can and will be banned.

The researchers should have approached the maintainers to get buy-in, and set up a methodology where a maintainer would not interfere until a code merge was imminent, and would just play referee in the meantime.


I don’t mind them publishing that result, as long as they make it clear that everyone from the university was banned, even people not affiliated with their research group. Of course anyone can get around that ban just by using a private email address (and the prior paper from the same research group started out using random gmail accounts rather than @umn.edu accounts), but making this point will hopefully prevent anyone from trying the same bad ideas.


I feel the same way. People don't understand how difficult it is to be a maintainer. This is very selfish behaviour. I appreciate Greg's strong stance against it.


> I haven't been able to find out which 3 patches they are referring to, but the discussion on Greg's UMN revert patch [2] does indicate that some of the fixes have indeed been merged to stable and are actually bogus.

That's because those are two separate incidents. The study which resulted in 3 patches was completed some time last year, but this new round of patches is something else.

It's not clear whether the patches are coming from the same professor/group, but it seems like the author of these bogus patches is a Phd student working with the professor who conducted that study last year. So there is at least one connection.

EDIT: also, those 3 patches were supposedly submitted using a fake email address according to the "clarification" document released after the paper was published. So they probably didn't use a @umn.edu email at all.


The main issue here is that it wastes the time of the reviewers and they did not address it in their reply.


To help clarify, for the purposes of continuing the discussion: the original research did address the issue of minimizing the reviewers' time [1][2]. It seems the maintainers were OK with that, as no actions were taken other than an implied request to stop that kind of research.

Now a different researcher from UMN, Aditya Pakki, has submitted a patch containing bugs, which seems to be an attempt at the same type of pen testing, although the PhD student denied it.

1. Section IV.A of the paper, as pointed out by user MzxgckZtNqX5i in this comment: https://news.ycombinator.com/item?id=26890872

> Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.

2. Clarifications on the “hypocrite commit” work (FAQ)

https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

"* Does this project waste certain efforts of maintainers? Unfortunately, yes. We would like to sincerely apologize to the maintainers involved in the corresponding patch review process; this work indeed wasted their precious time. We had carefully considered this issue, but could not figure out a better solution in this study. However, to minimize the wasted time, (1) we made the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we tried hard to find three real bugs, and the patches ultimately contributed to fixing them."


Agreed. This feels more like an involuntary social experiment, and it just uses up the kernel maintainers' bandwidth. Reviewing code is difficult, even more so when the committer has set out to introduce bad code in the first place.


It's disrespectful to people who are contributing their personal time while working for free on open source projects.

With more than 60% of all academic publications not being reproducible [1], one would think academia has better things to do than wasting other people's time.

[1] https://en.wikipedia.org/wiki/Replication_crisis


I wonder why they didn't just ask in advance. Something like 'we would like to test your review process over the next 6 months and will inform you before a critical patch hits users' might have been a win-win scenario.


How can they be trusted though?


It seems to me like the other patches were submitted in good faith, but that the maintainer no longer trusts them because of the other bad commits.


The University of Minnesota's Department of Computer Science and Engineering released a statement [0] and "suspended this line of research".

[0] https://cse.umn.edu/cs/statement-cse-linux-kernel-research-a...


They don’t seem all that happy about it. :)


> We take this situation extremely seriously. We have immediately suspended this line of research.

yeah those department heads seemed pretty pissed


I don’t read any emotion in that statement whatsoever.


According to the "Hypocrite Commits" paper by Qiushi Wu, UMN specifically approved this research, granting IRB exemption.

https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...

See section VI-A (page 8).


That doesn't conflict with the statement. If the IRB looks at something and exempts it, it has no reason to report that to the hierarchy in any way, because that's a routine process.


Not sure how this university is run but this doesn't sound plausible to me.

>... learned today about the details of research being conducted by one of its faculty members and graduate students into the security of the Linux Kernel

And this sounds like mainly a lot of damage control is going to happen.

>We will report our findings back to the community as soon as practical.


Why does it sound implausible? In any uni I've interacted with, profs did pretty much their own thing and without a reason very little attention is paid to how they do it (or even what they do).


In the follow up chain it was stated that some of their patches made it to stable: https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...

Can someone who's more invested into kernel devel find them and analyze their impact? That sounds pretty interesting to me.

Edit: This is the patch reverting all commits from that mail domain: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

Edit 2: Now that the first responses to the reversion are trickling in, some merged patches were indeed discovered to be malicious, like the following. Most of them seem to be fine, though, or at least non-malicious. https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...


Let me play devil's advocate here. Such pen-testing is absolutely essential to the safety of our tech ecosystem. Countries like Russia, China, and the USA are, without a doubt, doing exactly the same thing that this UMN professor is doing. Except that instead of writing a paper about it, they are going to abuse the vulnerabilities for their own nefarious purposes.

Conducting such pen-tests, and then publishing the results openly, helps raise awareness about the need to assume-bad-faith in all OSS contributions. If some random grad student was able to successfully inject 4 vulnerabilities before finally getting caught, I shudder to think how many vulnerabilities were successfully injected, and hidden, by various nation-states. In order to better protect ourselves from cyberwarfare, we need to be far more vigilant in maintaining OSS.

Ideally, such research projects should gain prior approval from the project maintainers. But even though they didn't, this paper is still a net-positive contribution to society, by highlighting the need to take security more seriously when accepting OSS patches.


The world works better without everyone being untrusting of everyone else, and this is especially true of large collaborative projects. The same goes in science - it has been shown over and over again that if researchers submit deliberately fraudulent work, it is unlikely to be picked up by peer review. Instead, it is simply deemed as fraud, and researchers that do that face heavy consequences, including jail time.

Without trust, these projects will fail. Research has shown that even in the presence of untrustworthy actors, trusting is usually still beneficial [1][2]. In fact, trusting until you have reason to believe you shouldn't has been found to be an optimal strategy [2], so G K-H is responding exactly appropriately here. The Linux community trusted them until they didn't, and now they are unlikely to trust them going forward.

[1] https://www.nature.com/articles/s41598-019-55384-4#Sec13 [2] https://medium.com/greater-than-experience-design/game-theor...


If an open-source project adopts a trusting attitude, nation-states can and will take advantage of this in order to inject dangerous vulnerabilities. Telling university professors not to pen-test OSS does not stop nation-states from doing the same thing secretly. It just sweeps the problem under the rug.

Would I prefer to live in a world where everyone behaved in a trustworthy manner in OSS? Absolutely. But that is not the world we live in. A professor highlighting this fact, and forcing people to realize the dangers in trusting people, does more good than harm.

--------------

On a non-serious and humorous note, this episode reminds me of the Sokal Hoax. Most techies/scientists I've met were very appreciative of this hoax, even though it wasn't conducted with pre-approval from the subjects. It is interesting to see the shoe on the other foot.

https://en.wikipedia.org/wiki/Sokal_affair


If that's the model Linux uses there's no doubt in my mind that the US, China, and probably Russia have vulnerabilities in the kernel.


And likely some of them know about each other's exploits, how to detect their use through honeypots, etc. It's a big playground of deception.


Pen testing is essential, yes, but there are correct and incorrect ways to do it. This was the latter. In fact attempts like this harm the entire industry because it reflects poorly on researchers/white hat hackers who are doing the right thing. For example, making sure your testing is non-destructive is the bare minimum, as is promptly informing the affected party when you find an exploit. These folks did neither.


Unrelated to the Linux kernel, there is a good example of how Mario Heiderich (probably the most knowledgeable person for XSS on the globe) purposefully introduced an XSS vuln into AngularJS through a patch after (!!!) checking it with the relevant authorities and even then it was a close-ish call: https://m.youtube.com/watch?v=wzrojHHyQwc


> this paper is still a net-positive contribution to society

There are claims that one vulnerability got committed and was not reverted by the research group. In fact, the research group didn't even notice that it got committed. So I'd argue that this was a net negative to society, because it introduced a live security vulnerability into Linux.


It's always useful to search for, and upvote, a reasonable alternative opinion. Thank you for posting it.

There are a lot of people reading these discussions who aren't taking 'sides' but trying to think about the subject. Looking at different angles helps with thinking.


We already know that good faith can be abused, it's practically implied in the phrase itself. There is nothing of value to be learned from this "research".


This research implies that the Linux team should not be operating on good faith.

Software as critical as Linux should not be this easily compromised by a bunch of grad students.

It's one of the core technologies of our computing.

Having a discussion around the ethics of this is great, but it does not detract from the importance of the bigger issue.


Hey, if people who rely on open source software really believe it's critical, they are more than welcome to hire engineers to audit every line they use a dozen times over. You could find many interested and capable people on this website. But the fact is a great deal of computing (and really "society") runs on an assumption of good faith, and it's just not an interesting discovery that anti-social behavior pisses people off.


No, this did not teach anyone anything new except that members of that UMN group are untrustworthy. Nothing else new was learned here at all.


Then do it through pen testing companies. Not official channels masquerading as research.


Any party caught willingly sabotaging such a prominent open source project would definitely face greater consequences than just a ban.


An excellent point; however, without prior approval and safety mechanisms, they were absolutely malicious in their acts. Treating them as anything but malicious, even if "for the greater good of OSS", sets a horrible precedent. "The road to hell is paved with good intentions" is the quote that comes to mind. Minnesota got exactly what they deserve.


From https://lore.kernel.org/linux-nfs/CADVatmNgU7t-Co84tSS6VW=3N...,

> A lot of these have already reached the stable trees.

If the researchers were trying to prove that it is possible to get malicious patches into the kernel, it seems like they succeeded -- at least for an (insignificant?) period of time.


I have tangentially followed this debacle for a while, and this particular thread has now led to heated debates on some IRC channels I'm on.

While it is maybe "scientifically interesting", intentionally introducing bugs into Linux that could potentially make it into production systems while work on this paper is going on, could IMO be described as utterly reckless at best.

Two messages down in the same thread, it more or less culminates with the university e-mail suffix being banned from several kernel mailing lists and associated patches being removed[1], which might be an appropriate response to discourage others from similar stunts "for science".

[1] https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...


I'm confused. The cited paper contains this prominent section:

Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch.

Are you saying that despite this, these malicious commits made it to production?

Taking the authors at their word, it seems like the biggest ethical consideration here is that of potentially wasting the time of commit reviewers—which isn't nothing by any stretch, but is a far cry from introducing bugs in production.

Are the authors lying?


>Are you saying that despite this, these malicious commits made it to production?

Vulnerable commits reached stable trees as per the maintainers in the above email exchange, though the vulnerabilities may not have been released to users yet.

The researchers themselves acknowledge in the above email exchange that the patches were accepted, so it's hard to believe that they're being honest, that they're fully aware of their ethics violations and the vulnerabilities they introduced, or that they would have prevented the patches from being released without GKH's intervention.


Ah, I must've missed that. I do see people saying patches have reached stable trees, but the researchers' own email is missing (I assume removed) from the archive. Where did you find it?


It's deleted, so I was going off of the quoted text in Greg's response, which shows their patches were being submitted without any caveat of "don't let this reach stable".

I trust Greg to have not edited or misconstrued their response.

https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...


Yeah, I saw that. But the whole thing is a bit too unclear to me to know what happened.

I'm not saying this is innocent, but it's not at all clear to me that vulnerabilities were deliberately introduced with the goal of allowing them to reach a release.

Anyway, like I said, too unclear for me to have an opinion.


I'm a little confused about what's unclear if you happened to see that comment. As mentioned elsewhere in this thread, the bad actors state in a clarification paper that no faulty commits reached a stable branch; in the original paper they state that no patches were applied at all, essentially claiming the research was all email communication; AND they worded it such that they 'discovered' bad commits rather than introduced them (seemingly just obtuse enough to get a review board exemption on human subject research), despite submitting patches, acknowledging they submitted commits, and Leon and Greg finding several vulnerable commits that reached stable branches and releases. For example: https://github.com/torvalds/linux/commit/8e949363f017

While I'm sure a room of people might find it useful to psychoanalyze their 'unclear' but probably malicious intent, their actions are clearly harmful to researchers, Linux contributors, direct Linux users, and indirect Linux users (such as the billions of people who trust Linux systems to store or process their PII data).


The linked patch is pointless, but does not introduce a vulnerability.

Perhaps the researchers see no harm in letting that be released.


The linked one is harmless (well it introduces a race condition which is inherently harmful to leave in the code but I suppose for the sake of argument we can pretend that it can't lead to a vulnerability), but the maintainers mention vulnerabilities of various severity in other patches managing to reach stable. If they were not aware of the severity of their patches, then clearly they needed to be working with a maintainer(s) who is experienced with security vulnerabilities in a branch and would help prevent harmful patches from reaching stable.

It might be less intentionally harmful if we presume they didn't know other patches introduced vulnerabilities, but this is also why this research methodology is extremely reckless and frustrating to read about, when this could have been done with guard rails where they were needed without impacting the integrity of the results.


It seems that Greg K-H has now released a patch of "the easy reverts" of umn.edu commits... all 190 of them. https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

The final commit in the reverted list (d656fe49e33df48ee6bc19e871f5862f49895c9e) is originally from 2018-04-30.

EDIT: Not all of the 190 reverted commits are obviously malicious:

https://lore.kernel.org/lkml/20210421092919.2576ce8d@gandalf...

https://lore.kernel.org/lkml/20210421135533.GV8706@quack2.su...

https://lore.kernel.org/lkml/CAMpxmJXn9E7PfRKok7ZyTx0Y+P_q3b...

https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...

What a mess these guys have caused.


They aren't lying, but their methods are still dangerous, despite their implying the contrary. Their approach requires perfection from both the submitter and the reviewer.

The submitter has to remember to send the "warning, don't apply patch" mail in the short time window between confirmation and merging. What happens if one of the students doing this work gets sick and misses some days of work, withdraws from the program, or just completely forgets to send the mail?

What if the reviewer doesn't see the mail in time or it goes to spam?


GKH, in that email thread, did find commits that made it to production; most likely the authors just weren't following up very closely.


> Are the authors lying?

In short, yes. Every attempted defense of them has operated by taking their statements at face value. Every position against them has operated by showing the actual facts.

This may be shocking, but there are some people in this world who rely on other people naively believing their version of events, no matter how much it contradicts the rest of reality.


Even if they didn't, they wasted the community's time.

I think they are saying that it's possible that some code was branched and used elsewhere, or simply compiled into a running system by a user or developer.


Agreed on the time issue—as I noted above. I think it's still of a pretty different cost character to actually allowing malicious code to make it to production, but (as you note) it's hard to be sure that this would not make it to some non-standard branch, as well, so there are real risks in this approach.

Anyway, my point wasn't that this is free of ethical concerns, but it seems like they put _some_ thought into how to reduce the potential harm. I'm undecided if that's enough.


> I'm undecided if that's enough.

I don't think it's anywhere close to enough and I think their behavior is rightly considered reckless and unethical.

They should have contacted the leadership of the project to announce to maintainers that anonymous researchers may experiment on the contribution process, allowed maintainers to opt out, and worked with a separate maintainer with knowledge of the project to ensure harmful commits were tracked and reversions were applied before reaching stable branches.

Instead, their lack of ethical consideration throughout this process has been disappointing and harmful to the scientific and open source communities, and it goes beyond the nature of the research itself: they previously received an IRB exemption by classifying this as non-human research, potentially misleading UMN about the subject matter and impact.


This is one of the commits that went live with a "built-in bug", according to Leon:

https://github.com/torvalds/linux/commit/8e949363f017


I'm not convinced. Yes, there's a use after free (since fixed), but it's there before the patch too.


The particular patches being complained about seem to be subsequent work by someone on the team that wrote that paper, but submitted after the paper was published, i.e., follow-up work.


'race conditions' like this one are inherently dangerous.


> While it is maybe "scientifically interesting", intentionally introducing bugs into Linux that could potentially make it into production systems while work on this paper is going on, could IMO be described as utterly reckless at best.

I agree. I would say this is kind of a "human process" analog of your typical computer security research, and that this behavior is akin to black hats exploiting a vulnerability. Totally not OK as research, and totally reckless!


Yep. To take a physical-world analogy: Would it be okay to try and prove the vulnerability of a country's water supply by intentionally introducing a "harmless" chemical into the treatment works, without the consent of the works owners? Or would that be a go directly to jail sort of an experiment?

I share the researchers' intellectual curiosity about whether this would work, but I don't see how a properly-informed ethics board could ever have passed it.



The US Navy actually did basically this with some pathogens in the 50s: https://en.wikipedia.org/wiki/Operation_Sea-Spray ; the idea of 'ethical oversight' was not something a lot of scientists operated under in those days.


> Would it be okay to try and prove the vulnerability of a country's water supply by intentionally introducing a "harmless" chemical into the treatment works, without the consent of the works owners?

The question should also be through whose negligence they gained access to the "water supply", if you truly want to make this comparison.


The question is also: "Will this research have benefits?" If the conclusion is "well, you can get access to the water supply and the only means to prevent it is to closely guard every brook, lake and river, needing half the population as guards", then it is useless. And taking risks for useless research is unethical, no matter how minor those risks might be.


> If the conclusion is "well, you can get access to the water supply and the only means to prevent it is to closely guard every brook, lake and river, needing half the population as guards".

I don't think that was the conclusion.


And what was? I cannot find constructive criticism in the related paper or any of your comments.


Out of interest, is there some sort of automated way to test this weak link that is human trust? (I understand how absurd this question is)

It's awfully scary to think about how vulnerabilities might be purposely introduced into this important code base (as well as many others), only to be exploited at a later date for an intended purpose.

Edit: NM, see st_goliath response below

https://news.ycombinator.com/item?id=26888538


I assume that having these go into production could make the authors "hackers" according to the law, no?

Haven't whitehat hackers doing unsolicited pen-testing been prosecuted in the past?


Are there any measures being discussed that could make such attacks harder in future?


Such as? Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?

The whole idea of the mailing list based submission process is that it allows others on the list to review your patch sets and point out obvious problems with your changes and discuss them, before the maintainer picks the patches up from the list (if they don't see any problem either).

As I pointed out elsewhere, there are already test farms and static analysis tools in place. On some MLs you might occasionally see auto-generated mails saying that your patch set does not compile under configuration such-and-such, or that the static analysis bot found an issue. This is already a thing.

What happened here is basically a con in the patch review process. IRL con men can scam their marks because most people assume, when they leave the house, that the majority of the others outside aren't out to get them. Except sometimes they run into one for whom the assumption doesn't hold, and they end up parted from their money.

For the paper, bypassing the review step worked for some of the many patches they submitted because a) humans aren't perfect, and b) reviewers have a mindset that, most of the time, people submitting bug fixes do so in good faith.

Do you maintain a software project? On GitHub perhaps? What do you do if somebody opens a pull request and says "I tried such and such and then found that the program crashes here, this pull request fixes that"? When reviewing the changes, do you immediately, by default jump to the assumption that they are evil, lying and trying to sneak a subtle bug into your code?

Yes, I know that this review process isn't perfect, that there are problems and I'm not trying to dismiss any concerns.

But what technical measure would you propose that can effectively stop con men?


> Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?

Yes, especially for critical projects?

> Do you maintain a software project? On GitHub perhaps? What do you do if somebody opens a pull request and says "I tried such and such and then found that the program crashes here, this pull request fixes that"? When reviewing the changes, do you immediately, by default jump to the assumption that they are evil, lying and trying to sneak a subtle bug into your code?

I don’t jump to the conclusion that the random contributor is evil. I do however think about the potential impact of the submitted patch, security or not, and I do assume a random contributor can sneak in subtle bugs, usually not intentionally, but simply due to a lack of understanding.


> > Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?

>

> Yes, especially for critical projects?

People don't act the way I described intentionally, or because they are dumb.

Even if you go in with the greatest paranoia and the best of intentions, most of the time, most of the other people don't act maliciously and your paranoia eventually returns to a reasonable level (i.e. assuming that most people might not be malicious, but also not infallible).

It's a kind of fatigue. It's simply human. No matter how often you say "DUH of course they should".

In my entire life, I have only met a single guy who managed to keep that "everybody else is potentially evil" attitude up over time. IIRC he was eventually prescribed something with Lithium salts in it.


> Such as? Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?

I’m not a maintainer but naively I would have thought that the answer to this is “Yes”.

I didn’t mean any disrespect. I didn’t write “I can’t believe they haven’t implemented a perfect technical process that fully prevents these attacks”.

I just asked if there are any ideas being discussed.

Two things can be true at the same time: 1. What the “researchers” did was unethical. 2. They uncovered security flaws.


> Such as? Should we assume that every patch was submitted in bad faith and tries to sneakily introduce bugs?

Do the game theory. If you do assume that, you'll always be wrong. But if you don't assume it, you won't always be right.


Force the university to take responsibility for screening their researchers, i.e. a blanket ban, scorched-earth approach punishing the entire university's reputation is a good start.

People want to claim these are lone rogue researchers and that good people at the university shouldn't be punished, but this is the only way you can rein in these types of rogue individuals: by putting the collective reputation of the whole university on the line so that it polices its own people. Every action of individual researchers must be assumed to put the reputation of the university as a whole on the line. This is the cost of letting individuals operate within the sphere of the university.

Harsh, "over reaction" punishment is the only solution.


The only real fix for this is to improve tooling and/or programming language design to make these kinds of exploits more difficult to slip past maintainers. Lots of folks are working in that space (see the recent discussion around Rust), but it's only becoming a priority now that we're seeing the impact of decades of zero consideration for security. It'll take a while to steer this ship in the right direction, and in the meantime the world continues to turn.
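To make that concrete, here is a hypothetical userspace C sketch (not taken from any real kernel patch) of the use-after-free class such tooling and language work targets. It compiles cleanly, which is exactly why a reviewer and the compiler alone can miss it, while sanitizers catch it at runtime and a borrow-checked language rejects the equivalent code at compile time:

    /* Hypothetical sketch (not from any real kernel patch) of the
     * use-after-free class that better tooling and language design
     * aim to catch. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct conn {
        char *name;
    };

    static void release_conn(struct conn *c)
    {
        free(c->name);
        free(c);
    }

    int main(void)
    {
        struct conn *c = malloc(sizeof(*c));
        if (!c)
            return 1;
        c->name = strdup("eth0");
        if (!c->name) {
            free(c);
            return 1;
        }

        release_conn(c);

        /* Use after free: the compiler accepts this and a reviewer can
         * miss it, but AddressSanitizer/KASAN flag it at runtime, and a
         * borrow checker would reject the equivalent at compile time. */
        printf("%s\n", c->name);
        return 0;
    }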


The University and researchers involved are now default-banned from submitting.

So yes.


If they're public IRC channels, do you mind mentioning them here? I'm trying to find the remnant. :)


There’s no research going on here. Everyone knows buggy patches can get into a project. Submitting intentionally bad patches adds nothing beyond grandstanding. They could perform analysis of review/acceptance by looking at past patches that introduced bugs without being the bad actors that they apparently are.

From FOSDEM 2014, NSA operation ORCHESTRA annual status report. It’s pretty entertaining and illustrates that this is nothing new.

https://archive.fosdem.org/2014/schedule/event/nsa_operation... https://www.youtube.com/watch?v=3jQoAYRKqhg


> They could perform analysis of review/acceptance by looking at past patches that introduced bugs without being the bad actors that they apparently are.

Very good point.


It may be unethical from an academic perspective, but I like that they did this. It shows there is a problem with the review process if it is not catching 100% of this garbage. Actual malicious actors are certainly already doing worse and maybe succeeding.

In a roundabout way, this researcher has achieved their goal, and I hope they publish their results. Certainly more meaningful than most of the drivel in the academic paper mill.


What it shows, rather, is a very serious problem with the incentives present in scientific research and a poisonous culture which apparently rewards malicious behavior. Science enjoys a lot of freedom and trust from citizens, but this trust must not be misused. If some children threw fireworks under your car, or mixed sugar into your gas tank, just to see how you react, this would have negative community effects, too. Adult scientists should be totally aware of that.

In effect, this will lead to even valuable contributions from universities being viewed with more suspicion, which will be very damaging in the long run.


>It shows there is a problem with the review process if it is not catching 100% of this garbage

What review process catches 100% of garbage? It's a mechanism to catch 99% of garbage -- otherwise the Linux kernel would have no bugs.


It does raise questions though. Should there be a more formal scrutiny process for less trusted developers? Some kind of background check process?

Runs counter to how open source is ideally written, but for such a core project, perhaps stronger checks are needed.


These researchers were in part playing on the reputation of their university, right? Now people at that university are no longer trusted. I'm not sure a more formal scrutiny process would bring about better results; I think it would be reasonable to see if the university ban is sufficient to discourage similar behavior in the future.


I'm not sure what we learned. Were we under the impression that it's impossible to introduce new (security) bugs in Linux?


> Were we under the impression that it's impossible to introduce new (security) bugs in Linux?

I've heard it many times that they're thoroughly reviewed and back doors are very unlikely. So yes, some people were under the impression.


And this was caught, albeit after some delay, so that impression won't change.


uhhh...from who?


The paper indicates that the goal is to prove that OSS in particular is vulnerable to this attack, but it seems that any software development ecosystem shares the same weaknesses. The choice of an OSS target seems to be one of convenience as the results can be publicly reviewed and this approach probably avoids serious consequences like arrests or lawsuits. In that light, their conclusions are misleading, even if the attack is technically feasible. They might get more credibility if they back off the OSS angle.


Not really. You can't introduce bugs like this into my company's code base, because the code is protected from random people on the internet accessing it. So your first step would be to find an exploitable bug in GitHub, but then you are bypassing peer review as well to get in. (Actually, I think we would notice that, but that is more because of a process we happen to have that most don't.)


Actually you can, just get hired first.


Exactly my point.


> It shows there is a problem with the review process if it is not catching 100% of this garbage.

Does that add anything new to what we know since the creation of the "obfuscated C contest" in 1984?


> It shows there is a problem with the review process if it is not catching 100% of this garbage.

It shows nothing of the sort. No review process is 100% foolproof, and open source means that everything can be audited if it is important to you.

The other option is to close-source everything, and I can guarantee that those review processes let stuff through too, even if it's only "to meet deadlines", and you will likely be unable to audit it.


Unable to follow the kernel thread (stuck in an age between twitter and newsgroups, sorry), but...

did these "researchers" in any way demonstrate that they were going to come clean about what they had done before their "research" made to anywhere close to release/GA?


By your logic, you would allow recording people without their consent, experimenting on PTSD by inducing PTSD without people's consent, or medical experimentation without the subject's consent.

Try to sneak into the White House, and when you get caught, tell them "I was just testing your security procedures".


I think that the patches that hit stable were actually OK, based on the apparent intent to 'test' the maintainers and notify them of the bug and submit the valid patch after, but the thought process from the maintainers is:

"if they are attempting to test us by first submitting malicious patches as an experiment, we can't accept what we have accepted as not being malicious and so it's safer to remove them than to keep them".

my 2c.


The earlier patches could in theory be OK, but they also might combine with other or later patches which introduce bugs more stealthily. Bugs can be very subtle.

Obviously, trust should not be the only thing that maintainers rely on, but it is a social endeavour and trust always matters in such endeavors. Doing business with people you can't trust makes no sense. Without trust I agree fully that it is not worth the maintainer's time to accept anything from such people, or from that university.

And the fact that one can do damage with malicious code is nothing new at all. It is well known that bad code can ultimately kill people. It is also more than obvious that I could ring my neighbor's doorbell, ask him or her for a cup of sugar, and then hit them over the head with a hammer. Or people can go to a school and shoot children. Does anyone in their right mind have to do such damage in order to prove something? No. Does it prove anything? No. Does the fact that some people do things like that "prove" that society is wrong and that trust and collaboration are wrong? What idiocy, of course not!


It is worrying to consider that, in all likelihood, some people with actually malicious motives, rather than clinical academic curiosity, have introduced serious security bugs into popular FOSS projects such as the Linux kernel.

Before this study came out, I'm pretty sure there were already known examples of this happening, and it would have been reasonable to assume that some such vulnerabilities existed. But now we have even more reason to worry, given that they succeeded in doing this multiple times as a two-person team without real institutional backing. Imagine what a state-level actor could do.


The same can be said about any software, really. It's all too easy for a single malicious dev to introduce security bugs into pretty much any project they are involved in.


I wonder whether they broke any laws by intentionally putting bugs in software that is critical to national security.


Greg does not joke around: https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

    [PATCH 000/190] Revertion of all of the umn.edu commits


Seriously.... I am undecided one way or another on reverting everything, but I am happy that someone is looking closely at this.

One thing I hope they have considered is a possible intent to get a particular important patch from umn.edu reverted to reintroduce a kernel bug. Discrediting all commits from the organization could inadvertently lead to the reintroduction of legacy exploits.


How does the kernel still run after reverting like this?


I was wondering the same thing. From the Patch itself:

> This patchset has the "easy" reverts, there are 68 remaining ones that need to be manually reviewed. Some of them are not able to be reverted as they already have been reverted, or fixed up with follow-on patches as they were determined to be invalid. Proof that these submissions were almost universally wrong.


In all likelihood, it'll run just fine.

Skimming through the subject lines of the 190 commits being reverted here, every single one is along the lines of "add a refcount/NULL/etc check and conditionally de-allocate (or not de-allocate) memory before an error-path return". I.e., worst case, this will reintroduce some rare memory leak or memory-lifecycle bug.

Also, all of the patches in question are in drivers. So depending on the hardware used, any given system's user likely only has to worry about 2-3, maybe 5 of the patches, not all 190.
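As a hypothetical illustration (not based on any actual umn.edu commit) of the memory-lifecycle subtlety involved: a small, plausible-looking error-path cleanup can combine with the caller's existing cleanup into a double free, which is exactly the kind of thing each of these patches now has to be re-reviewed for.

    /* Hypothetical sketch (not an actual umn.edu commit) of the
     * error-path patch pattern described above. */
    #include <stdlib.h>

    struct device_ctx {
        int *buf;
    };

    static int register_ctx(struct device_ctx *ctx)
    {
        (void)ctx;
        return -1;              /* pretend registration failed */
    }

    static int init_ctx(struct device_ctx *ctx)
    {
        ctx->buf = malloc(64);
        if (!ctx->buf)
            return -1;

        if (register_ctx(ctx) < 0) {
            free(ctx->buf);     /* the small "cleanup" a patch adds */
            return -1;          /* ...but ctx->buf is left dangling */
        }
        return 0;
    }

    int main(void)
    {
        struct device_ctx ctx = { 0 };

        if (init_ctx(&ctx) < 0)
            free(ctx.buf);      /* caller's pre-existing cleanup:
                                 * now a double free */
        return 0;
    }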


The answer is somewhere between "it's 'only' 190 patches" and "Greg posting this patch series doesn't mean it's applied to stable yet"


>Some of them are not able to be reverted as they already have been reverted, or fixed up with follow-on patches as they were determined to be invalid. Proof that these submissions were almost universally wrong.


Greg should invoice the university for the time spent fixing this mess


How does something like this get through IRB - I always felt IRB was over the top - and then they approve something like this?

UMN looks pretty shoddy - the response from the researcher saying these were automated by a tool looks like a potential lie.


They obtained an "IRB-exempt letter" because their IRB found that this was not human research. It's quite likely that the IRB made this finding based on a misrepresentation of the research during that initial stage; once they had an exemption letter the IRB wouldn't be looking any closer.


Not necessarily. And the conflation of IRB-exemption and not human subjects research is not exactly correct.[0]

Each institution, and each IRB is made up of people and a set of policies. One does not have to meaningfully misrepresent things to IRBs for them to be misunderstood. Further, exempt from IRB review and 'not human subjects research' are not actually the same thing. I've run into this problem personally - IRB declines to review the research plan because it does not meet their definition of human subjects research, however the journal will not accept the article without IRB review. Catch-22.

Further, research that involves deception is also considered a perfectly valid form of research in certain fields (e.g., psychology). The IRB may not have responded simply because they see the complaint as invalid. Their mandate is protecting human beings from harm, not protecting random individuals who email them from mere annoyance. Their framework no more covers protecting the Linux kernel from harm than it covers protecting a jet engine from harm (sorry if that sounds callous). Someone not liking a study is not research misconduct, and if the IRB determined within their processes that it isn't even human subjects research, there isn't a lot they can do here.

I suspect that this is just one of those disconnects that happen when people talk across disciplines. No misrepresentation was needed; all that was needed was for someone reviewing this, whose background is medicine and not CS, to not understand the organizational and human processes behind submitting a software 'patch'.

The follow-up behavior... not great... but the start of this could be a series of individually rational actions that combined into something problematic because they were not holistically evaluated in context.

[0] https://oprs.usc.edu/irb-review/types-of-irb-review/


Yes, your comment is the only one across the two threads which understands the nuance of the definition of human subjects research. This work is not "about" human subjects, and even the word "about" is interpreted a certain way in IRB review. If they interpret the research to be about software artifacts, and not human subjects, then the work is not under IRB purview (it can still be determined to be exempt, but that determination is from the IRB and not the PI).

However, given that, my interpretation of the federal common rule is that this work would indeed fit the definition of human subjects research, as it comprises an intervention, and it is about generalizable human procedures, not the software artifact.


Another note... different IRBs treat "not human subjects research" vs. "exempt" differently.

One institution I worked with conflated “exempt” and “not human subjects research” and required the same review of both.

Another institution separated them and would first establish whether something was human subjects research. If it was, they would then review whether it was exempt from IRB review based on certain categories. If they determined it was not human subjects research, they would not review whether it met the exempt criteria, because in their mind they could not make such a determination for research that did not involve human subjects.


I agree with your last paragraph, although I can totally understand how somebody who doesn’t know much about programming or open source would see otherwise.


> Further, research that involves deception is also considered a perfectly valid form of research in certain fields

The type of deception that is allowable in such cases is lying to participants about what it is that is being studied, such as telling people that they are taking a knowledge quiz when you are actually testing their reaction time.

Allowable deception does not include invading the space of people who did not consent to be studied under false pretenses.


> They don't have in their framework protecting the linux kernel from harm any more than they have protecting a jet engine from harm (Sorry if that sounds callous).

It sounds pretty callous if that jet engine gets mounted on a plane that carries humans. In this hypothetical the IRB absolutely should have a hand in stopping research that has a methodology that includes sabotaging a jet engine that could be installed on a passenger airplane.

Waving it off as an inanimate object doesn't feel like it captures the complete problem, given that there are many safety-critical systems that can depend on the inanimate object.


Your extrapolation provides clear context about how this can harm people, which is within an irb purview and likely their ability to understand.

I’m not saying it is okay, I’m simply saying how this could happen.

It requires understanding the connection between an inanimate object and personal harm, which in this case is 1) non-obvious and 2) not even something I necessarily accept within a common rule definition of harm.

Annoyance or inconvenience is not a meaningful human harm within the irb framework

But, fundamentally, the irb did not see this as human research. You and I and the commenters see how that is wrong. That is where their evaluation ended...they did not see human involvement right or wrong.

And irb is part of the discussion of research ethics, it is not the beginning nor the end of doing ethical research.


Here is a case, where one university's (Portland State University) IRB saw that sending satire articles to social science journals "violated ethical guidelines on human-subjects research".

https://en.wikipedia.org/wiki/Peter_Boghossian#Research_misc...


that is actually a useful example for comparison.

* The researcher is a professor in the humanities, which typically does not deal with human subjects research and the (often) vague and confusing boundaries. Often, people from outside the social sciences and medical/biology fields struggle a little bit with IRBs because...things don't seem rational until you understand the history and details. Just like someone from CS.

* The researcher in your example DID NOT seek review by IRB (per my memory of the situation). That was the problem. The kernel bug authors seem to have engaged with their IRB. The difference is not doing it at all vs. a misunderstanding.

* The comments about seeking consent before submitting the fake papers ignore that it is perfectly possible to have done this WITHOUT a priori informed consent. It is perfectly possible for IRBs to review and approve studies involving deception. In those cases, informed consent is not required to collect data.

* Finally, people on IRBs tend to be academics and are highly likely to have some understanding of how a journal works. That would mean they understand the human role in journal publishing. The exact same IRB may well not have anyone with CS experience and may have looked at the kernel study and seen the human role differently than in the journal study.

* And the fact that the IRB in your example looked at 'animal rights' is telling. They were trying to figure out what Peter did. He published papers with data about experiments on animals...that would require IRB review. That the charge was dismissed once they figured out no such experiments occurred says a lot about who was acting in good faith.


My understanding in this case is not that the IRB declined to review the study plan, but that (quoting the study authors) "The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained)." (more information here: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....)

Do you think that the IRB was correct to make the determination they did? It does sound like a bit of a grey area


From the letter:

> The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained). Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning.

So the statement is a bit unclear to me, and I’m hesitant to come to a conclusion because I have not seen what they submitted.

As I read this they are saying:

* we explained the study to irb and asked whether it met their definition of human subjects research - based on our description they said it is not human subjects research

* therefore we did not apply to irb to have the study assessed for the appropriate type of review.

Exempt is a type of irb review, basically it is a low level desk review of a study. It does not mean no one looks at it, it just means the whole irb doesn’t have to discuss it.

I can see both sides of this. Irbs focus on protecting the rights of research participants. The assumption in their mental models is of direct participants. This study ended up having indirect participants. I would argue it is the researcher's job to clarify that and ensure it was reviewed. However, it is almost certain this study would have been approved as exempt.

I think the irb likely did the right thing based on the information provided to them. The harm that HN is identifying does not fall within the normal irb definition of harm anyway...which is direct harm to people. The causal chain HN is spun up about is very real...just not how an irb typically views research.


That's what it seemed like to me as well. Based on their research paper, they did not mention the individuals they interacted with at all.

They also lied in the paper about their methodology - claiming that once their code was accepted, they told the maintainers it should not be included. In reality, several of their bad commits made it into the stable branch.


I don’t think that’s what’s happening here. The research paper you’re talking about was already published, and supposedly only consisted of 3 patches, not the 200 or so being reverted here.

So it’s possible that this situation has nothing to do with that research, and is just another unethical thing that coincidentally comes from the same university. Or it really is a new study by the same people.

Either way, I think we should get the facts straight before the wrong people are attacked.


> In reality, several of their bad commits made it into the stable branch.

Is it known whether these commits were indeed bad? It is certainly worth removing them just in case, but is there any confirmation?



This is a completely separate incident, a year apart from the paper under discussion.


Then just go through the linked mailing list in the OP. It's in the quoted parts. Honestly, the people around here.


I don't think we know if they contain bugs, but from what I gathered reading the mailing list, we do know that they added nothing of value.


My understanding is that it's pretty common for CS departments to get IRB exemption even when human participants are tangentially involved in studies.


It is also quite easy to pull the wool over an IRB's eyes. An IRB is usually staffed with a few people from the medicine, biology, psychology, and maybe (for the good ethical looks) philosophy and theology departments. Usually they aren't really qualified to know what a computer scientist is talking about when describing their research.

And also, given that the stakes are higher e.g. in medicine, and the bar is lower in biology, one often gets a pass: "You don't want to poke anyone with needles, no LSD and no cages? Why are you asking us then?" Or something to that effect. The IRBs are just not used to such "harmless" things not being justified by the research objective.


See my other comment to the GP. Pulling the wool suggests agency and intentionality that aren't necessarily present when you have disciplinary differences like you describe. Simple miscommunication, e.g., using totally normal field terminology that does not translate well, is different.


It is your job as a researcher to make the committee fully understand all the implications of what you are doing. If you fail in that, you failed in your duties. The committee will also tell you this, as well as any ethical guideline. Given that level of responsibility, it isn't easy to ascribe this to negligence on the part of the researchers, intent is far more likely.


No, it is your job as a researcher to make sure you never even bother to submit to the IRB something that might fail review.

Sometimes you might need to make the committee understand before a full review when you are asking where a line is for some tricky part, but you ask about those parts long before you have enough of the study designed to actually put it before the review.

Ethics are a personal responsibility. You should be personally embarrassed if you ever have something fail review, and probably should have your tenure removed as well since if your ethics are so bad as to put before the board something that fails you will also do something even worse without any review.


It is absolutely my job, but I don’t necessarily have actionable information that I created a misunderstanding.

I submit unclear thing

Thing is approved

Thing must have been clear right?


'Not ignorance, but ignorance of ignorance is the death of knowledge.' - Alfred North Whitehead


I've seen from a distance one CS department struggle with IRB to get approval for using Amazon Mechanical Turk to label pictures for computer vision datasets. I believe the resolution was creating a specialized approval process for that family of tasks.


That sounds like a disconnect from reality.


I think it is because many labs in CS departments do very little research involving human subjects (e.g. a machine learning lab or a theory lab), so within those labs there isn't really an expectation that everything goes through IRB. Many CS graduate students likely never have to interact with IRB at all, so they probably don't even know when it is necessary to involve IRB. The rules for what requires IRB involvement are also somewhat open to interpretation. For example, surveys are often exempt depending on what the survey is asking about.


Machine learning automatically being exempt is a huge red flag for me. There are immense repercussions for the world from every comp sci topic. They're just less direct, and often "digital", which seems separate but isn't.


> the response from the researcher saying these were automated by a tool looks like a potential lie.

To be clear, this is unethical research.

But I read the paper, and these patches were probably automatically generated by a tool (or perhaps guided by a tool, and filled in concretely by a human): their analyses boil down to a very simple LLVM pass that just checks for pointer dereferences and inserts calls to functions that are identified as performing frees/deallocations before those dereferences. Page 9 and onwards of the paper[1] explains it in reasonable detail.

[1]: https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
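
To make the bug class concrete, here is a minimal, purely illustrative C sketch of the pattern described above - a free introduced on one path while a later dereference of the same pointer stays reachable. This is my own hypothetical example (names and all), not taken from the paper or from any of the actual patches:

    /* Illustrative only -- not from the UMN patches. A use-after-free of
     * the kind the paper's tool is described as targeting: a free() call
     * is added on an error path, but a later dereference of the same
     * pointer remains reachable. */
    #include <stdlib.h>

    struct widget {
        int value;
    };

    int widget_process(struct widget *w)
    {
        if (w->value < 0) {
            free(w);        /* the innocent-looking "cleanup" a bad patch adds */
            /* the missing early return here is what creates the bug */
        }
        return w->value;    /* use-after-free whenever value < 0 */
    }

In kernel code the freeing call would typically be something like kfree() or a *_put() helper buried in an error path, which is exactly why such patches are easy to overlook in review.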


Thanks for this, very helpful.

Could they have submitted patches to fix the problems based on same tooling or was that not possible (I am not close to kernel development flow)?


> Could they have submitted patches to fix the problems based on same tooling or was that not possible (I am not close to kernel development flow)?

Depends on what you mean: they knew exactly what they were patching, so they could easily have submitted inverse patches. On the other hand, the obverse research problem (patching existing UAFs rather than inserting new ones) is currently unsolved in the general case.


I have a feeling that methods of patching the Linux kernel is a concept most members of IRB boards wouldn't understand at all. It's pretty far outside their wheelhouse.


IRB is useless. They don't use much context, including whether the speediness of IRB approval would save lives. You could make a reasonable argument that IRBs have contributed to millions of preventable deaths at this point; with COV alone it's at least tens of thousands, if not far more.


This is the unfortunate attitude that leads to bad research and reduces trust in science. If you think IRB has contributed to deaths you should make a case, because right now you sound like a blowhard.


By COV do you mean Covid? It sounds like you're alluding to the argument that if they'd only let us test potential vaccines on humans right away then we would have had a vaccine faster. I disagree that that's a foregone conclusion, and you certainly need a strong argument or evidence to make such a claim.


I don't disagree.


It would be fascinating to see the ethics committee exemption. I sense there was none.

Or is this kind of experiment deemed fair game? Red vs blue team kind of thing? Penetration testing.

But if it were me in this situation, I'd ban them for the ethics violation as well. Acting like an evildoer means you might get caught... and punished. I found the email about cease and desist particularly bad behavior. If that student was lying, then the university will have to take real action. Reputation damage and all that. Surely an academic reprimand.

I'm sure there's plenty of drama and context we don't know about.


I didn't read this bit: "The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter"

Um. Ok.


Some people are questioning whether banning the entire university is an appropriate response. It sounds to me like there are systemic institutional issues that they need to address, and perhaps banning them until they can sort those out wouldn't be an entirely unreasonable thing to do.


I think banning them for now is appropriate. It's a shot across their bow to let them know they have done something wrong. Moving forward, if it were me, I'd later re-evaluate such a wide ban because of the collateral damage. But at the same time, there needs to be redress for wrongdoing since they were actually caught. I'd definitely not re-evaluate until an apology and some kind of "we won't waste time like this again" agreement, or at least an agreed-upon understanding, is in place. Whatever shape that needs to be.

As for systematic issues, I'm not sure. But moving forward they'd want to confirm there aren't glaring omissions to let this happen again. Giving them suitable Benefit-of-doubt niceties might imply these are isolated cases. (But both of them?! Perhaps isolated to a small group of academics.)

Messy situation.


The university should be policing its researchers. Banning the whole university reinforces the incentive to do so. Otherwise the fact that a contribution comes from a university researcher would bear no added trust versus a layperson.


The ban was 100% political. Greg wanted to shine the spotlight as negatively as possible on the bad-faith actors so enough pressure can be put on them to be dismissed. I guarantee he'll reverse it the moment these people are let go.


Kernel maintainers are not human. TIL


This is the comment I came for, and perhaps the most damning oversight from the IRB, one that could put the university in a liable position.


How does an IRB usually work? Is it the same group of people reviewing all proposals for the entire university? Or are there subject-matter experts (and hopefully lawyers) tapped to review proposals in their specific domain? Applying “ethics” to a proposal is meaningless without understanding not just how they plan to implement it but how it could be implemented.


I'm guessing it's a committee of people almost operating just via a checklist, questions, and their own general/specialist experience. They aren't necessarily specialists in what is being considered and are just there to provide basic sanity checks. But if some complex issue is not explained well in general terms, I sense that this checking process fails in various ways.

Kind of like:

A: we're going to experiment with humans.

C: are you going to extract fluids?

A: no.

C: (ticks no) are you going to cut into them?

A: no.

C: (ticks no) ...

And so on.

Perhaps a new set of questions might help.

C: is what you are intending going to anger the subjects to the point they will take retribution?

A: yes

C: (ticks yes) how will it anger them?

....

And then expand from there. I'm sure they don't just stay within the checklist like robots.

Then again I'm probably wrong. Its just my imagination. But it could be true.


I'm gonna guess the committee didn't realize the "patch process" was a manual review of each patch. The way it's worded in the paper, you'd think they were testing some sort of automated integration-testing setup or something.


The ethics committee issued a post-hoc exemption after the paper was published.


Wow. That is a flagrant violation of research ethics by everyone involved. UMN needs to halt anything even close to human subjects research until they get their IRB back under control, who knows what else is going on on campus that has not received prior approval. Utter disaster.


Someone made a good case that the IRB may have just been doing their job, according to their guidelines for what is exempt from review & what is "research on human subjects".

Nevertheless it is clear that UMN does not have sufficient controls in place to prevent this kind of unethical behavior. The ban & patch reversions may force the issue.


Individual groups at unis are very independent, little oversight is common.

UMN CS associate department head on this: https://twitter.com/lorenterveen/status/1384965111014641667 (TL;DR: they didn't hear about this before because each group does its thing, leadership doesn't get involved in IRB process, in his opinion IRB failed - situation analogous to cases known to be problematic in other subfields https://twitter.com/lorenterveen/status/1384955467051454466 )


Institutional review boards are notorious for making sure that all of the i's are dotted and the t's are crossed on the myriad of forms they require, but without actually understanding the nature of the research they are approving.


I don't think there have been any recent comments from anyone at UMN. So, regarding the original research (which happened last year), the following clarification was offered by Qiushi Wu and Kangjie Lu, which at least paints their research in a somewhat better light: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

That said, the current incident seems to have gone beyond the limits of that one and is a new one. I just thought it would be fair to include their "side".


From their explanation:

(3). We send the incorrect minor patches to the Linux community through email to seek their feedback.

(4). Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.

------------------------

But this shows a distinct lack of understanding of the problem:

> This is not ok, it is wasting our time, and we will have to report this,

> AGAIN, to your university...

------------------------

You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

1. The voluntary consent of the human subject is absolutely essential.


Holy cow!! I'm a researcher and don't understand how they thought it would be okay to not go through the IRB, or how an IRB would not catch this. The PDF linked by the parent post is quite illustrative. The first few paras seem to downplay the severity of what they did (did not introduce actual bugs into the kernel), but that is not the bloody problem. They experimented on people (maintainers) without consent and wasted their time (maybe with other effects too... e.g. making them wary of future commits from universities)! I'm appalled.


It's not _the_ problem, but it's an actual problem. If you follow the thread, it seems they did manage to get a few approved:

https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...

I agree this whole thing paints a really ugly picture, but it seems to validate the original concerns?


Even if the ones they did get approved were actual security holes (not benign decoys), all that it validates is that no human is infallible. Well, CONGRATULATIONS.


Right. And you would need a larger sample size to determine what % of the time that occurs, on average. But even then, is that useful and valid information? And is it actionable? (And if so, what is the cost of the action, and the opportunity cost of lost fixes in other areas?)


Open source is not watertight if a known committer from a well-known faculty (in this case the University of Minnesota) decides to send buggy patches. However, this was caught relatively quickly. But the behavior even after being caught is reprehensible:

> You, and your group, have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work.
>
> Now you submit a new series of obviously-incorrect patches again, so what am I supposed to think of such a thing?

That they kept doing it even after being caught is beyond comprehension.


They did go to the UMN IRB per their paper and received a human subjects exempt waiver.

Edit: I am not defending the researchers who may have misled the IRB, or the IRB who likely have little understanding of what is actually happening


The irony is that the IRB process failed in the same way that the commit review process did. We're just missing the part where the researchers tell the IRB board they were wrong immediately after submitting their proposal for review.


IRB review: "Looks good!"


Maybe they should conduct a meta-experiment where they submit unethical experiments for IRB review. Immediately when the IRB approves the proposal, they withdraw, pointing out the ways in which it would be unethical.

Meta-meta-experiment: submit the proposal above for IRB review and see what happens.


Absolutely incredible


If you actually read the PDF linked in this thread:

* Is this human research? This is not considered human research. This project studies some issues with the patching process instead of individual behaviors, and we did not collect any personal information. We send the emails to the Linux community and seek community feedback. The study does not blame any maintainers but reveals issues in the process. The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained).


Do IRBs typically have a process by which you can file a complaint from outside the university? Maybe they never thought they would need to even check up on computer science faculty...


> You do not experiment on people without their consent.

Exactly this. Research involving human participants is supposed to have been approved by the University's Institutional Review Board; the kernel developers can complain to it: https://research.umn.edu/units/irb/about-us/contact-us

It would be interesting to see what these researches told the IRB they were doing (if they bothered).

Edited to add: From the link in GP: "The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained)"

Okay so this IRB needs to be educated about this. Probably someone in the kernel team should draft an open letter to them and get everyone to sign it (rather than everyone spamming the IRB contact form)



According to their website[0]:

> IRB exempt was issued

[0]: https://www-users.cs.umn.edu/~kjlu/


These two sentences from the author's response seem contradictory: "The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained). Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning."

I would guess their IRB has a quick sanity-check process to determine whether an experiment involves human subject research. This is actually a good thing if scientists use their ethics and apply good judgement. Now, whoever makes that determination does so based on the initial documentation supplied by the researchers. If so, the researchers should show what they submitted to get the exemption.

Again, the implication is that their university will likely make it harder to get exemptions after this fiasco. This mistake hurts everyone (if indirectly). Although, and this is being quite facetious and macabre, the researchers have inadvertently exposed a bug in their own institution's IRB process!


Combined with their lack of awareness of a possible breach of ethics in their response to Greg, I find it hard to believe they did not mislead the UMN IRB.

I hope they release what they submitted to the IRB to receive that exemption and there are some form of consequences if the mistake is on their part.


A few things about IRB approval.

1. You have to submit for review any work involving human subjects before you start interacting with them. The authors clearly state that they sought retroactive approval after being questioned about their work. That would be a big red flag for my IRB and they wouldn't approve work retroactively.

2. There are multiple levels of IRB approval. The lowest is non regulated, which means that the research falls outside of human subject research. Individual researchers can self-certify work as non regulated or get a non-regulated letter from their IRB.

From there, it goes from exempt to various degrees of regulated. Exempt research means that it is research involving human subjects that is exempt from continued IRB review past the initial approval. That means that IRB has found that their research involves human subjects but falls within one (or more) of the exceptions for continued review.

In order to be exempt, a research project must meet one of the exemptions categories (see here https://hrpp.msu.edu/help/required/exempt-categories.html for a list). The requirements changed in 2018, so what they had to show depends on when they first received their exempt status.

The bottom line is that the research needs to (a) have less than minimal risks for participants and (b) needs to be benign in nature. In my opinion, this research doesn't meet these requirements as there are significant risks to participants to both their professional reputation and future employability for having publicly merged a malicious patch. They also pushed intentionally malicious patches, so I am not sure if the research is benign to begin with.

3. Even if a research project is found exempt from IRB review, participants still need to consent to participate in it and need to be informed of the risks and benefits of the research project. It seems that they didn't consent their participants before their participation in the research project. Consent letters usually use a common template that clearly states the goals for the research project, lists the possible risks and benefits of participating in it, states the name and contact information of the PI, and data retention policies. IRB could approve projects without proactive participant consent but those are automatically "bumped up" to full IRB approval and approvals are given only in very specific circumstances. Plus, once a participant removes their consent to participate in a research project, the research team needs to stop all interactions with them and destroy all data collected from them. It seems that the kernel maintainers did not receive the informed consent materials before starting their involvement with the research project and have expressed their desire not to participate in the research after finding out they were participating in it, so the interaction with them should stop and any data collected from them should be destroyed.

4. My impression is that they got IRB approval on a technicality. That is, their research is on the open source community and its processes rather than the individual people that participate in them. My impression of their paper is that they are very careful in addressing the "Linux community" and they really never talk about their interaction with people in the paper (e.g., there is no data collection section or a description of their interactions on the mailing list). Instead, it's my impression that they present the patches that they submitted as happening "naturally" in the community and that they are describing publicly available interactions. That seems to be a little misleading of what actually happened and their role in producing and submitting the patches.


I’m interested in MSU’s list of exempt categories. Most of them are predicated on the individual subjects not being identifiable. Since this research is being done on a public mailing list that is archived and available for all to read, it is trivial to go through the archive and find the patches they quote in their paper to find out who reviewed them, and their exact responses. Would that disqualify the research from being exempt, even if the researchers themselves do not record that data or present it in their paper?

What if they did a survey of passers–by on a public street, that might be in view of CCTV operated by someone else?


The federal government has updated the rules for exemption in 2018. The MSU link is more of a summary than the actual rules.

The fact that a mailing list is publicly available is what made me worry about the applicability of any sort of exemption. In order for human subject research to be exempt from IRB review, the research needs to be deemed less than minimal risk to participants.

The fact that their experiment happens in public and that anyone can find their patches and individual maintainers' responses (and approval) of them makes me wonder if the participants are at risk of losing professional reputation (in that they approved a patch that was clearly harmful) or even employment (in that their employer might find out about their participation in this study and move them to less senior positions as they clearly cannot properly vet a patch). This might be extreme, but it is still a likely outcome given the overall sentiment of the paper.

All research that poses any harm to participants has to be IRB approved and the researchers have to show that the benefits to participants (and the community at large) surpass the individual risks. I am still not sure what benefits this work has to the OSS community and I am very surprised that this work did not require IRB supervision at all.

As far as work on a public street is concerned, IRB doesn't regulate common activities that happen in public and for which people do not have a reasonable expectation of privacy. But, as soon as you start interacting with them (e.g., intervene in their environment), IRB review is required.

You can read and analyze a publicly available mailing list (and this would even qualify as non human subject research if the data is properly anonymized) without IRB review or at most a deliberation of exempt status but you cannot email the mailing list yourself as a researcher as the act of emailing is an intervention that changes other people's environment, therefore qualifying as human subject research.


Thanks (This thread may now read a bit confusingly as I independently found that and edited my comment above)


In any university I've ever been to, this would be a gross violation of ethics with very unpleasant consequences. Informed consent is crucial when conducting experiments.

If this behaviour is tolerated by the University of Minnesota (and it appears to be so) then I suppose that's another institution on my list of unreliable research.

I do wonder what the legal consequences are. Would knowingly and willfully introducing bad code constitute a form of vandalism?


> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits. Qiushi Wu and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.

from Lu's list of publications at https://www-users.cs.umn.edu/~kjlu/

Seems like a conference presentation at IEEE at minimum?


IEEE S&P is actually one of the top conferences in the field of computer security. It does mention some guidance on ethical consideration.

> If a paper raises significant ethical and/or legal concerns, it might be rejected based on these concerns.

https://www.ieee-security.org/TC/SP2021/cfpapers.html

So if the kernel maintainers report the issue to the S&P PC, the paper could potentially be rejected.


Which shows that IEEE also has a problem with research ethics if they accepted such a paper.


IEEE is a garbage organization. Or at least their India chapter is. 3 out of 5 professors in our university would recommend avoiding any paper published by Indians in IEEE venues. Here in India, publishing trash papers with the help of one's 'influence' is a common occurrence.


Wow, that is basically the top computer security conference.


IANAL. In addition to possibly causing the research paper to be retracted due to the ethical violation, I think there is potentially civil or even criminal liability here. The US law on hacking is known to be quite vague (see Aaron Swartz's case, for example).


> You do not experiment on people without their consent.

Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?

From a common-sense standpoint, it seems to me this is more about medical experiments. Yesterday I put some of my kids' toys away without telling them to see if they'd notice and still play with them. I don't think I need IRB approval.


IRB (as in Institutional Review Board) is a local (as in each research institution has one) regulatory board that ensures that any research conducted by people employed by the institution follows the federal government's common rule for human subject research. Most institutions receiving federal funding for research activities have to show that the funded work follows common rule guidelines for interaction with human subjects.

It is unlikely that a business conducting A/B testing or a parent interacting with their children are receiving federal funds to support it. Therefore, their work is not subject to IRB review.

Instead, if you are a researcher who is funded by federal funds (even if you are doing work on your own children), you have to receive IRB approval for any work involving human interaction before you start conducting it.


> wouldn’t every single A/B test done by a product team be considered unethical?

Potentially yes, actually.

I still think it should be possible to run some A/B tests, but a lot depends on the underlying motivation. The distance between such tests and malicious psychological manipulation can be very, very small.


> it seems to me this is more about medical experiments

Psychology and sociology are both subject to the IRB as well.

Regardless of their department, this feels like a psychology experiment.


This is a huge stretch. It’s more of a technical or operational experiment. They are testing the review process, not the maintainers.


"I was testing how the bank processes having a ton of cash taken out by someone without an account, I wasn't testing the staff or police response, geez!"


> Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?

I would argue that ordinary A/B tests, by their very nature, are not "experiments" in the sense that restriction is intended for, so there is no reason for them to be considered unethical.

The difference between an A/B test and an actual experiment that should require the subjects' consent is that either of the test conditions, A or B, could have been implemented ordinarily as part of business as usual. In other words, neither A nor B by themselves would need a prior justification as to why they were deployed, and if the reasoning behind either of them was to be disclosed to the subjects, they would find them indistinguishable from any other business decision.

Of course, this argument would not apply if the A/B test involved any sort of artificial inconvenience (e.g. mock errors or delays) applied to either of the test conditions. I only mean A/B tests designed to compare features or behaviours which could both legitimately be considered beneficial, but the business is ignorant of which.


> Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?

Assuming this isn't being asked as a rhetorical question, I think that's exactly what turned the now infamous Facebook A/B test into a perceived unethical mass manipulation of human emotions. A lot of folks are now justifiably upset and skeptical of Facebook (and big tech) as a result.

So to answer your question: yes, if that test moves into territory that would feel like manipulation once the subject is aware of it. Maybe especially so because users are conceivably making a /choice/ to use said product and may switch to an alternative (or simply divest) if trust is lost.


It should be for all science done for the sake of science, not just medical work. When I did experiments that just involved people playing an existing video game I still had to get approval from IRB and warn people of all the risks that playing a game is associated with (like RSI, despite the gameplay lasting < 15 minutes).

Researchers at a company could arguably be deemed as engaging in unethical research and barred from contributing to the scientific community due to unethical behavior. Even doing experiments on your kids may be deemed crossing the line.

The question I have is when does it apply. If you research on your own kids but never publish, is it okay? Does the act of attempting to publish results retroactively make an experiment unethical? I'm not certain these things have been worked out because of how rare people try to publish anything that wasn't part of an official experiment.


It does seem rather unethical, but I must admit that I find the topic very interesting. They should definitely have asked for consent before starting with the "attack", but if they did manage to land security vulnerabilities despite the review process it's a very worrying result. And as far as I understand they did manage to do just that?

I think it shows that this type of study might well be needed, it just needs to be done better and with the consent of the maintainers.


“Hey, we are going to submit some patches that contain vulnerabilities. All right?”

If they do so, the maintainers become more vigilant and the experiment fails. But, the key to the experiment is that maintainers are not vigilant as they should be. It’s not an attack to the maintainers though, but to the process.


In penetration testing you are doing the same thing, but you get the go-ahead for someone responsible for the project or organization since they are interested in the results as well.

A red team without approval is just a group of criminals. They surely could have found active projects with centralized leadership they could ask for permission.


I don’t know much about penetration testing so excuse me for the dumb question: are you required to disclose the exact methods that you’re going to use?


Yes. You have agreements about what is fair game and what is off limits. It can be that nothing can be physically altered, what times of day or office locations are OK, if it should only be a test against web services or anything in between.


Do you? You have an agreement with part of the company and work it out with them, but does this routinely include the people who would be actively looking for your intrusion and trying to catch it? Often that is handled by automated systems which are not updated with any special knowledge about the upcoming penetration test, and most of those supporting the application aren't made aware of the details either. The organization is aware, but not all of the people who may be impacted.


Exactly. That's answered higher up in the comment tree you are responding to.


It depends on the organization. Most that I've worked with have said everything is fine except for social engineering, but some want to know every tool you'll be running, and every type of vulnerability you'll try to exploit.


Yes, and a bank branch for example could be very interested in some social engineering to test physical security.

It is very varied. There are a lot of good and enjoyable stories out there on youtube and podcasts for anyone interested.


I tried googling, but there were too many results, haha. Do you have a few that you recommend?


What you do during pentesting is against the law, if you do not discuss this with your client. You're trying to gain access to a computer system that you should have no access to. The only reason this is OK, is that you have prior permission from the client to try these methods. Thus, it is important to discuss the methods used when you are executing a pentest.

With every pentesting engagement I've had, there always were rules of engagement, and what kind of things you are and are not allowed to do. They even depend on what kind of test you are doing. (for example: if you're testing bank software, it matters a lot if you test against their production environment or their testing environment)


Usually the discussion is around the end goals rather than the means. But both are fair game for discussion.


If the attack surface is large enough and the duration of the experiment long enough it'll return to baseline soon enough I think. It's a reasonable enough compromise. After all if the maintainers are not already considering that they might be under attack I'd argue that something is wrong with the system, a zero-day in the kernel would be invaluable indeed.

And well, if the maintainers become more vigilant in the long run it's a win/win in my book.


The maintainers are the process, as they are the ones reviewing it, so it's absolutely attacking the maintainers.


"We're going to, as part of a study, submit various patches to the kernel and observe the mailing list and the behavior of people in response to these patches, in case a patch is to be reverted as part of the study, we immediately inform the maintainer."


Your message would push maintainers to put even more focus on the patches, thus invalidating the experiment.


>Your message would push maintainers to put even more focus on the patches, thus invalidating the experiment.

The Tuskegee Study wouldn't have happened if its participants had taken part voluntarily, and its effects still haunt the scientific community today. The attitude of "science by any means, including by harming other people" is reprehensible and has lasting consequences for the entire scientific community.

However, unlike the Tuskegee Study, it's totally possible to have done this ethically by contacting the leadership of the Linux project and having them announce to maintainers that anonymous researchers may experiment with the contribution process, and allowing them to opt out if they do not consent, and to ensure that harmful commits never reach stable from these researchers.

The researchers chose to instead lie to the Linux project and introduce vulnerabilities to stable trees, and this is why their research is particularly deplorable - their ethical transgressions and possibly lies made to their IRB were not done out of any necessity for empirical integrity, but rather seemingly out of convenience or recklessness.

And now the next group of researchers will have a harder time as they may be banned and every maintainer now more closely monitors academics investigating open source security :)


I don't want to defend what these researchers did, but to equate infecting people with syphilis to wasting a bit of someone's time is disingenuous. Informed consent is important, but only if the magnitude of the intervention is big enough to warrant reasonable concerns.


>to wasting a bit of someones time is disingenuous

This introduced security vulnerabilities to stable branches of the project, the impact of which could have severely affected Linux, its contributors, and its users (such as those who trust their PII data to be managed by Linux servers).

The potential blast radius for their behavior being poorly tracked and not reverted is millions if not billions of devices and people. What if a researcher didn't revert one of these commits before it reached a stable branch and then a release was built? Linux users were lucky enough that Greg was able to revert the changes AFTER they reached stable trees.

There was a clear need of informed consent of *at least* leadership of the project, and to say otherwise is very much in defense of or downplaying the recklessness of their behavior.

I acknowledged that lives are not at play, but that doesn't mean that the only consequence or concern here was wasting the maintainers time, especially when they sought an IRB exemption for "non-human research" when most scientists would consider this very human research.


But it wouldn't let maintainers know what is happening, it only informs them that someone will be submitting some patches, some of which might not be merged. It doesn't push people into vigilance onto a specific detail of the patch and doesn't alert them that there is something specific. If you account for that in your experiment priors, that is entirely fine.


They apparently didn't consider this "human research"

As I understand it, any "experiment" involving other people that weren't explicitly informed of the experiment before hand needs to be a lot more carefully considered than what they did here.


Makes sense considering how open source people are treated.


In this post they say the patches come from a static analyser and they accuse the other person of slander for their criticisms

> I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.

> These patches were sent as part of a new static analyzer that I wrote and it's sensitivity is obviously not great. I sent patches on the hopes to get feedback. We are not experts in the linux kernel and repeatedly making these statements is disgusting to hear.

( https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah... )

How does that fit in with your explanation?


>I sent patches on the hopes to get feedback

They did not say that they were hoping for feedback on their tool when they submitted the patch, they lied about their code doing something it does not.

>How does that fit in with your explanation?

It fits in the narrative of doing hypocritical changes to the project.


But lashing out when confronted after the fact? (I can't figure out how to browse to the messages that contain said purported 'slander' - maybe it is indeed terrible slander). Normally after the show is over one stops with the performance...

edit: oh, ok I guess that post with the accusations was mid-performance? Not inconsistent, so, maybe (I'm still not clear what the timeline is).


From GKH's response, which you linked:

    They obviously were _NOT_ created by a static analysis tool that is of
    any intelligence, as they all are the result of totally different
    patterns, and all of which are obviously not even fixing anything at
    all.  So what am I supposed to think here, other than that you and your
    group are continuing to experiment on the kernel community developers by
    sending such nonsense patches?

    When submitting patches created by a tool, everyone who does so submits
    them with wording like "found by tool XXX, we are not sure if this is
    correct or not, please advise." which is NOT what you did here at all.
    You were not asking for help, you were claiming that these were
    legitimate fixes, which you KNEW to be incorrect.


> (3). We send the incorrect minor patches to the Linux community through email to seek their feedback.

Sounds like they knew exactly what they were doing.


It’s a lie, that’s how it fits.


> You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

> 1. The voluntary consent of the human subject is absolutely essential.

The Nuremberg code is explicitly about medical research, so it doesn't apply here. More generally, I think that the magnitude of the intervention is also relevant, and that an absolutist demand for informed consent in all - including the most trivial - cases is quite silly.

Now, in this specific case I would agree that wasting people's time is an intervention that's big enough to warrant some scrutiny, but the black-and-white way of some people to phrase this really irks me.

PS: I think people in these kinds of debate tend to talk past one another, so let me try to illustrate where I'm coming from with an experiment I came across recently:

To study how the amount of tips waiters get changes in various circumstances, some psychologists conducted an experiment where the waiter would randomly either give the guests some chocolate with the bill, or not (control condition)[0] This is, of course, perfectly innocuous, but an absolutist claim about research ethics ("You do not experiment on people without their consent.") would make research like this impossible without any benefit.

[0] https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1559-1816...


But this is all a lie. If you read the linked thread you will see that they refused to admit to their experiment and even sent a new, differently broken patch.


Yeah, it is a bit disrespectful to the kernel maintainers to do this without gaining their approval ahead of time.


Disrespecting some programmers on the internet is, while not nice, also not a high crime.


There is sometimes an exception for things like interviews when n is only a couple of people. This was clearly unethical, and it's certain that at least some of those involved knew that. It's common knowledge at universities.


I'm confused - how is this an experiment on humans? Which humans? As far as I can tell, this has nothing to do with humans, and everything to do with the open-source review process - and if one thinks that it counts as a human experiment because humans are involved, wouldn't that logic apply equally to pentesting?

For that matter, what's the difference between this and pentesting?


Penetration testing is only ethical when you are hired by the organization you are testing.

Also, IRB review is only for research funded by the federal government. If you’re testing your kid’s math abilities, you’re doing an experiment on humans, and you’re entirely responsible for determining whether this is ethical or not, and without the aid of an IRB as a second opinion.

Even then, successfully getting through the IRB process doesn’t guarantee that your study is ethical, only that it isn’t egregiously unethical. I suspect that if this researcher got IRB approval, then the IRB didn’t realize that these patches could end up in a released kernel. This would adversely affect the users of billions of Linux machines world–wide. Wasting half an hour of a reviewer’s time is not a concern by comparison.


Consent!

Usually when an organization is pen-tested it consented to being pen-tested (likely even requesting it).

Here there was no contact with the Linux Foundation to gain consent for the experiment.


> indicating “looks good”

I wonder how many zero days have been included already, for example by nation state actors...


You could argue that they are doing the maintainers a favor. Bad actors could exploit this, and the researchers are showing that maintainers are not paying enough attention.

If I were on the receiving end, I'd think about checking a patch multiple times before accepting it.


I'm sure that they thought this. But this is a bit like doing unsolicited pentests or breaking the locks on somebody's home at night without their permission. If people didn't ask for it and consent, it is unethical.

And further, pretty much everybody knows that malicious actors - if they tried hard enough - would be able to sneak through hard to find vulns.


> Bad actors could exploit this, and the researchers are showing that maintainers are not paying enough attention.

And this is anything new?

And if I hit you over the head with a hammer while you are not suspecting it, does this prove anything other than that I am a thug? Does it help you? Honestly?


>You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

>1. The voluntary consent of the human subject is absolutely essential.

Does this also apply to scraping people's data?


> You do not experiment on people without their consent.

By this logic eg. resume callback studies aiming to study bias in the workforce would be impossible.


Meh, this means a lot of viral social experiments on Youtube violate the Nuremberg code...


Yes and?

This isn't a "gotcha" - people shouldn't do this.


Yes, and people generally don't seem upset by viral Youtube social experiments. The Nuremberg code may be the status quo and nothing more. No one here is trying to justify the code on its merits, just blindly quoting it as an authority.

Here's another idea: If it's ethical to do it in a non-experimental context, it's also ethical to do it in an experimental context. So if it's OK to walk up to a stranger and ask them a weird question, it's also OK to do it in the context of a Youtube social experiment. Anything other than this is blatantly anti-scientific IMO.

It is IRBs that need reform. They're self-justifying bureaucratic cruft: https://slatestarcodex.com/2017/08/29/my-irb-nightmare/


Nah. They aren't experimenting on people, they are experimenting on organizational processes. A very different thing.


> You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

> 1. The voluntary consent of the human subject is absolutely essential.

Which is rather useless, as for many experiments to work, participants have to either be lied to, or kept in the dark as to the nature of the experiment, so whatever “consent” they give is not informed consent. They simply consent to “participate in an experiment” without being informed as to the qualities thereof so that they truly know what they are signing up for.

Of course, it's quite common in the U.S.A. to perform practice medical checkups on patients who are going under narcosis for an unrelated operation, and they never consented to that, but the hospitals and physicians that partake in that are not sanctioned as it's “tradition”.

Know well that so-called “human rights” have always been, and shall always be, a show of air that lack substance.


> quite common in the U.S.A. to perform practice medical checkups on patients who are going under narcosis for an unrelated operations

Fascinating. Can you provide links?


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7223770/

https://ctexaminer.com/2021/03/20/explicit-consent-for-pelvi...

https://www.forbes.com/sites/paulhsieh/2018/05/14/pelvic-exa...

Most of what one can find also deals only with “intimate parts”; I am quite sceptical that this is the only thing medical students require practice on, and I think it more likely that the media only cares in this case and that it is in fact routine with many more body parts.


Their first suggestion to the process is pure gold: "OSS projects would be suggested to update the code of conduct, something like “By submitting the patch, I agree to not intend to introduce bugs”"

Like somebody picking your locks, and suggesting, 'to stop this one approach would be to post a sign "do not pick"'


The sign is to remind honest people that the lock is important, and we do not appreciate game playing here.


Honest people don’t see a lock and think, “Ok, they don’t want me going in there, but I bet they would appreciate some free pentesting.”


It is ok to put up the sign. But it's not for the person who transgressed to suggest 'why don't you put a sign'.


The fact that they took the feedback last time and decided "lets do more of that" is already a big red flag.


> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits. Qiushi Wu and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.

from https://www-users.cs.umn.edu/~kjlu/

If the original research results in a paper and IEEE conference presentation, why not? There's no professional consequences for this conduct, apparently.


Given that this conference hasn't happened yet, there should still be time for the affected people to report the inappropriate conduct to the organizers and possibly get the paper pulled.


FYI .. many ACM conferences are now asking explicitly if an IRB was required, and if so, was it received. This does not prevent researchers from saying IRB doesn't apply, but perhaps it can be caught during peer review.

Btw .. I posted a few times on the thread, and want to acknowledge that researchers are humans, and humans do make mistakes. Thankfully in this case, the direct consequence was time wasted, and this is a teaching moment for all involved. In my humble opinion, the researchers should acknowledge in stronger terms they screwed up, do a post-mortem on how this happened, and everyone (including the researchers) should move on with their lives.


The same group did the same thing last year (that's what the paper is about - the May 2021 paper obviously got written/submitted last year), and when the preprint got published they were criticized publicly. And now they are doing it again, so it's not just a matter of "acknowledge they screwed up".


Given that current academia attaches a significant stigma to discussing why research failed, I doubt your idea of post-mortems, public or private, will gain any traction.

https://academia.stackexchange.com/questions/732/why-dont-re.... seems to list out reasons why not to do postmortems


There are some venues, e.g. this Asplos workshop: https://nope.pub/


If this is actually presented, someone present should also make the following clear: "As a result of the methods used by the presenters, the entire University of Minnesota system has been banned from the kernel development process and the kernel developers have had to waste time going back and re-evaluating all past submissions from the university system. The kernel team would also like to advise other open-source projects to carefully review all UMN submissions in case these professors have simply moved on to other projects."


I just wanted to highlight that S&P/Oakland is one of the top 3 or 4 security conferences in the security community in academia. This is a prestigious venue lending its credibility to this paper.


I would go even further and say that Oakland is the most prestigious security conference. That this kind of work was accepted is fairly baffling to me, since I'd expect both ethical concerns and also concerns about the "duh" factor.

I'm a little salty because I personally had two papers rejected by Oakland on the primary concern that their conclusions were too obvious already. I'd expect everybody to already believe that it wouldn't be too hard to sneak vulns into OSS patches.


The guy is still putting the blame on the Kernel project for not having a "don't submit bugs" clause.[1]

And insists it was not human research. [1]

How can people like this be professors?

[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....


This does paint their side better, but it also makes me wonder if they're being wrongly accused of this current round of patches. That clarification says that they only submitted 3 patches, and that they used a random email address when doing so (so presumably no @umn.edu).

These ~200 patches from UMN being reverted might have nothing to do with these researchers at all.

Hopefully someone from the university clarifies what's happening soon before the angry mob tries to eat the wrong people.


The study you’re quoting was a previous study by the same research group, from last year.


This feels like the kind of thing that "white hat" hackers have been doing forever. UMN may have introduced useful knowledge into the world in the same way some random hacker is potentially "helping" a company by pointing out that they've left a security hole exposed in their system.

With that said, kernel developers and companies with servers on the internet are busy doing work that's important to them. This sort of thing is always an unwelcome distraction.

And, if my neighbor walks in my door at 3 a.m. to let me know I left it unlocked, I'm going to treat them the same way UMN is getting treated in this situation. Or worse.


Your analogy doesn't work. A true "white hat" hacker would hack a system to expose a security vulnerability, then immediately inform the owners of the system, all without using their unintended system access for anything malicious. In this case, the "researchers" submitted bogus patches, got them accepted and merged, then said nothing, and pushed back against accusations that they've been malicious, all for personal gain.

EDIT: Also, even if you do no harm and immediately inform your victim, this sort of stuff might rather be categorized as grey-hat. Maybe a "true" white-hat would only hack a system with explicit consent from the owner. These terms are fuzzy. But my point is, attacking a system for personal gain without notifying your victim afterwards and leaving behind malicious code is certainly not white-hat by any definition.


That's gray-hat, a white-hat wouldn't have touched the system without permission from the owners in the first place.


Haha, I just realized that and added an edit right as you commented.


You make a fair point. I'm just saying that, while it might ultimately be interesting and useful to someone or even lots of someones, it remains a crappy thing to do, and the consequences UMN is facing as a result are predictable and make perfect sense to me, a guy who has had to rebuild a few servers and databases over the years because of intrusions, a couple of which came with messages about how we should consult with the intruder who had less-than-helpfully found some security issue for us.


Hacking on software is one thing. Running experiments on people is something completely different.

In order to do this ethically, all that's needed is respect towards our fellow human beings. This means informing them about the nature of the research, the benefits of the collected data, the risks involved for test subjects as well as asking for their consent and permission to be researched on. Once researchers demonstrate this respect, they're likely to find that a surprising number of people will allow them to perform their research.

We all hate it when big tech tracks our every move and draws all kinds of profitable conclusions based on that data at our expense. We hate it so much we deploy active countermeasures against it. It's fundamentally the same issue.


A modification of your metaphor would have a reputable institution in your life enter your apartment on the strength of that institution's credibility. It is not surprising when that institution then has its credibility downranked.


The problem here is really that they're wasting the maintainers' time without their approval. Any ethics board would require prior consent for this. It wouldn't even be hard to do.


> The problem here is really that they're wasting the maintainers' time without their approval.

Not only that, but they are also experimenting on a community of people against its interest, which could also be harmful by creating mistrust. Trust is a big issue; without it, it is almost impossible for people to work together meaningfully.


Yeah, this actually seems more like sociological research, except that since it's in the comp-sci department the investigators don't seem to be trained in the acceptable (and legal) standards for conducting such research on human subjects. You definitely need prior consent when doing this sort of thing. Ideally this would be escalated to a research ethics committee at UMN, because these researchers need to be trained in acceptable practices when dealing with human subjects. So to me it makes sense that the subjects “opted out” and escalated to the university.


Already cited in another comment:

> We send the emails to the Linux community and seek their feedback. The experiment is not to blame any maintainers but to reveal issues in the process. The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter. The experiment will not collect any personal data, individual behaviors, or personal opinions. It is limited to studying the patching process OSS communities follow, instead of individuals.

So they did think of that. Either they misconstrued their research or the IRB messed up. Either way, they can now see for themselves exactly how human a pissed off maintainer is.


How is this not experimentation on humans? "Can we trick this human?" is the entire experiment.


Besides that, if their "research" patch gets into a release, it could potentially put thousands or millions of users at risk.


1) They identified vulnerabilities with a process.

2) They contributed the correct code after showing the maintainer the security vulnerability they missed.

3) Getting the consent of the people behind the process would invalidate the results.


Go hack a random organization without a vulnerability disclosure program in place and see how much goodwill you have. There is a very established best practice in how to do responsible disclosure and this is far from it.


Also by and large reputation is a good first step in a security process.

While any USB stick might have malware on it if it's ever been out of your sight, that one you found in the parking lot is a much bigger problem.


Propose a way to test this without invalidating the results.


1) Contact a single maintainer and explore feasibility of the study

2) Create a group of maintainers who know the experiment is going to happen, but leave a certain portion of the org out of it

3) Orchestrate it so that someone outside of the knowledge group approves one or more of these patches

4) Interfere before any further damage is done

Besides, are you arguing that ends justify the means if the intent behind the research is valid?


Perhaps I'm missing something obvious, but what's the point of all this subterfuge in the first place? Couldn't they just look at the history of security vulnerabilities in the kernel, and analyze how long it took for them to be detected? What does it matter whether the contributor knew ahead of time that they were submitting insecure code?

It seems equivalent to vandalising Wikipedia to see how long it takes for someone to repair the damage you caused. There's no point doing this; you can just search Wikipedia's edits for corrections and start your analysis from there.


> What does it matter whether the contributor knew ahead of time that they were submitting insecure code?

It's a specific threat model they were exploring: a malicious actor introducing vulnerability on purpose.

> Couldn't they just look at the history of security vulnerabilities in the kernel, and analyze how long it took for them to be detected?

Perhaps they could. I guess it'd involve much more work, and could've yielded zero results - after all, I don't think there are any documented examples where a vulnerability was proven to have been introduced on purpose.

> what's the point of all this subterfuge in the first place?

Control over the experimental setup, which is important for validity of research. Notice how most research involves gathering up fresh subjects and controls - scientists don't chase around the world looking for people or objects that, by chance, already did the things they're testing for. They want fresh subjects to better account for possible confounders, and hopefully make the experiment reproducible.

(Similarly, when chasing software bugs, you could analyze old crash dumps all day to try and identify a bug - and you may start with that - but you always want to eventually reproduce the bug yourself. Ultimately, "I can and did that" is always better than "looking at past data, I guess it could happen".)

> It's seems equivalent to vandalising Wikipedia to see how long it takes for someone to repair the damage you caused.

Honestly, I wouldn't object to that experiment either. It wouldn't do much harm (little additional vandalism doesn't matter on the margin, the base rate is already absurd), and could yield some social good. Part of the reason to have public research institutions is to allow researchers to do things that would be considered bad if done by random individual.

Also note that both Wikipedia and Linux kernel are essentially infrastructure now. Running research like this against them makes sense, where running the same research against a random small site / OSS project wouldn't.


> It's a specific threat model they were exploring: a malicious actor introducing vulnerability on purpose.

But does that matter? We can imagine that the error-prone developer who submitted the buggy patch just had a different mindset. Nothing about the patch changes. In fact, a malicious actor is explicitly trying to act like an error-prone developer and would (if skilled) be indistinguishable from one. So we'd expect the maintainer response to be the same.


> I guess it'd involve much more work, and could've yielded zero results - after all, I don't think there are any documented examples when a vulnerability was proven to have been introduced on purpose.

In line with UncleMeat's comment, I'm not convinced it's of any consequence that the security flaw was introduced deliberately, rather than by accident.

> scientists don't chase around the world looking for people or objects that, by chance, already did the things they're testing for

That doesn't sound like a fair description of what's happening here.

There are two things at play. Firstly, an analysis of the survival function [0] associated with security vulnerabilities in the kernel. Secondly, the ability of malicious developers to deliberately introduce new vulnerabilities. (The technical specifics detailed in the paper are not relevant to our discussion.)

I'm not convinced that this unethical study demonstrates anything of interest on either point. We already know that security vulnerabilities make their way into the kernel. We already know that malicious actors can write code with intentional vulnerabilities, and that it's possible to conceal these vulnerabilities quite effectively.

> Honestly, I wouldn't object to that experiment either. It wouldn't do much harm (little additional vandalism doesn't matter on the margin, the base rate is already absurd), and could yield some social good.

That's like saying it's OK to deface library books, provided it's a large library and other people are also defacing them.

Also, it would not yield a social good. As I already said, it's possible to study Wikipedia's ability to repair vandalism, without committing vandalism. This isn't hypothetical, it's something various researchers have done. [0][1]

> Part of the reason to have public research institutions is to allow researchers to do things that would be considered bad if done by random individual.

It isn't. Universities have ethics boards. They are held to a higher ethical standard, not a lower one.

> Running research like this against them makes sense

No one is contesting that Wikipedia is worthy of study.

[0] https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2...

[1] https://en.wikipedia.org/wiki/Wikipedia:Counter-Vandalism_Un...


It potentially has long-term negative impact on the experimental subjects involved and has no research benefit. The researchers should be removed from the university, and the university itself should be sued and lose enough money that it acts more responsibly in the future. It's a very slippery slope from casual IRB waivers to Tuskegee experiments.


Ah, but you're missing the fact that discovered vulnerabilities are now trophies in the security industry. This is potentially gold on your CV.


Of note here: Wikipedia has a specific policy prohibiting this sort of experimentation. https://en.wikipedia.org/w/index.php?title=Wikipedia:NOTLAB


> 3) Orchestrate it so that someone outside of the knowledge group approves one or more of these patches

Isn't this part still experimenting on people without their consent? Why does one group of maintainers get to decide that you can experiment on another group?


It is, but that is how security testing is generally done (in the commercial world). On its application to research and ethics, I'm not much of an authority.


In general you try to obtain consent from their boss, so that if the people you pentested on complain you can point to their boss and say "Hey they agreed to it" and that will be the end of the story. In this case it's not clear who the "boss" is but something like the Linux Foundation would be a good start.


It depends.

Does creating a vaccine justify the death of some lab animals? Probably.

Does creating supermen justify mutilating people physically and psychologically without their consent? Hell no.

You can’t just ignore the context.


> 1) Contact a single maintainer and explore feasibility of the study

That has the risk that the contacted maintainer is later accused of collaborating with saboteurs or that they consult others. Either very awful or possibly invalidates results.

> 2) Create a group of maintainers who know the experiment is going to happen, but leave a certain portion of the org out of it

Assuming the leadership agrees and won't break confidentiality, which they might if the results could make them look bad. Results would be untrustworthy or potentially increase complacency.

> 4) Interfere before any further damage is done

That was done, was it not?

> Besides, are you arguing that ends justify the means if the intent behind the research is valid?

Linux users are lucky they got off this easy.


> That was done, was it not?

The allegation being made on the mailing list is that some incorrect patches of theirs made it into git and even the stable trees. As there is not presently an enumeration of them, or which ones are alleged to be incorrect, I cannot state whether this is true.

But that's the claim.

edit: And looking at [1], they have a bunch of relatively tiny patches to a lot of subsystems, so depending on how narrowly gregkh means "rip it all out", this may be a big diff.

edit 2: On rereading [2], I may have been incorrectly conflating the assertion about "patches containing deliberate bugs" with "patches that have been committed". Though if they're ripping everything out anyway, it appears they aren't drawing a distinction either...

[1] - https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...

[2] - https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...


Too late for the edit deadline, but [1] is a claim of an example patch that made it to stable with a deliberate bug.

[1] - https://lore.kernel.org/linux-nfs/YIAta3cRl8mk%2FRkH@unreal/


In every commercial pentest I have been in, you have 1-2 usually senior employees on the blue team in the know. Their job is to stop employees from going too far on defense, as well as to stop the pentesters from going too far. The rest of the team stays in the dark to test their response and observation.

In this case, in my opinion, a small set of maintainers and Linus as "management" would have to be in the know to e.g. stop a merge of such a patch once it was accepted by someone in the dark.


There doesn't have to be a way.

Kernel maintainers are volunteering their time and effort to make Linux better, not to be entertaining test subjects for the researchers.

Even if there is no ethical violation, they are justified to be annoyed at having their time wasted, and taking measures to discourage and prevent such malicious behaviour in the future.


> There doesn't have to be a way.

Given the importance of the Linux kernel, there has to be a way to make contributions safer. Some people even compare it to the "water supply" and others bring in "national security".

> they are justified to be annoyed at having their time wasted, and taking measures to discourage and prevent such malicious behaviour in the future.

"Oh no, think of the effort we have to spend at defending a critical piece of software!"


If you can’t make an experiment without violating ethical standards, you simply don’t do it, you can’t use this as an excuse to violate ethical standards.


Misplaced trust was broken, that's it. Linux users are incredibly lucky this was a research group and not an APT.


1. Get permission.

2. Submit patches from a cover identity.


> 3) Getting the consent of the people behind the process would invalidate the results.

This has not been a valid excuse since the 1950s. Scientists are not allowed to ignore basic ethics because they want to discover something. Deliberately introducing bugs into any open source project is plainly unethical; doing so in the Linux kernel is borderline malicious.


We should ban A/B testing then. Google didn’t tell me they were using me to understand which link color is more profitable for them.

There are experiments and experiments. Apart from the fact that they provided the fix right away, they didn’t do anyone harm.

And, by the way, it’s their job. Maintainers must approve patches only after ensuring that the patch is fine. It’s okay to make mistakes, but don’t tell me “you’re wasting my time” after I showed you that maybe there’s something wrong with the process. If anything, you should thank me and review the process.

If your excuse is “you knew the patch was vulnerable”, then how are you going to defend the project from bad actors?


> they didn’t do anyone harm.

Several of the patches are claimed to have landed in stable. Also, distributions and others (like the grsecurity people) pick up lkml patches that are not included in stable but might have security benefits. So even just publishing such a patch is harmful. Also, it seems fixes were only provided to the maintainers privately, and unsuccessfully - or not at all.

> If your excuse is “you knew the patch was vulnerable”, then how are you going to defend the project from bad actors?

Exactly the same way as without that "research".

If you try to pry open my car door, I'll drag you to the next police station. "I'm just researching the security of car doors" won't help you.


Actually, I think participants in an A/B test should be informed of it.

I think people should be informed when market research is being done on them.

For situations where they are already invested in the situation, it should be optional.

For other situations, such as new customer acquisition, the person would have the option of simply leaving the site to avoid it.

But either way, they should be informed.


> We should ban A/B testing then. Google didn’t tell me they were using me to understand which link color is more profitable for them.

Yes please.


No bugs were introduced and they didn't intend to introduce any bugs. In fact, they have resolved over 1,000 bugs in the Linux kernel.

>> https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc.... "We did not introduce or intend to introduce any bug or vulnerability in the Linux kernel. All the bug-introducing patches stayed only in the email exchanges, without being adopted or merged into any Linux branch, which was explicitly confirmed by maintainers. Therefore, the bug-introducing patches in the email did not even become a Git commit in any Linux branch. None of the Linux users would be affected. The following shows the specific procedure of the experiment"


And now all their patches are getting reverted because nobody trusts them to have been made in good faith, so their list of resolved bugs goes to 0.


So instead of fixing the issue they found, namely that backdoors can be introduced into their code, they are going to roll back a thousand-plus other bug fixes.

That's more of a story than what the researchers have done...


What would you do, if you had a group of patch authors whose contributions you no longer trusted, other than setting aside the time for someone trusted to audit all 390 commits they've made since 2014?



It's indeed unfortunate what a few bruised egos will result in.


I don't think it's necessarily a bruised ego here - I think what upset him is that the paper was published a few months ago and yet, based on this patch, the author seems to still be attempting to submit deeply flawed patches to LKML, and complaining when people don't trust them to be innocent mistakes for some reason.


You're right, and it is depressing how negative the reaction has been here. This work is the technical equivalent of "Sokalling", and it is a good and necessary thing.

The thing that people should be upset about is that such an important open source project so easily accepts patches which introduce security vulnerabilities. Forget the researchers for a moment - if it is this easy, you can be certain that malicious actors are also doing it. The only difference is that they are not then disclosing that they have done so!

The Linux maintainers should be grateful that researchers are doing this, and researchers should be doing it to every significant open source project.


> The thing that people should be upset about is that such an important open source project so easily accepts patches which introduce security vulnerabilities

They were trusting of contributors to not be malicious, and in particular, were trusting of a university to not be wholly malicious.

Sure, there is a possible threat model where they would need to be suspicious of entire universities.

But in general, human projects will operate under some level of basic trust, with some sort of means to establish that trust. To be able to actually get anything done; you cannot perfectly formally review everything with finite human resources. I don't see where they went wrong with any of that here.

There's also the very simple fact that responding to an incident is also a part of the security process, and broadly banning a group whole-cloth will be more secure than not. So both they and you are getting what you want out of it - more of the process to research, and more security.

If the changes didn't make it out to production systems, then it seems like the process worked? Even if some of it was due to admissions that would not happen with truly malicious actors, so too were the patches accepted because the actors were reasonably trusted.


The Linux project absolutely cannot trust contributors to not be malicious. If they are doing that, then this work has successfully exposed a risk.


Then they would not be accepting any patches from any contributors, as the only truly safe option when dealing with a known malicious actor (whether explicit, admitted, or assumed) is to disregard their work entirely. You cannot know the scope of a malicious plot in advance, and any benign-looking piece of work can prove fatal in some unknown later totality.

As with all human projects, some level and balance of trust and security is needed to get work done. And the gradient shifts as downstream forks have higher security demands / less trust, and (in the case of nation states) more resources and time to both move slower, validate changes and establish and verify trust.


Getting specific consent from the project leads is entirely doable, and would have avoided most of the concerns.


It really wouldn't have, and it would've meant the patches didn't pass through all levels of review.


How do you think social engineering audits work? You first coordinate with the top layer (in private, of course) and only after getting their agreement do you start your tests. This isn't any different.


> You first coordinate with the top layer (in private, of course) and only after getting their agreement do you start your tests.

The highest level is what had to be tested as well, or do you imagine only consulting Linus? Do you think that wouldn't've gotten him lynched?


I hope USENIX et al ban this student / professor / school / university associated with this work from submitting anything to any of their conferences for 10 years.

This was his clarification https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

...in which they have the nerve to say that this is not considered "human research". It most definitely is, given that their attack vector is the same channel many people would use to submit legitimate requests to get involved.

If anything, this "research" highlights the notion that coding is but a small proportion of programming, and that delivering a product, feature, or bugfix from start to finish is a much bigger job than many people like to admit to themselves or others.


Reading this email exchange, I worry about the state of our education system, including computer science departments. Instead of making coherent arguments, this PhD student speaks about "preconceived biases". I loved Greg's response. The spirit of Linus lives within the Kernel! These UMN people should be nowhere near the kernel. I guess they got the answer to their research on what would happen if you keep submitting stealth malicious patches to the kernel: you will get found out and banned. Made my day.


The tone of Pakki's reply made me cringe:

> Attitude that is not only unwelcome but also intimidating to newbies and non experts

Between that and the "Clarifications" document suggesting they handle it by updating their Code of Conduct, they're clearly trying really hard to frame all of this as some kind of toxic culture in kernel development. That's a hideous defense. It's like a bad MMA fight where one fighter refuses to stand up because he insists on keeping it a ground fight. Maybe it works sometimes, but it's shameful.


The research yielded unsurprising results: stealthy patches without a proper smokescreen to provide a veil of legitimacy will cause the purveyor of the patches to become blacklisted... DUH!


I still don't get the point of this "research".

You're just testing the review ability of particular Linux kernel maintainers at a particular point in time. How does that generalize to the extent needed for it to be valid research on open source software development in general?

You would need to run this "experiment" hundreds or thousands of times across most major open source projects.


>the point of this "research".

I think it's mostly "finger pointing": you only need one exception to break a rule. If the rule is "open source is more secure than closed source because community/auditing/etc.", then with a paper demonstrating that this rule is not always true, you can write a nice Medium article for your closed-source product, quoting said paper, claiming that your closed-source product is more secure than the open competitor.


I don't think this is correct. The authors have contributed a large number of legitimate bugfixes to the kernel. I think they really did believe that process changes can make the kernel safer and that by doing this research they can encourage that change and make the community better.

They were grossly wrong, of course. The work is extremely unethical. But I don't believe that their other actions are consistent with a "we hate OSS and want to prove it is bad" ethos.


The Linux kernel is one of the largest open-source projects in existence, so my guess is that they were aiming to show that "because the Linux kernel review process doesn't protect against these attacks, most open-source project will also be vulnerable" - "the best can't stop it, so neither will the rest".


But we have always known that someone with sufficient cleverness may be able to slip vulnerabilities past reviewers of whatever project.

Exactly how clever? That varies from reviewer to reviewer.

There will be large projects, with many people that review the code, which will not catch sufficiently clever vulnerabilities. There will be small projects with a single maintainer that will catch just about anything.

There is a spectrum. Without conducting a wide-scale (and unethical) survey with a carefully calibrated scale of cleverness for vulnerabilities, I don't see how this is useful research.


> But we have always known that someone with sufficient cleverness may be able to slip vulnerabilities past reviewers of whatever project.

...which is why the interestingness of this project depends on how clever they were - which I'm not able to evaluate, but which someone would need to before they could possibly invalidate the idea.

> (and unethical)

How is security research unethical, exactly?


>How is security research unethical, exactly?

Those being researched must consent.

The goal should be to further society. This research attempted to sabotage infrastructure.

Research should avoid unnecessary suffering. Kernel maintainers are overworked volunteers.

They must be allowed to discontinue the research if the stress becomes more than they can bear.

Read more on University of Minnesota's website and look at page 4. https://www.ahc.umn.edu/img/assets/26104/Research_Ethics.pdf


Research without ethics is research without value.

Unbelievable that this could have passed ethics review, so I'd bet it was never reviewed. Big black eye for the University of Minnesota. Imagine if you are another doctoral student in CS/EE and this tool has ruined your ability to participate in Linux.


> Research without ethics is research without value.

Didn't we learn a lot from Nazi/Japanese experiments from WW2?


From my understanding - no, actually. We learnt a bit, on the very extreme scale of things, but most of the "experiments" were not conducted in any kind of way that would yield usable data.


Yes and no. It’s my understanding that the Germans pioneered the field of implanted medical prostheses (like titanium pins to stabilize broken bones). A lot of that research was done on prisoners, and they were even kind enough to extend the benefits of the medical treatments that they developed to prisoners of war (no sarcasm intended).


We did. Often we wish they could have got more decimal points in a measurement, or had known how to check for some factor. Despite all the gains and potential breakthroughs lost, nobody is willing to repeat them or anything like them. I know just enough people who were medically given 2 weeks to live but were still around 10 years later that I can't think of any situation where I'd make an exception.

Though how much "a lot" really is is also open to question. Much of what we learned isn't that useful to real-world problems. However, some of it has been important.


It doesn't really sound like that's the case: https://en.wikipedia.org/wiki/Nazi_human_experimentation#Aft...

Are there more impactful benefits that aren't listed on the Wikipedia page? It sounds like the main research contribution is philosophical discussions over whether or not it would be okay to use the data if someone had a good reason for it.


Hypothermia is the only one I'm aware of. There is some useful data there that has helped treatment.


Learn how to torture? Maybe. Learn real knowledge? No. Most of that information is not just sick but also impractical.

The goal of the military is to protect or conquer. The goal of science is to find the truth, and the goal of engineering is to offer solutions. Any of the true leaders in these fields knows there are more efficient means/systems to achieve those goals, even in the WW2 era.


Experiments producing lots of data doesn't necessarily mean they were useful. If the experiment was run improperly the data is untrustworthy, and if the experiment was designed to achieve things that aren't useful they may not have controlled for the right variables.

And ultimately, we know what their priorities were and what kind of worldview they were operating under, so the odds are bad that any given experiment they ran would have been rigorous enough to produce results that could be reproduced in other studies and applied elsewhere. I'm not personally aware of any major breakthroughs that would have been impossible without the "aid" of eugenicist war criminals, though it's possible there's some major example I'm missing.

We certainly did bring over lots of German scientists to work on nukes and rockets, so your question is not entirely off-base - but I suspect almost everyone involved in those choices would argue that rocketry research isn't unethical.


By and large, no. The Nazi experiments were based on faulty race science and were indistinguishable from brutal torture, and what remains is either useless or impossible to reproduce for ethical reasons.


I'm a total neophyte when it comes to the Linux kernel development process, but couldn't they just, y'know, use a Gmail address or something? Couldn't the original researchers have done the same?


Yes, they could. This is actually addressed in the original email thread:

> But they can't then use that type of "hiding" to get away with claiming it was done for a University research project as that's even more unethical than what they are doing now.


I was also thinking that commits from e-mails ending in ".edu" are probably more likely to be assumed to be good-faith; they are from real students/professors/researchers at real universities using their real identities. There's probably going to be way more scrutiny on commits from some random gmail address.


Exactly - the kernel maintainers already "prejudge" submissions and part of that judgement is evaluating the "story" behind why a submission is forthcoming. A submission from Linus, ok, he's employed to work on the kernel, but even that would be suspect if it's an area he never touches or appears to be going around the established maintainer.

And one of the most reasonable "stories" behind a patch is "I'm working at a university and found a bug", probably right behind "I'm working at a company and we found a bug".

Banning U of M won't solve everything, but it is dropping a source of known bad patches.


Some CS labs at UMN take ethics very seriously. Their UXR lab for example.

Other CS labs at UMN, well... apparently not so much.


Ethics are highly subjective on the margins. In this case they completely missed this issue. However the opposite is more often the case.

A good example is challenge testing Covid vaccines. This was widely deemed to be unethical despite large numbers of volunteers. Perhaps a million lives could have been saved if we had vaccines a few months sooner.

Research without ethics (as currently practiced) can have value.


I can't agree that widespread challenge testing would have been ethical. It's a larger topic than HN can accommodate, but some factors I consider important: (1) NPIs are effective at reducing transmission, (2) the consequences of an outcome with side effects could include global and long-lived anti-vax sentiment -- COVID19 is unlikely to be our last pandemic.

Issue (2) arose with the EU response to rare AZ/J+J side effects, where I believe the EU is more deserving of criticism. They will undoubtedly cause more deaths in their own populations and throughout the world than would occur from clotting complications, but no one will hold them to account. But they weighed their equities as more important than global benefit.


To save some folks an acronym lookup: NPI stands for Non-Pharmaceutical Intervention, and refers to things like wearing a mask, washing hands, physical isolation, etc.


As you agree that challenge testing is unethical, and it clearly would have had value (saving lots of lives), are you conceding that unethical research can have value?


The opposite -- I believe that the EU authorities acted unethically in covering their own asses, and this devalues their past & future statements.

Russia and China effectively approved vaccines with only phase II trial data. China didn't even have active cases (officially) and they vaccinated tens of millions of people with it, their own, Africans and Brazilians etc. Imagine if there had been side effects, it could have been a global disaster. (Recall the contaminated Cutter polio vax that had live polio in it. Or the SV40 problems. The UQ COVID candidate caused false positives for HIV. Various SARS-Cov1 vaccines caused bad/lethal side effects, see e.g. https://www.nature.com/articles/s41579-020-00462-y )

Being both too hasty (China) and too blame-averse (EU) seem to be ethical failings.


Life-support machinery was developed with methods like cutting off dogs' heads, plugging them in, and seeing how long they showed signs of life.


If only we could have taught dogs to review kernel patches... we would probably all be out of work.


Individuals should still be able to contribute, just not under the name University of Minnesota.


Plonk is a Usenet jargon term for adding a particular poster to one's kill file so that poster's future postings are completely ignored.

Link: https://en.wikipedia.org/wiki/Plonk_(Usenet)


Well, they had it coming. They abused the community's trust once in order to gain data for their research, and now it's understandable GKH has very little regard for them. Any action has consequences.


Uhhh, I just read the paper and stopped reading when I got to what I pasted below. You attempt to introduce severe security bugs into the kernel and this is your solution?

To mitigate the risks, we make several suggestions. First, OSS projects would be suggested to update the code of conduct by adding a code like "By submitting the patch, I agree to not intend to introduce bugs."


that'll solve it!



Here's the research article linked there, for those interested: https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...


Please correct me if I'm wrong. So he (a PhD student) was introducing bad code as part of research? And then published a paper to show how he successfully introduced bad code.


It seems that Aditya Pakki was the one introducing shady code to the kernel and was caught. He is listed as an author on several other very similar papers (https://scholar.google.com/citations?user=O9WEZuoAAAAJ&hl=en) with authors Wu and Lu about automatically detecting "missing-check bugs" and other security issues which they purport to want to fix, yet this research paper explicitly discusses submitting "fixes" that have latent security bugs in them.
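
For anyone unfamiliar with the term, a "missing-check bug" is simply code that forgets to validate a return value or input before using it. Here is a minimal illustrative sketch in plain C (hypothetical names, not taken from any of the cited papers or patches):

    #include <stdlib.h>
    #include <string.h>

    struct msg { char body[64]; };

    /* Missing-check bug: the malloc() return value is never tested,
     * so on allocation failure memcpy() writes through a NULL pointer. */
    struct msg *msg_dup(const char *src, size_t len)
    {
        struct msg *m = malloc(sizeof(*m));
        /* A correct version would add: if (!m) return NULL; */
        memcpy(m->body, src, len < sizeof(m->body) ? len : sizeof(m->body));
        return m;
    }

The irony noted above is that the same group published tooling to find this class of bug while also describing how to submit "fixes" that hide new ones.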


Merging them now...


Though I disagree with the research in general, if you did want to research "hypocrite commits" in an actual OSS setting, there isn't really any other way to do it other than actually introducing bugs per their proposal.

That being said, I think it would've made more sense for them to have created some dummy complex project for a class and have say 80% of the class introduce "good code", 10% of the class review all code and 10% of the class introduce these "hypocrite" commits. That way you could do similar research without having to potentially break legit code in use.

I say this since the crux of what they're trying to discover is:

1. In OSS anyone can commit.

2. Though people are incentivized to reject bad code, complexities of modern projects make 100% rejection of bad code unlikely, if not impossible.

3. Malicious actors can take advantage of (1) and (2) to introduce code that does both good and bad things such that an objective of theirs is met (presumably putting in a back-door); a rough sketch of what such a patch might look like follows below.
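
To make (3) concrete, here's a minimal, purely illustrative sketch of the "hypocrite commit" pattern in plain C. All names are hypothetical and this is not taken from any actual UMN patch; it just shows how a change that reads like a leak fix can introduce a use-after-free:

    #include <stdlib.h>

    struct conn { int fd; };

    /* Pretend registration always fails, to exercise the error path. */
    static int register_conn(struct conn *c) { (void)c; return -1; }

    struct conn *conn_open(void)
    {
        struct conn *c = malloc(sizeof(*c));
        if (!c)
            return NULL;
        if (register_conn(c) < 0) {
            free(c);   /* the "fix": plug a memory leak on the error path   */
                       /* BUG: missing "return NULL;", so c is now dangling */
        }
        return c;      /* on error, the caller receives freed memory        */
    }

    int main(void)
    {
        struct conn *c = conn_open();
        if (c)
            c->fd = 3; /* use-after-free when register_conn() failed */
        free(c);       /* and a double free on top of it             */
        return 0;
    }

In isolation the added free() looks like responsible cleanup, which is exactly why such patches are hard to catch in review.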


They could have contacted a core maintainer and explained to them what they planned to do. That core maintainer could have then spoken to other senior core maintainers in confidence (including Greg and Linus) to decide if this type of pentest was in the best interest of Linux and the OSS community at large. That decision would need to weigh the possibility of testing and hardening Linux's security review process against possible reputational damage as well as alienating contributors who might quite rightly feel they've been publicly duped.

If leadership was on board, they could have then proceeded with the test under the supervision of those core maintainers who ensure introduced security holes don't find their way into stable. The insiders themselves would abstain from reviewing those patches to see if review by others catches them.

If leadership was not on board, they should have respected the wishes of the Linux team and found another high-visibility open-source project who is more amenable to the project. There are lots of big open-source projects to choose from, the kernel simply happens to be high-profile.


Exactly. A test could have been conducted with the knowledge of Linus and Greg K-H, but not of the other maintainers. If the proposed patch made it all the way through, it could be blocked at the last stage from making it into an actual release or release candidate. But it should be up to the people in charge of the project whether they want to be experimented on.


I don't disagree, but the point of the research is more to point out a flaw in how OSS supposedly is conducted, not to actually introduce bugs. If you agree with what they were researching (and I don't) any sort of pre-emptive disclosure would basically contradict the point of their research.

I still think the best thing for them would be to simply create their own project and force their own students to commit, but they probably felt that doing that would be too contrived.


Pentesting has wide accepted standards and protocols.

You don't test a bank or Fortune 500 security system without buy-in of leadership ahead of time.


Those things aren’t open source and don’t take random submissions though.

In any case as I mentioned before I disagree with what they did.


Doing otherwise would likely amount to a crime in a lot of cases.


> Though I disagree with the research in general, if you did want to research "hypocrite commits" in an actual OSS setting, there isn't really any other way to do it other than actually introducing bugs per their proposal.

they could've done the much harder work of studying all of the incoming patches looking for bugs, and then just not reporting their findings until the kernel team accepts the patch.

the kernel has a steady stream of incoming patches, and surely a number of bugs in them to work with.

yeah it would've cost more, but would've also generated significant value for the kernel.


The point of the research isn't to study bugs, it's to study hypocrite commits. Given that a hypocrite commit requires intention, there's no other way except to submit commits yourself as the submitter would obviously know their own intention.


In what way does a hypocrite commit differ from a commit which unintentionally has the same effect?



So, for "research" you're screwing around with the development of one of the most widely used components in the computer world. Worse, introducing security holes that could reach production environments...

That's really stupid behavior...


Very embarrassed to see my alma mater in the news today. I was hoping these were just some grad students going rogue but it even looks like the IRB allowed this 'research' to happen.


It's very likely the IRB was misled. Don't feel too bad. I saw in one of the comments that the IRB was told that the researchers would be "sending emails," which seems to be an intentionally obtuse phrasing for submitting malformed kernel patches.


So I won't lie, this seems like an interesting experiment and I can understand why the professor/research students at UMN wanted to do it, but my god the collateral damage against the University is massive. Banning all contributions from a major University is no joke. I also completely understand the scorched earth response from Greg. Fascinating.


I would check their ties to nation-state actors.

In closed source, nobody would even check. Modern DevOps has essentially replaced manual code review with unit tests.


I don't understand why this isn't a more widely-held sentiment. There's been instance after instance of corporate espionage in Western companies involving Chinese actors in the past 2 decades.


Yeah, state-actor scale sabotage was one of my first thoughts. And it gives me no joy to contemplate it.

Secondly, the researcher’s attitude sounds high and mighty - making process improvement suggestions when their own ethical compass is in question. Their “experiment” was “what would happen if...”. Well, bans happen. If one starts a fight don’t get indignant over a bloody nose, lol


That gives me goose bumps.


As a user of the linux kernel, I feel legal action against the "researchers" should be pursued.


I agree, I think they should be looking at criminal charges. This is the equivalent of getting a job at Ford on the assembly line and then damaging vehicles to see if anyone notices. I've been in software security for 13 years and the "Is Open Source Really Secure" question is so over done. We KNOW there is risk associated with open source.


I feel somewhat similar. Since I am using Linux, they ultimately were trying to break the security of my computers. If I do that with any company without their consent, I can easily end up in jail.


It's more than that: if there are no consequences for this kind of action, we are going to get a wave of "security researcher" wannabes trying to pull similar bullshit.

PS: I have put security researcher in quotes because this kind of thing is not security research; it's a publicity stunt.


>they ultimately were trying to break the security of my computers.

No, they weren't. They made sure the bad code never made it in. They are only guilty of wasting people's time.


Except, from that email chain, it turns out that some of the bad code did make it into the stable branch. Clearly, they weren't keeping very close tabs on their bad code's progress through the system.


At minimum, the argument could be made that they were grossly negligent in how they conducted the experiment.


How dare they highlight the vulnerability that exists in the process! The blasphemy!

How about you think about what they just proved, about the actors that *actually* try to break the security of the kernel.


I believe as a user of the kernel the warranty exclusion in GPLv2 means you have no legal recourse:

> 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html

...which is generally a good thing, even if it also protects clearly malicious actions like this.


Your feelings do not invalidate the results unfortunately.


I used to sit on a research ethics board. This absolutely would not have passed such a review. Not a 'revise and resubmit' but a hard pass, accompanied by 'what the eff were you thinking?'. And, yes, this should have had a REB review: testing the vulnerabilities of a system that includes people is experimenting on human subjects. Doing so without their knowledge absolutely requires a strict human-subject review, and these "studies" would not pass the first sniff test. I don't think it's even legal in most jurisdictions.


This is my understanding as well, but then, how was such a paper accepted by IEEE?


Not sure. I expect that editors at such journals tend to assume that studies with an institutional sponsor will be held to professional standards by the sponsor, or take the authors' assertions at face value. I suspect that reviewers might have assumed that the study was done with the knowledge and permission of GNU project managers, even if not the line programmers (as in the case of ethical pen testing). That would make it less of an obvious ethical breach.


I did my Ph.D in cognitive neuroscience, where I conducted experiments on human subjects. Running these kinds of experiments required approval from an ethics committee, which for all their faults (and there are many), are quite good at catching this kind of shenanigans.

Is there not some sort of equivalent in this field?


It seems they lied to the ethics committee. But I'm not holding my breath for the University to sanction them or withdraw/EoC their papers, because Universities prefer to have these things swept under the carpet.


There's no evidence of that. It appears to be purely a rumor being spread around there with no facts to back it up.


>they lied to the ethics committee

That would be fraud, no?


Depends on the history of UMN CS submissions for ethics review, how they were advised to complete the exemption request, whether they made false statements or omitted something, whether they intended to deceive, etc.

The requirements arising from US Govt grant funding may well be more strict than UMN.


I guess someone had to do this unethical experiment, but OTOH, what is the value here? There's a high chance someone would later find these "intentional bugs"; that's how open source works anyway. They just proved that OSS is not military-grade, but nobody thought so anyway.


> They just proved that OSS is not military-grade, but nobody thought so anyway

...and yet FOSS and especially Linux is very widely used in military devices including weapons.

Because it's known to be less insecure than most alternatives.


I assume they don't use the bleeding edge though


Like in most industrial, military, transportation, and banking environments, people tend to prefer a very stable and thoroughly tested platform.

What HN would call "ancient".


> They just proved that OSS is not military-grade...

As if there is some other software that is "military-grade" by the same measure? What definition are you using for that term, anyway?


> but nobody thought so anyway

A lot of people claim that there are a lot of eyes on the code and thus introducing vulnerabilities is unlikely. This research clearly has bruised some egos badly.


Nothing is perfect, but is it better than not having any eyes? If anything, this shows that more eyes are needed.


The argument isn’t having no eyes is better than some eyes. Rather, it’s commonly argued that open source is better for security because there are more eyes on it.

What this research demonstrates is that you can quite easily slip back doors into an open contribution (which is often but not always associated with open source) project with supposedly the most eyes on it. That’s not true for any closed source project which is definitely not open contribution. (You can go for an open source supply chain attack, but that’s again a problem for open source.)


> it’s commonly argued that open source is better for security because there are more eyes on it.

> What this research demonstrates is that you can quite easily slip back doors into an open contribution

To make a fair comparison you should contrast it with companies or employees placing backdoors into their own closed-source software.

It's extremely easy to do and equally difficult to spot for end users.


Recruiting a rogue employee is orders of magnitude harder than receiving ostensibly benign patches in emails from Internet randos.

Rogue companies/employees is really a different security problem that’s not directly comparable to drive-by patches (the closest comparison is a rogue open source maintainer).


Maybe for employees, but usually it is a contractor of a contractor in some outsourced department replacing your employees. I'd argue that in such common situations, you are worse off than with randos on the internet sending patches, because no-one will ever review what those contractors commit.

Or you have a closed-source component you bought from someone who pinky-swears to be following secure coding practices and that their code is of course bug-free...


The reward for implanting a rogue employee is orders of magnitude higher, with the ability to plant backdoors or weaken security for decades.

And that's why nation-state attackers do it routinely.


Yes, it’s a different problem that’s way less likely to happen and potentially more impactful, hence not comparable. And entities with enough resources can do the same to open source, except with more risk; how much more is very hard to say.


Despite everything, even NSA is an avid user of Linux for their critical systems. That says a lot.


To make it a fair comparison you should contrast... an inside job with an outside job?


This is an arbitrary definition of inside vs outside. You are implying that employees are trusted and benign and other contributors are high-risk, ignoring that an "outside" contributor might be improving security with bug reports and patches.

For the end user, the threat model is about the presence of a malicious function in some binary.

Regardless if the developers are an informal community, a company, a group of companies, an NGO. They are all "outside" to the end user.

Closed source software (e.g. phone apps) breach user's trust constantly, e.g. with privacy breaching telemetries, weak security and so on.

If Microsoft weakens encryption under pressure from NSA is it "inside" or "outside"? What matters to end users is the end result.


The insiders are the maintainers. The outsiders are everyone else. If this is an arbitrary definition to you I... don't know what to tell you.

There's absolutely no reason everyone's threat model has to equate insiders with outsiders. If a stranger on the street gives you candy, you'll probably check it twice or toss it away out of caution. If a friend or family member does the same thing, you'll probably trust them and eat it. Obviously at the end of the day, your concern is the same: you not getting poisoned. That doesn't mean you can (or should...) treat your loved ones like they're strangers. It's outright insane for most people to live in that manner.

Same thing applies to other things in life, including computers. Most people have some root of trust, and that usually includes their vendors. There's no reason they have to trust you and (say) Microsoft employees/Apple employees/Linux maintainers equally. Most people, in fact, should not do so. (And this should not be a controversial position...)


The candy comparison is wrong on two levels.

1) Unless you exclusively run software written by close friends, both Linux and $ClosedOSCompany are equally "outsiders".

2) I regularly trust strangers to make the medicines I ingest and fly the airplanes I'm on. I would not trust any person I know to fly the plane, because they don't have the required training.

So, trust is not so simple, and that's why risk analysis takes time.

> There's no reason they have to trust you and (say) Microsoft employees/Apple employees/Linux maintainers equally

...and that's why plenty of critical systems around the world, including weapons, run on Linux and BSD, especially in countries that don't have the best relations with the US.


They were only banned after accusing Greg of slander when he called them out on their experiment and asked them to stop. They were banned for being dishonest and rude.


> A lot of people claim that there's a lot of eyes on the code.

Eric Raymond claimed so, and a lot of people repeated his claim, but I don't think this is the same thing as "a lot of people claim" -- and even if a lot of people claim something that is obviously stupid, it doesn't make the thing less obviously stupid, it just means it's less obvious to some people for some reasons.


Eric Raymond observed it, as a shift in software development to take advantage of the wisdom of crowds. I don't see that he speaks about security directly in the original essay[2]. He's discussing the previously held idea that stable software comes from highly skilled developers working on deep and complex debugging between releases, and proposing instead that if all developers have different skillsets, then with a large enough number of developers any bug will meet someone who thinks that bug is an easy fix. Raymond is observing that the Linux kernel development and contribution process was designed as if Linus Torvalds believed this, preferring ease of contribution and low-friction patch commits to tempt more developers.

Raymond doesn't seem to claim anything like "there are sufficient eyes to swat all bugs in the kernel", or "there are eyes on all parts of the code", or "'bugs' covers all possible security flaws", or so on. He particularly mentions uptime and crashing, so less charitably the statement is "there are no crashing or corruption bugs so deep that a large enough quantity of volunteers can't bodge some way past them". Which leaves plenty of room for less used subsystems to have nobody touching them if they don't cause problems, patches that fix stability at the expense of security, absence of careful design in some areas, the amount of eyes needed being substantially larger than the amount of eyes involved or available, maliciously submitted patches being different from traditional bugs, and more.

[1] https://en.wikipedia.org/wiki/Linus%27s_law

[2] http://www.unterstein.net/su/docs/CathBaz.pdf


> A lot of people claim that there's a lot of eyes on the code

And they are correct. Unfortunately sometimes the number of eyes is not enough.

The alternative is closed source, which has proven to be orders of magnitude worse on many occasions.


Aditya: I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

Greg: You can't quit, you're fired.


Interesting. If they disclosed this to the NSF under the human subjects research section, then to me this is a potential research ethics issue.

Imagine saying we would like to test how the fire department responds to fires, by setting buildings on fire in NYC.


Well, just a small fire, which you promise to extinguish yourself if they don't show up on time. Of course nobody can blame you if you didn't manage to extinguish it...

Also, the buildings are not random but safety-critical infrastructure. But this is good: you can advise later, 'put a "please do not ignite" sign on the building'.


Should've at least sought approval from the maintainer party, and perhaps tried to orchestrate it so that the patch approver didn't have information about it, but some part of the org did.

In a network security analogy, this is unsolicited hacking, versus the penetration test it claims to be.


This is no better. All it does is increase the size of the research team. You’re still doing research on non-consenting participants.


Regardless of whether consent (which was not given) was required, it's worth pointing out that the emails sent to the mailing list were also intentionally misleading, or fraudulent, so some kind of ethical standard has obviously been violated there.


Not wanting to play devil's advocate here, but though scummy, they still successfully introduced vulnerabilities into the kernel. Suppose the paper hadn't been released, or an adversary had done it: how long would those bugs have lingered before being removed, if ever? The paper makes a case that FOSS projects shouldn't merely trust authority for security (neither the submitters nor the reviewers) but should utilize tools to find potential vulnerabilities for every commit.


> utilize tools to find potential vulnerabilities for every commit.

The paper doesn't actually have concrete suggestions for tools, just hand-waving about "use static analysis tools, better than the ones you already use" and "use fuzzers, better than those that already exist."

The work was a stunt to draw attention to the problem of malicious committers. In that regard, it was perhaps successful. The authors' first recommendation is for the kernel community to increase accountability and liability for malicious committers, and GregKH is doing a fantastic job at that by holding umn.edu accountable.


Coverity found at least one:

    vvv     CID 1503716:  Null pointer dereferences  (REVERSE_INULL)
    vvv     Null-checking "rm" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.

And tools are useful, but given the resources and the know-how of those who compete in the IOCCC, I think we'd have to assume they'd be able to get something through. It'd have an even higher chance of success if it could be built to target a particular hardware combination (of a desired victim), as you could make the exploit dependent on multiple parts of the code (and likely nobody would ever determine the extent, as they'd find parts of it and fix them independently).


This is bullshit research. I mean, what they have actually found out through their experiments is that you can maliciously introduce bugs into the linux kernel. But, did anyone have doubts about this being possible prior to this "research"?

Obviously, bugs get introduced into all software projects all the time. And the bugs don't know whether they've been put there intentionally or accidentally. All bugs that ever appeared in the Linux kernel obviously made it through the review process, even when no-one actively tried to introduce them.

So, why should it not be possible to intentionally insert bugs if it already "works" unintentionally? What is the insight gained from this innovative "research"?


I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.

Responding properly to that statement would require someone to step out of the HN community guidelines.


This is a community that thinks it’s gross negligence if something with a real name on it fails to be airgapped.

Social shame and reputation damage may be useful defense mechanisms in general, but in a hacker culture where the right to make up arbitrarily many secret identities is a moral imperative, people who burn their identities can just get new ones. Banning or shaming is not going to work against someone with actual malicious intent.


It seems to be reacting and solving the wrong problem, and won't deter actual malicious attempts.


Wow, this "researcher" is a complete disaster. Who nurtures such a toxic attitude of entitlement and disregard for others' time and resources? Not to mention the possible real-world consequences of introducing bugs into this OS. He and his group need to be brought before an IRB.


Victim mentality is being cultivated on campuses all over the US. This will not be the last incident like this.


I would say the research was a success. They found that when a bad actor submits malicious patches they are appropriately banned from the project.


It does seem like ultimately they played themselves by getting permanently banned from participating.


So be it. Greg is a very trusted member, and has overwhelming support from the community for swinging the banhammer. We have a living kernel to maintain. Minnesota is free to fork the kernel, build their own, recreate the patch process, and send suggestions from there.


I'm pretty confident the NSA has been doing this for at least two decades; it's not that crazy a conspiracy theory.

Inserting backdoors in the form of bugs is not difficult. Just hijack the machine of a maintainer, insert a well placed semicolon, done!

Do you remember "Linus's Law", as Eric Raymond put it: "Given enough eyeballs, all bugs are shallow"? Do you really believe the Linux source code is being reviewed for bugs?

By the way, how do you write tests for a kernel?

I like open source, but security implies a lot of different problems and open source is not always better for security.
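For what it's worth, on the "how do you write tests for a kernel?" question: the kernel does carry in-tree testing in the form of KUnit unit tests and kselftests, alongside the CI and fuzzing farms mentioned elsewhere in this thread. A minimal KUnit case looks roughly like the sketch below; `my_add()` is a made-up stand-in for whatever code is under test.

    /* Rough KUnit sketch; my_add() is a hypothetical function under test. */
    #include <kunit/test.h>

    static int my_add(int a, int b)
    {
            return a + b;
    }

    static void my_add_test(struct kunit *test)
    {
            KUNIT_EXPECT_EQ(test, 3, my_add(1, 2));
    }

    static struct kunit_case my_test_cases[] = {
            KUNIT_CASE(my_add_test),
            {}
    };

    static struct kunit_suite my_test_suite = {
            .name = "my-add-test",
            .test_cases = my_test_cases,
    };
    kunit_test_suite(my_test_suite);

None of that helps against a reviewer-targeted social attack like this one, of course; it catches regressions, not intent.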


FYI: the IRB for the University of Minnesota https://research.umn.edu/units/irb has a Human Research Protection Program https://research.umn.edu/units/hrpp where I cannot find anything permitting research on people without their permission. There is a Participant's Bill of Rights https://research.umn.edu/units/hrpp/research-participants/pa... that would seem to indicate uninformed research is not allowed. I would be curious how doing research on people's reactions to a test stimulus in a non-controlled environment is not human research.


One reviewer's comments on a patch of theirs from 2 weeks ago:

"Plainly put, the patch demonstrates either complete lack of understanding or somebody not acting in good faith. If it's the latter[1], may I suggest the esteemed sociologists to fuck off and stop testing the reviewers with deliberately spewed excrements?"

https://lore.kernel.org/lkml/YH4Aa1zFAWkITsNK@zeniv-ca.linux...


Interesting - follow that thread and you find https://lore.kernel.org/linux-next/202104081640.1A09A99900@k... where coverity-bot says "this is bullshit":

    vvv     CID 1503716:  Null pointer dereferences  (REVERSE_INULL)
    vvv     Null-checking "rm" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.


It also says if this is a false positive to let the "experimental semi-automated" bot know...


The project is interesting, but how can they be so dumb as to post these patches under an @umn.edu address instead of using a new pseudonymous identity for each patch?!?

I mean, sneakily introducing vulnerabilities obviously only works if you don't start your messages by announcing you are one of the guys known to be trying to do so...


That's kind of the rub. They used a university email to exploit the trust afforded to them as academics and then violated that trust. As a result that trust was revoked. If they want to submit future patches they'll need to do it with random email addresses and will be subject to the scrutiny afforded random email addresses.


I doubt a university e-mail gives you significantly increased trust in the kernel community, since those are given to all students in all majors (most of whom are of course much less competent at kernel development than the average kernel developer).


There are two different kinds of trust: trust that you're a legitimate person with good intentions, and trust that you're competent.

A university or corporate e-mail address helps with the former: even if the individual doesn't put their real name into their email address, the institution still maintains that mapping. The possibility of professional, legal, or social consequences attaching to your real-world identity (as is likely to happen here) is a generally-effective deterrent.


University students could be naive and could be rapped by the community if they unintentionally commit harmful patches, but if they send intentionally harmful patches, maintainers can report them to the university and they risk getting expelled. In this particular case the research was approved and encouraged by the university, and in the process they broke the trust placed in the university.


Why should an academic institution be afforded any extra trust in the first place?


One guess would be that an edu address would be tied to your real identity, whereas a throwaway email could be pseudonymous.


Because there are quite a few academics working on the kernel in the first place (not in a similar order of magnitude compared to industry, of course). Even GKH gets invited by academics to work together regularly.


I am wondering: if Aditya hadn't responded the way he did (using corporate-lawyer language), would Greg have reached this conclusion? I am a bit surprised by the entitlement he was showing. Why would anyone use those words after sending a nonsense patch? What kind of defence did he think he had among a group of seasoned developers, other than being honest about his intentions? I wouldn't be surprised if his professor doesn't even know what he was doing!


This seems like wanton endangerment. Kernels get baked into medical devices and never, ever updated.

I would be livid if I found that code from these "researchers" was running in a medical device that a family member relied upon.


I suspect the university will take some sort of action now that this has turned into incredibly bad press (although they really should have done something earlier).


WTF? They are experimenting with people without their consent? And they haven't been kicked out of the academic community????


Yikes, and what are they hoping to accomplish with this "research"?


What any researcher needs to accomplish: more publications


What journal is going to accept a study like this if they haven't obtained proper consent?


IEEE, see the publications list at https://www-users.cs.umn.edu/~kjlu/

>>>On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits Qiushi Wu, and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.


May 2021 -- I guess some IEEE member can complain loudly to take it down.


My guess is: a journal that does not focus on studies of human behavior and whose editors are a) not aware of the ethical problems or b) happy to ignore ethics concerns if the publication is prone to receive much attention (which it is).


The IEEE, apparently. It is a clear breach of ethics, but apparently they don't care.


Sadly, that only consolidates my view of that organization.


That might be an interesting topic for research LoL


That’s about as useful as to answer the question “what is this company doing?” with “trying to make money”.


But that question is as deep and important to answer as yours :D What can anyone hope to accomplish by doing fake research? Progress, wealth, peer approval, mating, pleasure?

So answering that they hope to get more material for papers, which is the only goal of researchers (and their main KPI), is a deeper answer than the question required.


I wouldn't call this fake research. Maybe unethical, but they did do research, and they did obtain data, and they did (attempt to?) publish it.


It's a near-perfect example of the dangers of 'publish or perish'.


Why don't you read the article to find out? https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...


They apparently made a tool to find vulnerabilities that could later lead to bugs if a different patch was introduced.

And for some insane reason, they decided to test if these kinds of bugs would be caught by inventing some and just submitting the patches, without informing anyone beforehand.

https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....


Perhaps they wish to improve kernel security by pushing reviewers to be more careful.

Or to prove its overall insecurity.


I have a question for this community:

Insofar as this specific method of injecting flaws matches a foreign country's work done on U.S. soil - as many people in this thread have speculated - do people here think that U.S. three letter agencies (in particular NSA/CIA) should have the ability to look at whether the researchers are foreign agents/spies, even though the researchers are operating from the United States? For example, should the three letter agencies have the ability to review these researchers' private correspondence and social graphs?

Insofar as those agencies should have this ability, then, when should they use it? If they do use it, and find that someone is a foreign agent, in what way and with whom should they share their conclusions?


Now one of the problems with research in general is that negative results don't get published. While in this case it probably resolved itself automatically, if they have any ethical standards then they'll write a paper about how it ended. Something like "our assumption was that it's relatively easy to deliberately sneak bugs into the Linux kernel, but it turns out we were wrong. We managed to get our whole university banned and all former patches from all contributors from our university, including those outside of our research team, reverted."

Also, while their assumption is interesting, there surely had to be an ethical and safe way to conduct this, especially without allowing their bugs to slip into a release.


From an outsider, the main question is: does this expose an actual weakness in the Linux development model?

From what I understand, this answer seems to be a "yes".

Of course, it is understandable that GKH is frustrated, and if his community does not like someone pointing out this issue, that is OK too.

However, one researcher does not represent the whole university, so it seems immature to vent this to other unrelated people just because you can.


The main issue is that the researchers are now untrustworthy because they conducted this experiment without permission. Essentially, the kernel dev team can no longer trust that any given patch from U of M isn't the same research team using a different email address to submit more malicious patches.


You actually think there should be a way to "trust" someone by looking at his/her email address domain?


No? I think that there is reason to not trust anything from a given domain if that domain is in use by bad actors.


The university has an ethics board to review experiments. So what experiments get allowed reflects on the whole university


If you are actually in a graduate school, you will know it is practically impossible to review details like this; otherwise nobody could do any real work.

Besides, how to test the idea without doing what they did? Can you show us a way?


No, because there is already historic evidence that it's a weakness.


It's been a long time since I saw this usage of the word "plonk". Brought back some memories.

https://en.wikipedia.org/wiki/Plonk_(Usenet)


I feel like a lot of people here did not interpret this correctly.

As far as is known, garbage code was not introduced into the kernel. It was caught in the review process literally on the same day.

However, code from the same people has been merged previously, and it is not necessarily vulnerable. As a precaution the older commits are also being reverted, as these people have been identified as bad actors.


Note that the commits which have been merged previously have also been intentionally garbage and misleading code, just without any obvious way to exploit them. For example, https://lore.kernel.org/lkml/20210407000913.2207831-1-pakki0... has been accepted since April 7, and it's obviously a commit meant to _look_ like a bug fix while having no actual effect. (The line `rm = NULL;` and the line `if (was_on_sock && rm)` operate on different variables called `rm`.)

That means that the researchers got bogus code into the kernel, got it accepted, and then said nothing for two weeks as the bogus commit spread through the Linux development process and ended up in the stable tree, and, potentially, in forks.
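For readers who want to see the shape of that bug without digging into the actual commit, here is a minimal, self-contained C sketch of the same shadowing pattern (illustrative only, not the real kernel code): the `rm = NULL;` inside the loop touches a different variable than the one the later check reads, so the "fix" changes nothing.

    /* Illustrative sketch of the shadowed-variable "fix", not the real kernel code. */
    #include <stdio.h>
    #include <stdlib.h>

    struct msg { int id; };

    static void drain(struct msg *rm, int was_on_sock)
    {
            for (int i = 0; i < 3; i++) {
                    struct msg *rm = malloc(sizeof(*rm)); /* inner rm shadows the outer one */
                    /* ... use and tear down the inner rm ... */
                    free(rm);
                    rm = NULL; /* the added line: clears only the inner, already-dead rm */
            }

            if (was_on_sock && rm) /* still reads the outer rm, untouched by the loop */
                    printf("notify owner of msg %d\n", rm->id);
    }

    int main(void)
    {
            struct msg m = { .id = 42 };
            drain(&m, 1);
            return 0;
    }

Compilers typically only warn about this kind of shadowing with -Wshadow, which is not a default warning, so it is exactly the sort of no-op change that can look plausible in review.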


This is categorically unethical behaviour. Attempting to get malicious code into an open source project that powers a large share of the world's infrastructure — or even a small project — should be punished in my view. The actors are known, and it's been stated by the actors themselves that this was intentional.

I think the Linux Foundation should make an example of this.


"Yesterday, I took a look on 4 accepted patches from Aditya and 3 of them added various severity security "holes"."

Sorry for being the paranoid one here, but reading this raises a lot of warning flags.


Regardless of their methods, I think they just proved the kernel security review process is non-existent, whether in the form of static analysis or human review. What's being done to address those issues?


> non-existent... static analysis... What's being done to address those issues?

Static analysis is being done[1][2]; in addition, there are also CI test farms[3][4], fuzzing farms[5], etc. (see the sketch after the links for the kind of pattern these tools flag). Linux is a project that enough large companies have a stake in that there are some willing to throw resources like this at it.

Human review is supposed to be done through the mailing list submission process. How well that works varies, in my experience, from mailing list to mailing list.

[1] https://www.kernel.org/doc/html/v4.15/dev-tools/coccinelle.h...

[2] https://scan.coverity.com/projects/linux

[3] https://cki-project.org/

[4] https://bottest.wiki.kernel.org/

[5] https://syzkaller.appspot.com/upstream
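As a rough illustration of the kind of pattern these tools flag (a sketch, not actual kernel code): the Coverity REVERSE_INULL warning quoted elsewhere in this thread fires when a pointer is dereferenced on every path before it is null-checked, which means either the check is dead code or the dereference can already oops.

    /* Sketch of the REVERSE_INULL pattern: the check comes after the dereference. */
    #include <stddef.h>

    struct req { int len; };

    int handle(struct req *r)
    {
            int len = r->len;  /* r is dereferenced unconditionally here... */

            if (r == NULL)     /* ...so this check is either useless or too late */
                    return -1;

            return len;
    }

The kernel also carries Coccinelle semantic patches aimed at null-dereference patterns like this (per the documentation linked at [1]), but as this thread shows, tooling only narrows the problem; it doesn't remove the need for skeptical human review.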


Not sure why you think they proved that. Human review was done on the same day the patch was submitted and pointed out that it's wrong: https://lore.kernel.org/linux-nfs/20210407153458.GA28924@fie...


Human review was done after the patch was merged into stable, hence reverting was necessary. I’m confused why these patches don’t get treated as merge requests and get reviewed prior to merging!


This patch wasn't. Other patches from the university had made it into stable and are likely to be reverted, not because of known problems with the patches, but because of the ban.


>Whats being done to address those issues?

Moving to Rust to limit the scope of possible bugs.


This is a dangerous understanding of Rust. Rust helps to avoid certain kinds of bugs in certain situations. Bugs are very much possible in Rust, and the scope of bugs usually depends more on the system than on the language used to write it.


I get where you're coming from, but I disagree. They actually prey on seemingly small changes that have large "unintended"/non-obvious side-effects. I argue that finding such situations is much much harder in Rust than in C. Is it impossible? Probably not (especially not in unsafe code), but I do believe it limits the attack surface quite a lot. Rust is not a definitive solution, but it can be a (big) part of the solution.


Yes, it definitely limits the attack surface. Remember that in systems programming there are bugs that cause errors in computation, which Rust is pretty good at protecting against; but there are also bugs which cause unintended behaviors, usually arising from incorrect or incomplete requirements, or implementation edge cases.
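To make the split concrete: the memory-safety class the paper leaned on (use-after-free) is the kind of thing Rust's ownership rules reject at compile time, while requirements and logic bugs survive in any language. A minimal C sketch of the former, with made-up names, might look like this:

    /* Minimal use-after-free sketch (illustrative; not one of the submitted patches). */
    #include <stdlib.h>
    #include <string.h>

    struct session { char name[16]; };

    void close_session(struct session *s, int log_it)
    {
            free(s); /* the object is released here */

            if (log_it)
                    /* use-after-free: s was freed above; easy to miss in review
                     * when the free and the later use are separated by more code */
                    memset(s->name, 0, sizeof(s->name));
    }

At runtime, sanitizers like KASAN tend to catch this class once a fuzzer exercises the path; the equivalent Rust code would not get past the borrow checker, but a logically wrong yet memory-safe change still would.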


The bugs were found. Seems like it works to me.


I fail to see how this does not amount to vandalism of public property. https://www.shouselaw.com/ca/defense/penal-code/594/


UMN has some egg on their face, surely, but I think the IEEE should be equally embarrassed that they accepted this paper.


Seems like completely pointless "research." Clearly it wasted the maintainers' time, but also the "researchers" investigating something that is so obviously possible. Weren't there any real projects to work on?


* plonk * Was a very nice touch.


It's often expanded as a backronym - "Person Leaving Our Newsgroup: Kill-filed" - though it originally just imitated the sound of a user dropping into the killfile.


> I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

Maybe not being nice is part of the immune system of open source.


On the other thread, I suggested this was an attack on critical infrastructure using a university as cover and that this was a criminal/counter-intelligence matter, and then asked whether any of these bug submitters also suggested the project culture was too aggressive and created an unsafe environment, to reduce scrutiny on their backdoors.

Talk about predictive power in a hypothesis.


Given its ubiquity in so many industries, tampering with Linux kernel security sounds an awful lot like criminal sabotage under US law.

Getting banned from contributing is a light penalty.


> criminal sabotage under US law

It is pretty comfortably not sabotage under 18 USC 105, which requires proving intent to harm the national defense of the United States. Absent finding an email from one of the researchers saying "this use-after-free is gonna fuck up the tankz," intent would otherwise be nearly impossible to prove.


> It is pretty comfortably not sabotage under 18 USC 105, which requires proving intent to harm the national defense of the United States.

Presumably, this reference is intended to be to 18 USC ch. 105 (18 USC §§ 2151-2156). However, the characterization of required intent is inaccurate; the most relevant provision (18 USC § 2154) doesn’t require intent if the defendant has “reason to believe that his act may injure, interfere with, or obstruct the United States or any associate nation in preparing for or carrying on the war or defense activities” (emphasis added) during either a war or declared national emergency.

It wouldn’t take much more than showing evidence that the defendant was aware (or even was in a position likely to be exposed to information that would make him aware) that Linux is used somewhere in the defense and national security establishment to support the mental state aspect of the offense.

https://www.law.cornell.edu/uscode/text/18/2154


Intent would be hard to prove without emails / chat conversations for sure. As for damages, Linux is used by DoD, NASA and a myriad of other agencies. All the 2 and 3 letter agencies use it. Some of them contribute to it.


If congress wasn't full of old people who don't understand computers, that university professor could spend years in jail or be executed for treason.


Either that or the CFAA


"I suggested this was an attack on critical infrastructure using a university as cover and that this was a criminal/counter-intellgence matter"

There is absolutely zero evidence of this. None. In my opinion it's baseless speculation.

It's far more likely that they are upset over being called out, and are out of touch with regard to what constitutes ethical testing.


Sure, don't attribute to malice what can be attributed to ignorance. But you have to admit that backdooring Linux would be huge and worth billions.


Yes, Hanlon’s razor is apt but if you read TFA, you can see heavy amounts of both malice and ignorance.

From TFA: “The UMN had worked on a research paper dubbed "On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits". Obviously, the "Open-Source Software" (OSS) here is indicating the Linux kernel and the University had stealthily introduced Use-After-Free (UAF) vulnerability to test the susceptibility of Linux.”


GP had a hypothesis and made a prediction based on it. The prediction turned out to be right. What more do you want?


I want proof that the motive was in any way, shape, or form, related to or sponsored by a foreign government under the cover of university research. Not speculation based -solely- on the nationality or ethnicity of the accused.


With the utmost possible respect, 'criminal or counterintelligence' in no way implies the involvement of a foreign government, and trying to allege racism on such flimsy grounds is a rhetorical tactic well past its sell-by date.


What is their ethnicity? I just assumed they were all American citizens. My previous comment noted how U.S.-based attackers allegedly did something similar to OpenBSD's VPN libraries over a decade ago.

Suggesting a foreign government could be leaping to conclusions as well, given that domestic activists with an agenda may do the same thing. A Linux kernel backdoor is valuable to a lot of different interests, hence why counter-intelligence should be involved.

However, I just looked at the names of the people involved and I don't know. Even if they were Taiwanese, that's an allied country, so I wouldn't expect it. Who were you thinking of?


"We're banning you for deliberately submitting buggy patches as an experiment."

"Well if you're gonna be a jerk about it, I won't be sending any more patches."


There is nothing about enforcing high standards that requires hostility or meanness. In this case the complaint that Greg is being intimidating is being made entirely in bad faith. I don't think anyone else has a problem with Greg's reply. So this doesn't really come across as an example that demonstrates your "not being nice is necessary" view.


I think so. With a large project, I think a realist attitude that rises to the level of mean when there's bullshit around is somewhat necessary to prevent decay.

If not, you get cluttered up with bad code and people there for the experience. Like how Stack Overflow is lost to rule zealots there for the game, not for the purpose.

Something big and important should be intimidating and isn’t a public service babysitter...


It feels like a corollary of memetic assholery in online communities. Essentially the R0 [0] of being a dick.

If I have a community, bombarded by a random number of transient bad actors at random times, then if R0 > some threshold, my community inevitably trends to a cesspool, as each bad actor creates more negative members.

If I take steps to decrease R0, one of which may indeed be "blunt- and harshness to new contributors", then my community may survive in the face of equivalent pressures.

It's a valid point, and seems to have historical support via evidence of many egalitarian / welcoming communities collapsing due to the accumulation of bad faith participants.

The key distinction is probably "Are you being blunt / harsh in the service of the primary goal, or ancillary to the mission?"

[0] https://en.m.wikipedia.org/wiki/Basic_reproduction_number


> It's a valid point, and seems to have historical support via evidence of many egalitarian / welcoming communities collapsing due to the accumulation of bad faith participants.

Could you provide references to some of this historical support?


Kind of a silly example, but several subreddits that started out with the aim of making fun of some subject (e.g. r/prequelmemes and the Star Wars prequels, or r/the_donald and then-presidential candidate Donald Trump) were quickly turned into communities earnestly supporting the initial subject of parody.


I think Reddit is the broadest example, because it's evidence of both outcomes due to the diversity in moderation policy between subs.

Some can tolerate a steady influx of bad actors: some fall apart. There's probably a valid paper in there somewhere.


I don't think this is silly at all. And the fact that reddit's admins occasionally have to step in with a forceful hand over what the mods do only speaks louder to GP's point.


I'm not sure why you think you have to be mean to avoid bad code. Being nice doesn't mean accepting any and all contributions. It just means not being a jerk or _overly_ harsh when rejecting.


You can create a strict, high functioning organization without being an asshole. Maintaining high standards and expecting excellence isn't an exercise in babysitting; it's an exercise in aligning contributors to those same standards and expectations.

You don't need to do that by telling them they're garbage. You can do it by getting them invested in growth and improvement.


That depends on who you ask. If I take a "no nonsense" approach, then some people have no problem with it, but other people, including especially women, say that it is not "nice" and that there is some problem even if it is not "mean".

Also, here we are seeing people who have no interest in "growth and improvement"; they are not even making good-faith contributions to the project.


> Like how stackoverflow is lost to rule zealots there for the game not for the purpose.

Like?


Honest questions getting downvoted, closed for being too broad or as duplicates, or just "wrong" in the eyes of overzealous long-time members.


There's nothing wrong with duplicates

If they weren't doing it, then the quality of SO would decrease for all of us.

It's in our interest to have strict mods on SO


You haven't seen a question closed as a duplicate when it was clear that time or details had made the linked question not an actual duplicate?


I think the idea is that it's better to err on the side of too-strict moderation than too-lax. People can always come back to re-try a question at another time, but, once the spirit of the community is lost, there's not much you can do about it.

(Not to say I like the StackExchange community much. It's far too top-down directed for me. But I'm very much sympathetic to the spirit of strict moderation.)


I agree with you. It's silly to act like the phenomenon doesn't exist, though.


I’ve never seen it develop into a serious problem, just as I’ve never seen rule-driven Wikipedia have problems with rule obsession.

There are all sorts of community websites around the world. Which have developed into a serious SO contender? IMO many things are threatening SO’s relevance, but they don’t look anything like it, which suggests that what SO is doing wrong isn’t the small details.

For example, I’d argue that Discord has become the next place for beginners to get answers, but chat rooms are very different from SO. For one thing, the help is better because someone else can spend their brain power to massage your problem. And another is that knowledge dies almost instantly.


EDIT: I removed quoted portions and snippy replies.

Forgive me if I wasn't being clear. It seems like your core point is that SO's rules are on the whole good for keeping it focused, and it seems like you are assuming I'm a beginner programmer who is frustrated with SO for not being more beginner friendly and thus advising me on what I should do instead. I feel like you are shadow boxing a little.

I think we probably mostly agree; I think SO gets an unfortunate reputation as a good place for beginners (as opposed to a sort of curated wiki of asked and answered questions on a topic, a data store of wisdom), and that in general beginners are probably best served by smaller 1-1 intervention. I usually suggest people seek out a mentor, it had never occurred to me that Discord could be a good way to go about this.

The original point I was trying to make is simply that you can see overzealous rule following on SO and that a form of that is in inappropriately closed as duplicate questions.


Kind of like how the zero-tolerance of the HN community for joke-y / quick-take comments kills the fun sometimes—but also means that people (like me who came here from Reddit and discovered what wasn't welcome right quick) learn the culture, and get to remain part of the culture we signed up for rather than something that morphs over time to the lowest common denominator.


I was enjoying Linus being less aggressive, but maybe we do need angry Linus.


Angry Greg is doing a great job. Effective, and completely without expletives or personal insults.


He is doing a great job. But I think a few insults earlier on might have prevented a whole lot of trouble.


Angry Linus would risk a stroke responding in that email thread.


Because he's old. I think young Linus wouldn't have held back in making judgement about the quality and usefulness of the research being done here.


No, because people introducing bad code through lack of skill/not enough effort were enough to get him going.

People introducing bad code on purpose, for a social experiment, are on a whole new level of bad and so would his anger be.


I enjoy Linus's wit in insulting people. He's good.


I enjoyed (and now miss) angry Linus.


> Maybe not being nice is part of the immune system of open source.

Someone for whom being a bad actor is a day job will not get deterred by being told to fuck off.

Being nasty might deter some low key negative contributors - maybe someone who overestimates their ability or someone "too smart to follow the rules". But it might also deter someone who could become a good contributor.


Being rude isn't going to discourage malicious actors, who are motivated by fame or wealth.

If you ran a bank and had a bunch of rude bank tellers, you are only going to dissuade customers, not bank robbers.


Being nice is expensive, and sending bad code imposes costs on maintainers, so the sharp brevity of maintainers is efficient; in cases where the submitter has wasted the maintainer's time, the maintainer should impose a consequence by barking at them.

Sustaining the belief that every submitter is an earnest, good, and altruistic person is painfully expensive and a waste of very valuable minds. Unhinged is unhinged and that needs to be managed, but keeping up the farce that there is some imaginary universe where the submitter is not wasting your time and working the process is wrong.

I see this in architecture all the time, where people feign ignorance and appeal to this idea you are obligated to keep up the pretense that they aren't being sneaky. Competent people hold each other accountable. If you can afford civility, absolutely use it, but when people attempt to tax your civility, impose a cost. It's the difference between being civil and harmless.


A better analogy: Attempting to pee in the community pool to research if the maintainers are doing a good job of managing the hygiene standards.


Enforcing formal behavior makes the deviant behavior more noticeable.


Proper analogy would be 'rude SWAT team', not 'rude bank tellers'.


Honestly, WTF would a "newbie and non-expert" have to do with sending KERNEL PATCHES?


Personally I don't think you can become an expert in Linux kernel programming without sending patches. So over the long term, if you don't let non-experts submit patches then no new experts will ever be created, the existing ones will die or move on, and there won't be any experts at all. At that point the project will die.


But Greg was correct that the patches sent were in many cases easily seen to be bad by anyone who knows C. Everyone must at some point be new to C, and at some point new to the kernel, but those times should not be the same.


Nobody is an expert on every subject. You could have PhD level knowledge of the theory behind a specific filesystem or allocator but know next to nothing about the underlying hardware.


My point is that "we're newbies on the topic of the Linux Kernel, so be friendly to us when sending Linux Kernel patches" is the worst argument I've heard about anything in years.


I'd say that's a very valid argument in principle. If you want to start contributing to the Linux kernel, you'll have to start somewhere - but you can't start refactoring entire subsystems, rather you'll start with small patches and it's very natural to make minor procedural and technical mistakes at that stage. [1]

However, in this particular case, I agree that it is not a valid argument since it is doubtful whether the beginning kernel contributor's patches are in good faith.

[1] Torvalds encouraging new contributors to send in trivial patches back in 2004: https://lkml.org/lkml/2004/12/20/255


You have plenty of places to start. Fork and patch away. You don't start by patching the distribution the entire world uses.

It's like "be kind I'm new to the concept of airplanes, let me fly this airplane with hundreds of passengers"


How is it possible that you have a PhD in filesystems but you don't know how to write an acceptable patch for the Linux kernel? That's what I call a scam PhD.


So they can tell companies "I am a contributor to the Linux kernel"... there are charlatans in every field. Assuming this wasn't malicious and "I'm a newbie" isn't just a cover.


I had a professor in college who gave out references to students and made his assignments so easy a middle school kid could do the homework. He never said why but I'm 100% positive he was gaming the reviews sent back so that he'd stay hired on as an instructor while he built up his lesson plans. I think he figured out how to game it pretty quick seeing as the position had like 3 instructors before him whom students universally hated for either being ridiculously strict or lying through their teeth about even knowing what the difference between public and private meant.


Attacking those critical of your questionable behavior and then refusing to participate further is a common response people have when caught red handed.

This is just a form of "well I'll just take my business elsewhere!". Chances are he'll try again under a pseudonym.


Every time I have seen Theo from the OpenBSD project come down hard on someone, it was deserved.


But G. K-H's correspondence here is completely cordial and professional, and still gets all the results that were needed?


I disagree. I think it's important to be nice and welcoming to contributors, but the immune system should be a robust code of conduct which explicitly lists things like this that will result in a temporary or permanent ban.


I'm curious what sort of lawsuits might be possible here. I for one would donate $1000 to a non-profit trust formed to find plaintiffs for whatever possible cause and then sue the everloving shit out of the author + advisor + university as many times as possible.

EDIT: University is fair game too.


Absolutely. The derision that people like Linus get for being “mean” to big corpos trying to submit shitty patches is totally misplaced.


Not being nice is always to protect self. Not always effective though, and not always necessary.


Instead of not being nice, maybe Linux should adopt some sort of CI and testing infrastructure.


https://kernelci.org is a Linux Foundation project; there are others, but that's just the main one I know of offhand.

The idea that "not being nice" is necessary is plainly ridiculous, but this post is pretty wild--effectively you're implying that they're just amateurs or something and that this is a novel idea nobody's considered, while billions and billions of dollars of business run atop Linux-powered systems.

What they don't do is hand over CI resources to randos submitting patches. That's why kernel developers receive and process those patches.


Linux has plenty of testing machines, but testing the whole kernel is not as simple as you seem to think. There is no way to catch all possible cases, so not being nice remains important. And the greater part of the kernel is drivers; a driver needs its device in order to work, so CI for that is hard.


In a follow-up [1], the author suggests: OSS projects would be suggested to update the code of conduct, something like “By submitting the patch, I agree to not intend to introduce bugs”

How can one be so short-sighted?...

[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....


Linux maintainers should log a complaint with the University's ethics board. You can't just experiment on people without consent.


One of the other emails in the chain says they already did.

> This is not ok, it is wasting our time, and we will have to report this, AGAIN, to your university...


I have a theory that while the university's ethics board may have people on it who are familiar with the myriad of issues surrounding, for instance, biomedical research, they have nobody on it with even the most cursory knowledge of open source software development. And nobody who has even the faintest idea of how critically important the Linux kernel is to global infrastructure.


They should also have people on it who are familiar with psychology research. The issues with this research are the types of things psychology research review should find.


I agree. They are attempting to put security vulnerabilities into a security-critical piece of software that is used by billions of people. This is clearly unethical and unacceptable.


According to duncaen, the researchers had gotten the green light from the ethics board before conducting the experiment.

https://news.ycombinator.com/item?id=26888978


IRB makes a decision based on the study protocol/design, so if you intentionally mislead / make wrong statements there, IRB approval doesn't really mean anything.


It means they either lied to the IRB or the IRB is absolutely incompetent. Possibly actionably so. I've sat on an IRB. This experiment would have been punted on initial review. It wouldn't even have made the agenda for discussion and vote.


Because they lied to them. They promised not to do any actual harm. But they did


"Is it ethical to A/B test humans on web pages?"


Not if your intention is to cause harm....


I always find the dichotomy we have regarding human subject experimentation interesting in the US. We essentially have two ecosystems of human subjects as to what is allowed and isn't: public and privately funded. The contrast is a bit stark.

We have publicly funded rules (typically derived from, or pressured by, the availability of federal or state monies/resources) which are quite strict, have ethics and IRB boards, and cover even behavioral studies like this one, where no direct physical harm is induced but people's behaviors are still manipulated. This is the type of experiment you're referring to where you can't experiment on people without their consent (and by the way, I agree with this opinion).

Meanwhile, we have privately funded research which has a far looser set of constraints and falls under everyday regulations. You can't really physically harm someone or knowingly withhold treatment from them (the Tuskegee experiments), which makes sense, but when we start talking about human subjects in terms of data, privacy of data, or behavioral manipulation, most regulation goes out the window.

These people likely could be reprimanded, even fired, and scarlet-lettered, making their career going forward more difficult (maybe not so much in this specific case because it's really not that harmful), but enough to screw them over financially and potentially in terms of career growth.

Meanwhile, some massive business could do this with their own funding and not bat an eye. Facebook could do this (I don't know why they would) but they could. Facebook is a prime example of largely unregulated human subject experimentation though. Social networks are a hotbed for data, interactions, and setting up experimentation. It's not just Facebook though (they're an obvious easy target), it's slews of businesses collecting data and manipulating it around consumers: marketing/advertising, product design/UX focusing on 'engagement', and all sorts of stuff. Every industry does this and that sort of human subject experimentation is accepted because $money$. Meanwhile, researchers from public funding sources are crucified for similar behaviors.

I'm not defending this sort of human subject experimentation, it's ethically questionable, wrong, and should involve punishment. I am however continually disgusted by the double standard we have. If we as a society really think this sort of experimentation on human subjects or human subject data is so awful, why do we allow it to occur under private capital and leave it largely unregulated?



I'm not sure it is experimenting on people without consent, though it's certainly shitty and opportunistic of UoM to do this.

Linux bug fixes are open to the public. The experiment isn't on people but on bugs. It would be like filing different customer support complaints to change the behavior of a company -- you're not experimenting on people but on the process of how that company interfaces with the public.

I see no wrong here including the Linux maintainers banning submissions from UoM which is completely justified as time wasting.


I assure you that customer support reps and Linux maintainers are in fact people.


I'm not sure which form of ethical violation this is, but it's malicious and should be reported.


UoM generally refers to University of Michigan. You probably meant UMN.


It's experimenting with how specific people manage bugs.


CS researchers at the University of Chicago did a similar experiment on me and other maintainers a couple years ago: https://github.com/lobsters/lobsters/issues/517

And similarly to U Minn, their IRB covered for them: https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...

My experience felt really shitty, and I'm sorry to see I'm not alone. If anyone is organizing a broad response to redress previous abuses or prevent future abuse, I'd appreciate hearing about it, my email's on my profile.


This is supremely fucked up and I’d say is borderline criminal. It’s really lucky asshole researchers like this haven’t caused a bug that cost billions of dollars, or killed someone, because eventually shit like this will... and holy shit will “it was just research” do nothing to save them.


How come there's no ethical review for research that interacts with people? (I mean it's there in medicine and psychology, and probably for many economics experiments too.)

edit: oh, it seems they got an exemption, because it's software research - https://news.ycombinator.com/item?id=26890084 :|


I can’t imagine it will stay that way forever. As more and more critical tools and infrastructure go digital, allowing people to just whack away at them or introduce malicious/bad code in the name of research is just going to be way too big of a liability.


Too bad this stuff does not go on your "permanent record".


This is actually just the elitist version of "it's just a prank, bro!"

And you're right, bugs in the linux kernel could have serious consequences.


Any organization that would deploy software that could kill someone without carefully and personally reviewing it for fitness of purpose (especially when the candidate software states that it waives all liability and any guarantee that it is fit for purpose, as stated in sections 11 and 12 of the GPLv2 [1]) is criminally irresponsible. Though it is scummy to deliberately introduce defects into an OSS project, any defects that result in a failure to perform are both ethically and legally completely on whoever is using Linux in a capacity that can cost billions of dollars or kill someone.

[1] https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html


I agree, I think a more broad ban might be in order. I don't know that I'd want anyone from this "group" contributing to anything.


So aren’t there tests and code reviews before pushing them to the Stable code base?


Yes, there are. Will they find everything? No. Would I be pissed if this caused silent corruption of my filesystem, or some such crap that's hard to test, because this uni tried to push memory-misuse vulnerabilities into some obscure driver in the kernel that is not normally tested much, but which I use on my SBC farm? Yes.

Maybe they had some plan for immediate revert when the bogus patch got into stable, but some people update stable quickly, for a good reason, and it's just not good to do this research this way.


[flagged]


very insidious foreign actors that publish papers about their op


I agree that it's bad behavior, but if you have billions of dollars resting on open-source infrastructure, you better know the liabilities involved.


It’s just a shame there is no mechanism in the license to withdraw permission for this so-called university to use Linux at all


It is by design; not having such a mechanism is one of the goals of free software: free for everyone, no exceptions.

See the JSON.org license, which says the software "shall be used for Good, not Evil" and is not considered free software.


"Free" being the confusing word here, because it has two meanings, and often are used without context in open source software.

Typically, OSS is both definitions at the same time - free monetarily, and "free" as in "freedom" to use. JSON is an interesting case of "free" monetarily but not totally "free for use".


That is expressly the opposite goal of open source. If you arbitrarily say foo user cannot use your software, then it is NOT open source. That's more like source-available.

Nobody would continue to use linux if they randomly banned people from using it, regardless of the reason.

[side note] This is why I despise the term "open source". It obscures the important part of user freedom. The term "Free/libre software" is not perfect, but it doesn't obscure this.


A shame today, a godsend another day.


There is so much disdain for unethical, ivory tower thinking in universities, this is not helping.

But, allow me to pull on a different thread. How liable are the professor, the IRB, and the university if there is any calamity caused by the known code?

What is the high level difference between their action, and spreading malware intentionally?


Out of curiosity, what would be an actually good way to poke at the pipeline like this? Just ask if they'd OK a patch w/o actually submitting it? A survey?


Probably ask the maintainers to consent and add some blinding so that the patches look otherwise legitimate.


Ask about this upfront, get consent, wait rand()*365 days and do the same thing they did. Inform people immediately after it got accepted.


Ask Linus to approve it.


No... Linus can approve it for himself. Linus cannot approve such a thing on behalf of other maintainers.


That's fair, but asking for and getting Linus' approval would have at least put them in a much stronger position. They didn't even do that. (And I doubt Linus would have even given his approval, in which case they wouldn't be in this mess.)


Agree. Since these researchers did not even ask him, they did not fulfill even the most basic requirement. If, and only if, he approves, then we can talk about who else needs to be in the know, etc.


This is a good question. You would recruit actual maintainers, [edit: or whoever is your intended subject pool] (who would provide consent, perhaps be compensated for their time). You could then give them a series of patches to approve (some being bug free and others having vulnerabilities).

[edit: specifying the population of a study is pretty important. Getting random students from the University to approve your security patch doesn't make sense. Picking students who successfully completed a computer security course and got a high grade is better than that but again, may not generalize to the real world. One of the most impressive ways I have seen this being done by grad students was a user study by John Ousterhout and others on Paxos vs. Raft. IIRC, they wanted to claim that Raft was more understandable or led to fewer bugs. Their study design was excellent. See here for an example: https://www.youtube.com/watch?v=YbZ3zDzDnrw&ab_channel=Diego... ]


If an actual maintainer (i.e. an "insider") approves your bug, then you're not testing the same thing (i.e. the impact an outsider can have), are you?


I meant the same set of subjects they wanted to focus on.


How is this supposed to work? Do you trust everyone equally? If I mailed you something (you being the "subject" in this case), would you trust it just as much as if someone in your family gave it to you?


This wouldn't really be representative. If people know they are being tested, they will be much more careful and cautious than when they are doing "business as usual".


> This took 1 min of thinking btw.

QFT.


Sending those patches is just disgraceful. I guess they're using the .edu emails, so banning the university is a very effective action that someone will have to respond to. Otherwise, the researchers would just quietly switch to other communities such as Apache or GNU. Who wants buggy patches?


They used gmail.


This is not surprising to me given the quality of Minnesota universities. U of M should be banned from existence. I remember vividly how they'd break their budgets redesigning cafeterias, hire low-quality 'professors' who refused to digitize paper assignments (they didn't know how), and artificially inflate dorm costs without access to affordable cooking (meal plans only). They have bankrupted plenty of students who were forced to drop out due to their policies on mental health. It's essentially against policy to be depressed or suicidal. They prey on kids in high school who don't at all know what they're signing up for.

Defund federal student loans. Make these universities stand on their own two feet or be replaced by something better.


The professor is going to give a ted talk in about a year talking about how he got banned from open source development and the five things he learned from it.


Clarification from their work that was posted on the professor's website:

https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....


How is such a ban going to be effective? The "researchers" could easily continue their experiments using different credentials, right?


Arbitrary anonymous submissions don't go into the kernel in general. The point[1] behind the Signed-off-by line is to associate a physical human being with real contact information with the change.

One of the reasons this worked is likely that submissions from large US research universities get a "presumptive good faith" pass. A small company in the PRC, for example, might see more intensive review. But given the history of open source, we trust graduate students maybe more than we should.

[1] Originally legal/copyright driven and not a security feature, though it has value in both domains.
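For readers who haven't seen one, the trailer is just a plain-text line at the end of the commit message, roughly like this (the name and address are placeholders, not taken from any real patch):

    Signed-off-by: Random J Developer <random@developer.example.org>

There is no cryptographic proof attached to it; it is an assertion of provenance under the Developer's Certificate of Origin, which is why the "presumptive good faith" calculus around who is signing off matters so much.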


> A small company in the PRC, for an example, might see more intensive review.

Which is a bit silly, isn't it? Grad students are poor and overworked, it seems easy to find one to trick/bribe into signing off your code, if you wanted to do something malicious.


Grad students have invested years of their life, for no reward, in research on a niche topic. Any ding to their reputation will adversely affect their entire career. I doubt this guy would get a post-doc fellowship anywhere after this.


> Any ding to their reputation will adversely affect their entire career.

If this is foolproof, then no one should be talking about the replication crisis.

People don't do bad things _expecting_ to be caught, if they haven't already convinced themselves they're not doing anything bad at all. And I suspect it's surprisingly easy to convince people that they won't get caught.


But they published papers about their misconduct... I don't know how they haven't been sanctioned already.

Replication is really a different problem. It's possible for you to do nothing wrong, run hundreds of trials, get a great result and publish it. But it was due to noise/error/unknown factors, and can't be replicated. The crisis is also that replication receives no academic recognition.

When people fabricate results, they know it's an offence; the problem with these guys is that they don't even acknowledge or understand the ethical rule they are breaking.


Well, there's nothing easier to corrupt than a small company (not just in the PRC), because you could found one specifically to introduce vulnerabilities without breaking any laws in any country I know of.


They do if the patch "looks good" to the right people.

In late January I submitted a patch with no prior contributions, and it was pushed to drm-misc-next within an hour. It's now filtered its way through drm-next and will likely land in 5.13.


But your signed-off-by was a correct email address with your real identity, as per:

https://github.com/torvalds/linux/blob/master/Documentation/...

Right? It's true that all systems can be gamed and you could no doubt fool the right maintainer into taking a patch from a fraudulent source. But the point is that it's not as simple as this grad student just resubmitting work under a different name.


> But your signed-off-by was a correct email address with your real identity, as per

Maybe?

My point with the above comment was more to point out that there is no special '"presumptive good faith" pass' that comes along with a .edu e-mail address, not that it's possible to subvert the system (that's already well known).

Everyone, including some random dude with a Hackers (1995) reference for an e-mail address (myself) gets that "presumptive good faith" pass.


The ban is aimed more at the UMN dept overseeing the research than at preventing continued "experiments." I imagine it would also make continued experiments even more unethical.


I think it is more of a message than a solution


> How is such a ban going to be effective?

It trashes the University of Minnesota in the press. What is going to happen is that the president of the university is now going to hear about it, as will the provost and the people in charge of doling out money. That will rapidly fix the professor problem.

While people may think that tenured professors get to do what they want, they never win in a war with a president and a provost. That professor is toast. And so are his researchers.


The professor's site says that he is an assistant professor, i.e., he doesn't actually have tenure yet.


Well his career is over. He's now unemployable in academia.


Oh no, now he'll just have to make twice the salary in private industry. We really stuck it to him.


Any data collected from such "research" would be unpublishable and therefore worthless.


Their whole department/university just got officially banned. If they attempt to circumvent that, the authorities would probably be involved due to fraud.


Thus moving from merely unethical to actually fraudulent? Although from the email exchanges it seems they are already making fraudulent statements...

At least it might prompt the University to take action against the researchers.


I believe this is so that the university treats the reports seriously. It's basically a "shit's broken, fix it". The researchers are probably under a lot of pressure from the rest of the university right now.


If you're a young hacker who wants to get into kernel development as a career, are you going to consider going to a university that has been banned from officially participating in development of arguably the most widely deployed kernel?

The next batch of "researchers" won't be attending the University of Minnesota, and other universities scared of the same fate (missing out on tuition money) will preemptively ban such research themselves.

"Effective" isn't binary, and this is a move in the right direction.


The kernel that runs on Mars now, and on home/work desktops.


So the professor at the center of this event, Kangjie Lu[0], is also on the program committee of IEEE S&P 2021.[1]

I'm by no means a security expert nor a kernel contributor, but considering he's on the program committee, are these kinds of practices commonplace among security/privacy researchers?

Do ideas/practices like this regularly get a pass at conference publishing?

[0] https://www-users.cs.umn.edu/~kjlu/ [1] https://www.ieee-security.org/TC/SP2022/cfpapers.html


Let me play devil's advocate here though. This is absolutely necessary and shows the process in the kernel is vulnerable.

Sure, this is "just" a university research project this time. And sure, this is done in bad taste.

But there are legitimately malicious national actors (well, including the US govt and the various 3 letter agencies) that absolutely do this. And the national actors are likely even far more sophisticated than a couple of PhD students. They have the time, resources and energy to do this over a very long period of time.

I think on the whole, this is very net positive in that it reveals the vulnerability of open source kernel development. Despite, how shitty it feels.


Let me pile on top of that and note that if Linus had listened to his elders and used a Microkernel instead of the monolith, the kernel would be small enough that this kind of thing wouldn't be happening.


You are free to use Minix or Hurd (not sure if a modern browser will even run on them), but if you want a microkernel so badly...

So if only Linus had listened, we would have Linux as a microkernel, equally feature-rich and widespread? Stupid Linus /s

https://www.minix3.org/

https://www.gnu.org/software/hurd/


Sure. And we are well past the time at which we need to develop real legal action and/or policy, with consequences, against this sort of thing.

We have an established legal framework to do this. It's called "tort law," and we need to learn how to point it at people who negligently or maliciously create and or mess with software.

What makes it difficult, of course, is that it should be pointed not only at jerk researchers, but at anyone who works on software, provably knows the harm their actions can or do cause, and does it anyway. This describes "black hat hackers," but also quite a few "establishment" sources of software production.


<conspiracy theory>This is intentionally malicious activity conducted with a perfect cover story</conspiracy theory>


Where does such "research" end... sending phishing mails to all US citizens to see how many passwords can be stolen?



Unethical and harmful.


Ah yes, showing those highly paid linux kernel developers how broken their system of trust and connection is! Great work.

Now if we can only find more open source developers to punish for trusting contributors!

Enjoy your ban.

Sorry if this comment seems off base; this research feels like a low blow to people trying to do good in a largely thankless job.

I would say they are violating some ideas of Ken Thompson: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...


I am honestly surprised anything like this can pass an ethics committee. The reputational risk seems huge.

For example, in economics departments there is usually a ban on lying to experiment participants. Many of them even explicitly explain to participants that this is a difference between economics and psychology experiments. The reason is that studying preferences is very important to economists, and if participants don’t believe that the experiment conditions are reliable, it will screw the research.


If the university was doing research then they should publish their findings on this most recent follow up experiment.

Suggested title:

“Linux Kernel developers found to reject nonsense patches from known bad actors”


As a side note to all of the discussion here, it would be really nice if we could find ways to take all of the incredible Linux infrastructure and repurpose it for seL4. It is pretty scary that we've got ~30M lines of code in the kernel and the primary process we have to catch major security bugs is to rely on the experienced eyes of Greg KH or similar. They're awesome, but they're also human. It would be much better to rely on capabilities and process isolation.


Who funds this? They acknowledge funding from the NSF but you could imagine that it would benefit some other large players to sow uncertainty and doubt about Open Source Software.


Shouldn't the university researchers compensate their human guinea pigs with some nice lettuce?


I think it's a fair measure, albeit drastic.

What happens if any of those patches ends up in a kernel release?

It's like setting random houses on fire just to test the responsiveness of local firefighters.


More like replacing random locks with junk in banks and seeing how long until they're discovered.


I don't know how their IRB approved this, although we also don't know what details the researchers gave the IRB.

It had a high human component because it was humans making many decisions in this process. In particular, there was the potential to cause maintainers personal embarrassment or professional censure by letting through a bugged patch.

If the researchers even considered this possibility, I doubt the IRB would have approved this experimental protocol if laid out in those terms.


This not only erodes trust in the University of Minnesota, but also erodes trust in the Linux kernel.

Imagine how downstream consumers of the kernel could be affected. The kernel is used for some extremely serious applications, in environments where updates are nonexistent. These bad patches could remain permanently in situ for mission-critical applications.

The University of Minnesota should be held liable for any damages or loss of life incurred by their reckless decision making.


This is insulting. The whole premise behind the paper is that open source developers aren't able to parse commits for malicious code. From a security standpoint, sure, I'm sure a bad actor could attempt to do this. But the fact that he tried this on the Linux kernel, an almost sacred piece of software IMO, and expected it to work takes me aback. This guy either has a huge ego or knows very little about those devs.


I'd be interested if there's a more ethical way to do this kind of research, that wouldn't involve actually shipping bugs to users. There certainly is some value in kind of "penetration testing" things to see how well bad actors could get away with this kind of stuff. We basically have to assume that more sophisticated actors are doing this without detection...


Using faked identities and faked papers to expose loopholes and issues in an institution is not news in the science community. The kernel community is presumably not immune to some of the common challenges any sizable institution faces, so some ethical hacking here seems reasonable.

However, doing it repeatedly under real names seems unhelpful to the community and indicates questionable motivation.


The ban seems rational, when viewed in the context of kernel development.

The benefit is twofold: (a) it's simpler to block a whole university than it is to figure out who the individuals are and (b) this sends a message that there is some responsibility at the institutional level.

The risk is that someone writing from that university address might have something that would be useful to the software.

Getting patches and pull requests accepted is not guaranteed. And it's asking a lot of kernel developers that they check not just for bad code but also for badly intended code.

I had a look at the research paper (https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...) and it saddens me to see such a thing coming out of a university. It's like a medical researcher introducing a disease to see whether it spreads quickly.


(I posted this on another entry that dropped off the first page of HN; sorry for the dupe.)

I fail to see how this does not amount to vandalism of public property. https://www.shouselaw.com/ca/defense/penal-code/594/


I can't help but think of the Sokal affair. But I'll leave the comparison to someone more knowledgeable about them both.


I'd bet that it was inspired by the Sokal affair. The difference in reaction is probably because people think the purity of Linux is important but the purity of obscure academic journals isn't. (They're probably right, because one fault in Linux will make the whole system insecure, whereas one dumb paper would go in next to the other dumb papers and leave the good papers unharmed.)

The similarities are that reviewers can get sleepy no matter what they're reviewing. Troll doll QC staff get sleepy. Nuclear reactor operators get sleepy too.


> The similarities are that reviewers can

Most people in the outgroup who know about the Sokal Affair but who know nothing about the journal they submitted to aren't aware of this, but Social Text was known to be not peer reviewed at the time. It's not that reviewers failed some test; there explicitly and publicly wasn't a review process. Everyone reading Social Text at the time would have known that and interpreted contents accordingly, so Sokal didn't demonstrate anything of value and was just being a jackass.


Is there a more readable version of this available somewhere? I really struggle to follow the unformatted mailing list format.


Scroll down to the "thread overview". There you can see the thread summarized in a tree layout, which makes more sense since asynchronous discussion isn't typically linear.

The current message in the tree is highlighted with the indicator "[this message]"; you can see replies branch out below it and parent messages above it.


Just keep hitting the "next" link to follow the thread.


The next link is one hyperlink buried in the middle of the wall of text, and simply appends the new message to the existing one. It also differentiates between prev and parent?

It's super unclear.


Scroll down a bit farther to see the full comment tree.

"Next" goes approximately down the tree in the order it's displayed on the page, by depth-first search.

"Prev" just reverses the same process as "Next".

"Parent" differs from "prev" in that it goes to the parent e-mail even if this email has earlier siblings.

(Generally, I just scroll down to the tree view and click around manually.)


The page has four sections, divided by <hr> tags:

1) The email message, with a few headers included

2) A thread overview, with all emails in the thread

3) Instructions on how to reply

4) Information about how to access the list archives.

You need only care about (1) and (2). The difference between prev and parent is indicated by the tree view in (2). The previous one is the previous one in the tree, which might not necessarily be the parent if the parent has spawned earlier replies.


This is the big revert, and a good overview of all the damage they did. Some patches were good, most were malicious, and most author names were fantasy.

https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
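If you want to gauge the scope yourself in a local tree, a rough sketch (illustrative commands only; note that several of the patches were reportedly sent from throwaway non-university addresses, so filtering on the author domain undercounts):

    # list commits whose author email is at umn.edu
    git log --all --format='%h %ae %s' | grep -i '@umn.edu'

    # stage reverts of specific commits for review rather than applying them blindly
    git revert --no-commit <sha> [<sha> ...]

This is not how the actual revert series linked above was prepared; it is just a quick way to see which commits are in question.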


Interesting tidbit from the prof's CV where he lists the paper, interpret from it what you will[1]:

> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits

> Qiushi Wu, and Kangjie Lu.

> To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.

> Note: The experiment did not introduce any bug or bug-introducing commit into OSS. It demonstrated weaknesses in the patching process in a safe way. No user was affected, and IRB exempt was issued. The experiment actually fixed three real bugs. Please see the clarifications[2].

1: https://www-users.cs.umn.edu/~kjlu/

2: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....


So FOSS is insecure if maintainers are lazy? This would hold true for any piece of software, wouldn't it? The difference here is that even though the "hypocrite commits" /were/ accepted, they were spotted soon after. Something that might not have happened quite as quickly in a closed source project.


I have to wonder what's going to happen to the advisor who oversaw this research. This knee-caps the whole department when conducting OS research and collaboration. If this isn't considered a big deal in the department, it should be. I certainly wouldn't pursue a graduate degree there in OS research now.


What I don't get: why not ask the board of the Linux Foundation if they could attempt social-engineering attacks and get authorization? If the Linux Foundation saw value in it, they'd approve it, and who knows, maybe such tests (hiring pentesters to do social engineering) are done by the Linux Foundation anyway.


This seems like a pretty scummy way to do "research". I mean I understand that people in academia are becoming increasingly disconnected from the real world, but wow this is low. It's not that they're doing this, I'm sure they're not the first to think of this (for research or malicious reasons), but having the gall to brag about it is a new low.


> having the gall to brag about it is a new low

Even worse: They bragged about it, then sent a new wave of buggy patches to see if the "test subjects" fall for it once again, and then tried to push the blame on the kernel maintainers for being "intimidating to newbies".

This is thinly veiled and potentially dangerous bullying.


> This is thinly veiled and potentially dangerous bullying.

Which itself could be the basis of a follow up research paper. The first one was about surreptitiously slipping vulnerabilities into the kernel code.

There's nothing surreptitious about their current behavior. They're now known bad actors attempting to get patches approved. First nonchalantly, and after getting called out and rejected they framed it as an attempt at bullying by the maintainers.

If patches end up getting approved, everything about the situation is ripe for another paper. The initial rejection, attempting to frame it as bullying by the maintainers (which ironically, is thinly veiled bullying itself), impact of public pressure (which currently seems to be in the maintainers' favor, but the public is fickle and could turn on a dime).

Hell, even if the attempt isn't successful you could probably turn it into another paper anyway. Wouldn't be as splashy, but would still be an interesting meta-analysis of techniques bad actors can use to exploit the human nature of the open source process.


Yep, while the downside is that it wastes maintainers’ time and they are rightfully annoyed, I find the overall topic fascinating not repulsive. This is a real world red team pen test on one of the highest profile software projects. There is a lot to learn here all around! Hope the UMN people didn't burn goodwill by being too annoying, though. Sounds like they may not be the best red team after all...


A good red team pentest would have been to just stop after the first round of patches, not to try again and then cry foul when they get rightfully rejected. Unless, of course, social denunciation is part of the attack (and yes, it's admittedly a pretty good side channel), but that's a rather grisly social-engineering attack, wouldn't you agree?


A real world red team?

Wouldn't the correct term for that be: malicious threat actor?

Red team penetration testing doesn't involve the element of surprise, and is pre-arranged.

Intentionally wasting people's time, and then going further to claim you weren't, is a malicious act, as it intends to do harm.

I agree though, it's fascinating but only in the true crime sense.


Totally agree. It is a threat, not pen testing. Pen testing would stop when it was obvious they would or had succeeded, and notify the project so they could remedy the process and prevent it in the future. Resorting to name calling and outright manipulative behavior is immature and counterproductive in any case except where the action is malicious.


I agree. If it quacks like a duck and waddles like a duck, then it is a duck. Anyone secretly introducing exploitable bugs in a project is a malicious threat actor. It doesn't matter if it is a "respectable" university or a teenager, it matters what they _do_.


They did not secretly introduce exploitable bugs:

Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.

https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

> If it quacks like a duck and waddles like a duck, then it is a duck.

A lot of horrible things have happened on the Internet by following that philosophy. I think it's imperative to learn the rigorous facts and different interpretations of them, or we will continue to great harm and be easily manipulated.


> Which itself could be the basis of a follow up research paper.

Seems more like low grade journalism to me.


But the first paper is a Software Engineering paper (social-exploit-vector vulnerability research), while the hypothetical second paper would be a Sociology paper about the culture of FOSS. Kind of out-of-discipline for the people who were writing the first paper.


There's certainly a sociology aspect to the whole thing, but the hypothetical second paper is just as much social-exploit-vector vulnerability research as the first one. The only change being the state of the actor involved.

The existing paper researched the feasibility of unknown actors introducing vulnerable code. The hypothetical second paper has the same basis, but is from the vantage point of a known bad actor.

Reading through the mailing list (as best I can), the maintainer's response to the latest buggy patches seemed pretty civil[1] in general, and even more so considering the prior behavior. And the submitter's response to that (quoted here[2]) went to the extreme end of defensiveness. Instead of addressing or acknowledging anything in the maintainer's message, the submitter:

- Rejected the concerns of the maintainer as "wild accusations bordering on slander"

- Stating their naivety of the kernel code, establishing themselves as a newbie

- Called out the unfriendliness of the maintainers to newbies and non-experts

- Accused the maintainer of having preconceived biases

An empathetic reading of their response is that they really are a newbie trying to be helpful who got defensive after feeling attacked. But a cynical reading of their response is that they're attempting to exploit high-visibility social issues to pressure or coerce the maintainers into accepting patches from a known bad actor.

The cynical interpretation is as much social-exploit-vector vulnerability research as what they did before. Considering how they deflected on the maintainer's concerns stemming from their prior behavior and immediately pulled a whole bunch of hot-button social issues into the conversation at the same time, the cynical interpretation seems at least plausible.

[1] https://lore.kernel.org/linux-nfs/YH5%2Fi7OvsjSmqADv@kroah.c...

[2] https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...


And they tried to blow the "preconceived biases" dog whistle. I read that as a threat.



WTF. I didn't have strong feelings about that until reading this thread. Nothing like doubling down on the assholishness after getting caught, Aditya.


Intimidating new people is the same line that was lobbed at Linus to neuter his public persona. It would not surprise me if opportunists utilize this kind of language more frequently in the future.


It isn't even bullying. It is just dumb?

Fortunately, the episode also suggests that the kernel-development immune system is fully operational.


Not sure. From what I read they've successfully introduced a vulnerability in their first attempt. Would anyone have noticed if they didn't call more attention to their activities?


Can you point to this please? From my reading, it appears that their earlier patches were merged, but there is no mention of them being actual vulnerabilities. The lkml thread does mention they want to revert these patches, just in case.


From LKML

"A lot of these have already reached the stable trees. I can send you revert patches for stable by the end of today (if your scripts have not already done it)."

https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...


It's not saying that those are introduced bugs; IMHO they're just proactively reverting all commits from these people.


> > > They introduce kernel bugs on purpose. Yesterday, I took a look on 4 accepted patches from Aditya and 3 of them added various severity security "holes".

It looks like actual security vulnerabilities were successfully added to the stable branch based on that comment.


Yes, because the UMN guys have made their intent clear and even went on to defend their actions. They should have apologised and asked for their patches to be reverted.


Which kind of sucks for everyone else at UMN, including people who are submitting actual security fixes...


There are some activities that should be "intimidating to newbies" though, shouldn't there? I can think of a lot of specific examples, but in general, anything where significant preparation is helpful in avoiding expensive (or dangerous) accidents. Or where lack of preparation (or intentional "mistakes" like in this case) would shift the burden of work unfairly onto someone else. Also, a "newbie" in the context of Linux system programming would still imply reasonable experience and skill in writing code, and in checking and testing your work.


I'm gonna go against the grain here and say I don't think this is a continuation of the original research. It'd be a strange change in methodology. The first paper used temporary email addresses, why switch to a single real one? The first paper alerted maintainers as soon as patches were approved, why switch to allowing them to make it through to stable? The first paper focused on a few subtle changes, why switch to random scattershot patches? Sure, this person's advisor is listed as a co-author of the first paper, but that really doesn't imply the level of coordination that people are assuming here.


It doesn't really matter that he/they changed MO, because they've already shown themselves to be untrustworthy. You can only get the benefit of the doubt once.

I'm not saying people or institutions can't change. But the burden of proof is on them now to show that they did. A good first step would be to acknowledge that there IS a good reason for doubt, and certainly not to whine about 'preconceived bias'.


They had already done it once without asking for consent. At least in my eyes, that makes them (everyone on the team) lose their credibility. Notifying the kernel maintainers afterwards is irrelevant.

It is not the job of the kernel maintainers to justify the team's new nonsense patches. If the team has stopped the bullshit, they should defend the merit of their own patches. They have failed to do so, and instead tried to deflect with recriminations, and now they are banned.


At this point how do you even make the difference between their genuine behavior and the behavior that is part of the research?


I would say that, from the point of view of the kernel maintainers, that question is irrelevant, as they never agreed to take part in any research. Therefore, from their perspective, all the behaviour is genuinely malevolent, regardless of the individual intentions of each UMN researcher.


This. This research says something about Minnesota's ethics approval process.


I'm surprised it passed their IRB. Any research has to go through them, even if it's just for the IRB to confirm "No, this does not require a full review". Either the researchers here framed it in a way that suggested no damage was being done, or they relied on their IRB lacking the technical understanding to realize what was going on.


According to one of the researchers who co-signed a letter of concern over the issue, the Minnesota group also only received IRB approval retroactively, after said letter of concern [1].

[1] https://twitter.com/SarahJamieLewis/status/13848713855379087...


In the paper they state that they received an exemption from the IRB.


I'd love to see what they submitted to their IRB to get the determination of no human subjects:

It had a high human component because it was humans making decisions in this process. In particular, there was the potential to cause maintainers personal embarrassment or professional censure by letting through a bugged patch. If the researchers even considered this possibility, I doubt the IRB would have approved this experimental protocol if laid out in those terms.


https://research.umn.edu/units/irb/how-submit/new-study , find the document that points to "determining that it's not human research", leads you to https://drive.google.com/file/d/0Bw4LRE9kGb69Mm5TbldxSVkwTms...

The only relevant question is: "Will the investigator use ... information ... obtained through ... manipulations of those individuals or their environment for research purposes?"

which could be idly thought of as "I'm just sending an email, what's wrong with that? That's not manipulating their environment".

But I feel they're wrong.

https://grants.nih.gov/policy/humansubjects/hs-decision.htm would seem to agree that it's non-exempt (i.e. potentially problematic) human research if "there will be an interaction with subjects for the collection of ... data (including ... observation of behaviour)" and there's not a well-worn path (survey/public observation only/academic setting/subject agrees to study) with additional criteria.


Agreed: sending an email is certainly manipulating their environment when the action taken (or not taken) as a result has the potential for harm. Imagine an extreme example of an email death-threat: That is an undeniable harm, meaning email has such potential, so the IRB should have conducted a more thorough review.

Besides, all we have to do is look at the outcome: Outrage on the part of the organization targeted, and a ban by that organization that will limit the researcher's institution from conducting certain types of research.

That this human-level harm was the actual outcome means the experiment was de facto an experiment involving human subjects.


I have to admit, I can completely understand how submitting source code patches to the linux kernel doesn't sound like human testing to the layman.

Not to excuse them at all, I think the results are entirely appropriate. What they're seeing is the immune system doing its job. Going easy on them just because they're a university would skew the results of the research, and we wouldn't want that.


Agreed: I can understand how the IRB overlooked this. The researchers don't get a pass though. And considering the actual harm done, the researchers could not have presented an appropriate explanation to their IRB.


This research is not exempt.

One of the important rules you must agree to is that you cannot deceive anyone in any way, no matter how small, if you are going to claim that you are doing exempt research.

These researchers violated the rules of their IRB. Someone should contact their IRB and tell them.


This was (1) research with human subjects (2) where the human subjects were deceived, and (3) there was no informed consent!

If the IRB approved this as exempt and they had an accurate understanding of the experiment, it makes me question the IRB itself. Whether the researchers were dishonest with the IRB or the IRB approved this as exempt, it's outrageous.


Just so you know, you appear to have been shadowbanned. I'm not sure why, probably for having a new account and getting quickly downvoted in this thread. (Admittedly you come across slightly strong, but... not outside of what I think is reasonable, so I dunno what's going on.)

I do recommend participating more in other threads and a little less in this thread, where you're repeating pretty much the same point over and over.


lol it didn't. looks like some spots are opening up at UMN's IRB. :)


Yeah, I don't think they can claim that human subjects weren't part of this when there is outrage on the part of the humans working at the targeted organization and a ban on the researchers' institution from doing any research in this area.


Yes!! Minnesota sota caballo rey. Spanish cards dude


It does prevent anyone with a umn.edu email address, be it a student or professor, from submitting patches of _any kind_, even if they're not part of the research at all. A professor might genuinely just find a bug in the Linux kernel running on their machines, fix it, and be unable to submit it.

To be clear, I don't think what the kernel maintainers did is wrong; it's just sad that all past and future potentially genuine contributions to the kernel from the university have been caught in the crossfire.


I looked into it (https://old.reddit.com/r/linux/comments/mvd6zv/greg_khs_resp...). People from the University of Minnesota have 280 commits to the Linux kernel. Of those, 232 are from the three people directly implicated in this attack (that is, Aditya Pakki and the two authors of the paper), and the remaining 28 commits are from one individual who might not be directly involved.
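For anyone wanting to reproduce that kind of breakdown locally, something along these lines works (a sketch only; the linked post may have counted differently, and it misses anything submitted under non-university addresses):

    git log --format='%an <%ae>' | grep -i 'umn.edu' | sort | uniq -c | sort -rn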


He writes "We are not experts in the linux kernel..." after pushing so many changes since 2018. I am left scratching my head.


And what about the other 20 commits? (not that it is so important, but sometimes a missing detail can be annoying)


Haha


The professor, or any students, can just use a non edu email address, right? It really doesn't seem like a big deal to me. It's not like they can personally ban anyone who's been to that campus, just the edu email address.


However, if you use a personal email, you can’t hide behind “I’m just doing my research”.


No, that would get them around an automatic filter, but the ban was on people from the university, not just people using uni email addresses.

I'm not sure how the law works in such cases, but surely the IRB would eventually have to realize that an explicit denouncement by the victims means that the "research" cannot go ahead.


For one, it's a way of punishing the university.

E.g., if you want to do kernel-related research, don't go to the University of Minnesota.


Which is completely fine, IMO, because, as pointed out already, the university's IRB has utterly failed here. There is no way this sort of "research" could have passed an ethics review:

- Human subjects
- Intentionally misleading/misrepresenting things, potential for a lot of damage, given how widespread Linux is
- No informed consent at all!

Sorry but one cannot use unsuspecting people as guinea pigs for research, even if it is someone from a reputable institution.


I think explicitly stating that no one from the university is allowed to submit patches includes disallowing them from submitting using personal/spoof addresses.

Sure, they can only automatically ban the .edu address, but it would be pretty meaningless to ban the university email host while being OK with the same people submitting patches from personal accounts.

I would also explicitly ban every person involved with this "research" and add their names to a hypothetical ban list.


As a Minnesota U employee/student you cannot submit officially from campus or using the minn. u domain.

As Joe Blow at home who happens to go to school or work there you could submit even if you were part of the research team. Because you are not representing the university. The university is banned.


It would be hard to show this wasn’t genuine behaviour but a malicious attempt to infect the Linux kernel. That still doesn’t give them a pass though. Academia is full of copycat “scholars”. Kernel maintainers would end up wasting significant chunks of their time fending off this type of “research”.


The kernel maintainers don't need to show or prove anything, or owe anyone an explanation. The University's staff/students are banned, and their work will be undone within a few days.

The reputational damage will be lasting, both for the researchers, and for UMN.


One could probably do a paper about evil universities doing stupid things. Anyway, evil actions are evil regardless of the context; research 100 years ago was intentionally evil without being questioned, but today ethics should filter what research gets done.


>then tried to push the blame on the kernel maintainers for being "intimidating to newbies".

As soon as I read that all sympathy for this clown was out the window. He knows exactly what he's doing.


Why not just call it what it is: fraud. They tried to deceive the maintainers into incorporating buggy code under false pretenses. They lied (yes, let's use that word) about it, then doubled down about the lie when caught.


This looks like a very cynical attempt to leverage PC language to manipulate people. Basically a social engineering attack. They will surely try to present it as a pentest, but IMHO it should be treated as an attack.


I don't see any sense in which this is bullying.


I come to your car, cut your brakes, tell you just before you go on a ride, and say it's just research and I will repair them. What would you call a person like that?


I'm not sure, but i certainly wouldn't call them a bully.


>I mean I understand that people in academia are becoming increasingly disconnected from the real world, but wow this is low.

I don't have data to back this up, but I've been around a while and I can tell you papers are rejected from conferences for ethics violations. My personal observation is that infosec/cybersecurity academia has been steadily moving to higher ethical standards in research. That doesn't mean that all academics follow this trend, but that unethical research is more likely to get your paper rejected from conferences.

Submitting bugs to an open source project is the sort of stunt hackers would have done in 1990 and then presented at a defcon talk.


> I don't have data to back this up, but I've been around a while and I can tell you papers are rejected from conferences for ethics violations.

IEEE seems to have no problem with this paper though.

>>> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits
>>> Qiushi Wu, and Kangjie Lu.
>>> To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland'21). Virtual conference, May 2021.

from https://www-users.cs.umn.edu/~kjlu/


Section IV.A:

> We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.

It seems that the research in this paper has been done properly.

EDIT: since several comments come to the same point, I paste here an observation.

They answer to these objections as well. Same section:

> Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.

And, coming to ethics:

> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.


I'm surprised that the IRB determined this to be not human subjects research.

When I fill out the NIH's "is this human research" tool with my understanding of what the study did, it tells me it IS human subjects research, and is not exempt. There was an interaction with humans for the collection of data (observation of behavior), and the subjects haven't prospectively agreed to the intervention, and none of the other very narrow exceptions apply.

https://grants.nih.gov/policy/humansubjects/hs-decision.htm


> It seems that the research in this paper has been done properly.

How is wasting the time of maintainers of one of the most popular open source project "done properly"?

Also, someone correct me if I'm wrong, but I think if you do experiments that involve other humans, you need to have their consent _before_ starting the experiment, otherwise you're breaking a bunch of rules around ethics.


They answer to this objection as well. Same section:

> Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.

And, coming to ethics:

> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.


> They answer to this objection as well. Same section:

Not sure how that passage justifies wasting the time of these people working on the kernel. Because the issues they pretend to fix are real issues and once their research is done, they also submit the fixes? What about the patches they submitted (like https://lore.kernel.org/linux-nfs/20210407001658.2208535-1-p...) that didn't make any sense and didn't actually change anything?

> And, coming to ethics:

So it seems that they didn't just mislead the developers of the kernel, they also misled the IRB, which would never have approved the study without consent from the developers, since they were experimenting on humans and that requires consent.

Even in the section you quoted above, they confess that they need to interact with the developers ("this experiment will take certain time of maintainers in reviewing the patches"), so how can they be IRB-exempt?

The closer you look, the more sour this whole thing smells.


> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.

I was wondering why he banned the whole university and not just these particular researchers. I think your quote is the answer to that. I'm not sure on what basis this exemption was granted.

Here's what the NIH says about it:

Definition of Human Subjects Research

https://grants.nih.gov/policy/humansubjects/research.htm

Decision Tool: Am I Doing Human Subjects Research?

https://grants.nih.gov/policy/humansubjects/hs-decision.htm

And even if they did find some way to justify it under their own rules, some of the research subjects clearly disagree.


Because the paper states that they partially used fantasy names. So far only 4 names of real @umn.edu people from Kangjie Lu's lab have been found, which could easily be blocked; most commits come from two of his students, Aditya Pakki and Qiushi Wu, plus his colleague Wenwen Wang. The Wenwen Wang fixes look like actual fixes, though, not malicious. Some of Lu's earlier patches also look good.

https://lore.kernel.org/lkml/20210421130105.1226686-8-gregkh... for the full list


Is "we acknowledge that this will waste their time but we're going to do it anyway" really an adequate answer to that objection?


They appear to have told the IRB they weren't experimenting on humans, but that doesn't make sense to me given that the reaction of the maintainers is precisely what they were looking at.

Inasmuch as the IRB marked this as "not human research" they appear to have erred.


Sounds like the IRB may need to update their ethical standards then. Pointing to the IRB exemption doesn't necessarily make it fine, it could just mean the IRB has outdated ethical standards when it comes to research with comp sci implications.


It doesn't make it fine, no. But it does make a massive difference: it's the difference between being completely reckless about this and asking for at least token external validation.


If by "it does make a massive difference" you mean it implicates the university as an organization rather than these individuals then you're right.


At least one human, GKH, disagrees.


To me, this further emphasizes the idea that academia has some serious issues. If some academic institution wasted even 10 minutes of my time without my consent, I'd have a bad taste in my mouth about them for a long time. Time is money, and if volunteers believe their time is being wasted, they will cease to be volunteers, which then affects a much larger ecosystem.


Depends on your notion of "properly". IMO "ask for forgiveness instead of permission" is not an acceptable way to experiment on people. The "proper" way to do this would've been to request permission from the higher echelons of Linux devs beforehand, instead of blindly wasting the time of everyone involved just so you can write a research paper.


That's still not asking permission from the actual humans you're experimenting on, i.e. the non-"higher echelons" humans who actually review the patch.


This points to a serious disconnect between research communities and development communities.

I would have reacted the same way Greg did - I don't care what credentials someone has or what their hidden purpose is, if you are intentionally submitting malicious code, I would ban you and shame you.

If particular researchers continue to use methods like this, I think they will find their post-graduate careers limited by the reputation they're already establishing for themselves.


Saying something is ethical because a committee approved it is dangerously tautological (you can't justify any unethical behavior because someone at some time said it was ethical!).

We can independently conclude that this kind of research has put open source projects in danger by introducing vulnerabilities that could carry serious real-world consequences. I could imagine many other ways of carrying out this experiment without the consequences it appears to have had, like perhaps inviting developers to a private repository and keeping the patch from going public, or collaborating with maintainers to set up a more controlled experiment without risks.

This seems, by all appearances, unilateral and egoistic behavior without much thought given to its real-world consequences.

Hopefully researchers learn from it and it doesn't discourage future ethical kernel research.


The goal of ethical research wouldn't be to protect the Linux kernel, it would be to protect the rights and wellbeing of the people being studied.

Even if none of the patches made into the kernel (which doesn't seem to be true, according to other accounts), it's still possible to do permanent damage to the community of kernel maintainers.


Not really done properly: They were testing out the integrity of the system. This includes the process by which they notified the maintainers not to go ahead. What if that step had failed and the maintainers missed that message?

Essentially, the researchers were not in a position to stop the experiment if it deviated from expectations. They were relying on the exact system they were testing to trigger its halt.

We also don't know what details they gave the IRB. They may have passed through due to IRB's naivete on this: It had a high human component because it was humans making many decisions in this process. In particular, there was the potential to cause maintainers personal embarrassment or professional censure by letting through a bugged patch. If the researchers even considered this possibility, I doubt the IRB would have approved this experimental protocol if laid out in those terms.


In my admittedly limited interaction with human subjects research approval, I would guess that this would not have been considered a proper setup. For one thing, there was no informed consent from any of the test subjects.


The piss-weak IRB decided that no such thing was necessary, hence no consent was requested. It's impossible not to get cynical about these review boards; their only purpose seems to be to deflect liability.


In their "clarifications" [1], they say:

"In the past several years, we devote most of our time to improving the Linux kernel, and we have found and fixed more than one thousand kernel bugs"

But someone upthread posted that this group has a total of about 280 commits in the kernel tree. That doesn't seem like anywhere near enough to fix more than a thousand bugs.

Also, the clarification then says:

"the extensive bug finding and fixing experience also allowed us to observe issues with the patching process and motivated us to improve it"

And the way you do that is to tell the Linux kernel maintainers about the issues you observed and discuss with them ways to fix them. But of course that's not at all what this group did. So no, I don't agree that this research was done "properly". It shouldn't have been done at all the way it was done.

[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....


But still, this kind of research puts undue pressure on the kernel maintainers, who have to review patches that were not submitted in good faith (where "good faith" = the author of the patch was trying to improve the kernel).


I think that was kind of the point of the research: submitting broken patches to the kernel represents a feasible attack surface which is difficult to mitigate, precisely because kernel maintainers already have such a hard job.


So what's the null hypothesis here? That human maintainers are infallible? Why does this even need to be researched?


If something is determined not to be human research, that doesn't automatically make it ethical.


TIL that open source project maintainers aren't humans.


Something I've expected for years, but have never had evidence... until now.


Or, alternatively, that submitting buggy patches on purpose is not research.


"in all the three cases" is mildly interesting, as 232 commits have been reverted from these three actors. To my reading this means they either have a legitimate history of contributions with three red herrings, or they have a different understanding of the word "all" than I do.


A “simple” change can still require major effort to evaluate. Bogus logic on their part.


> IEEE seems to have no problem with this paper though.

IEEE is just the publishing organisation and doesn't review research. That's handled by the program committee that each IEEE conference has. These committees consist of several dozen researchers from various institutions that review each paper submission. A typical paper is reviewed by 2-5 people and the idea is that these reviewers can catch ethical problems. As you may expect, there's wide variance in how well this works.

While problematic research still slips through the cracks, the field as a whole is getting more sensitive to ethical issues. Part of the problem is that we don't yet have well-defined processes and expectations for how to deal with these issues. People often expect IRBs to make a judgement call on ethics but many (if not most) IRBs don't have computer scientists that are able to understand the nuances of a given research projects and are therefore ill-equipped to reason about the implications.


Decent odds their paper gets pulled by the conference organizers now.


The IEEE Symposium on Security and Privacy should remove this paper at once for gross ethics violations. The message should be strong and unequivocal that this type of behavior is not tolerated.


"To appear"


"To appear" has a technical meaning in academia, though—it doesn't mean "I hope"; it means "it's been formally accepted but hasn't actually been put in 'print' yet."

That doesn't stop someone from lying about it, but it's not a casual claim, and doing so would probably bring community censure (as well as being easily falsifiable after time).


"To appear" to me meant; it is under revision by IEEE, otherwise why not just to state paper was accepted by IEEE.


It is a bit more complicated, since this is a conference paper. Usually, if a conference paper is accepted, it is only published if the presentation was held (so if the speaker cancels, or doesn't show up, the publication is revoked).

Edit: All conferences are different; I don't know if it applies to that one.


I have only ever attended one conference, but I attended it about 32 times, and the printed proceedings were in my hands before I attended any talks in the last dozen or so. How does revocation work in that event?


Well, it depends on the conference. I know this to be true for a certain IEEE conference, so I assumed it to be the same for this IEEE one, but I have to admit I didn't check. You are right, I also remember the handouts at a different conference being handed out on a USB stick at arrival.


That makes sense, thank you for the explanation.


It's jargon in academia.


I'm not holding my breath. I don't think they will pull that paper.

Security research is not always the most ethical branch of computer science, to put it mildly. Those are the people selling exploits to oppressive regimes, or allowing companies to sit on "responsibly reported" bugs for years while hand-wringing about "that wasn't in the attacker model, sorry, the 'secure whatever' we sold is practically useless". Of course the overall community isn't like that, but the bad apples spoil the bunch. And the aforementioned unethical behaviour even seems widely accepted.


What are you trying to suggest? It's an accepted paper, the event just hasn't happened yet.


Yup, it's basically stating the obvious: that any system based on an assumption of good faith is vulnerable to bad faith actors. The kernel devs are probably on the lookout for someone trying to introduce backdoors, but simply introducing a bug for the sake of introducing a bug (without knowing if it can be exploited), which is obviously much easier to do stealthily - why would anyone do that? Except for "academic research" of course...


> why would anyone do that?

I can think of a whole lot of three letter agencies with reasons to do that, most of whom recruit directly from universities.


Academic research, cyberwarfare, a rival operating system architecture attempting to diminish the quality of an alternative to the system they're developing, the lulz of knowing one has damaged something... The reasons for bad-faith action are myriad, as diverse as human creativity.


In theory, wouldn't it be possible to introduce bugs that are seemingly innocuous when reviewed independently but that, when combined, form an exploit?

Could a number of seemingly unrelated individuals introduce a number of bugs over time to form an exploit without being detected?


Yes, of course, and I'm fairly certain it's happened before, or at least there have been suspicions of it happening. That's why trust is important, and why I'm glad kernel development is not very friendly.

Doing code review at work I am constantly catching blatantly obvious security bugs. Most developers are so happy to get the thing to work, that they don't even consider security. This is in high level languages, with a fairly small team, only internal users, and pretty simple code base. I can't imagine trying to do it for something as high stakes and complicated as the kernel. Not to mention how subtle bugs can be in C. I suspect it is impossible to distinguish incompetence from malice. So aggressively weeding out incompetence, and then forming layers of trust is the only real defense.


I think in some cases you wouldn't even need multiple patches; sometimes very small things can be exploits. See: http://www.ioccc.org/


Another source on such things, although no longer an ongoing effort: http://underhanded-c.org/_page_id_2.html
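For a flavor of how little it can take, here is a minimal, hypothetical C sketch in the spirit of that contest (not code from the kernel or from the UMN patches): a single missing pair of parentheses silently weakens a permission check, because == binds tighter than &.

    #include <stdio.h>

    #define FLAG_USER  0x1u
    #define FLAG_ADMIN 0x4u

    /* Intended: (flags & FLAG_ADMIN) == FLAG_ADMIN.
     * As written, == binds tighter than &, so this evaluates
     * flags & (FLAG_ADMIN == FLAG_ADMIN), i.e. flags & 1:
     * any caller with only the ordinary USER bit set now passes. */
    static int is_admin(unsigned int flags)
    {
        return flags & FLAG_ADMIN == FLAG_ADMIN;
    }

    int main(void)
    {
        printf("plain user treated as admin? %d\n", is_admin(FLAG_USER));
        return 0;
    }

It compiles and runs (modern compilers do warn with -Wparentheses), and even if caught it reads as an innocent precedence slip rather than sabotage.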


Yes. binfmt and some other parts of systemd are an example of this; they introduce vulnerabilities that existed in Windows 95. Not going into detail because it still needs to be fixed, assuming it was not intentional.


In that scenario, it is a genuine bug. Not a malicious actor


I believe this violates research ethics hard, very hard. It reminds me of someone aiming to research children's mental development by studying the infliction of mental damage. The subjects and the likely damage are not similar, but the approach and mentality are inconveniently so.


Yep, the first thing I thought was: how did this get through the research ethics panel? (All research at my university has to get approval.)


What I don't understand is how this is ethical, but the Sokal hoax was deemed unethical. I assume it's because in Sokal's case, academia was humiliated, whereas here the target is outside academia.


To me, this seems like a convoluted way to hide malicious actions as research, (not the other way around). This smells of intentional vulnerability introduction under the guise of academic investigation. There are millions of other, less critical, open source solutions this "research" could have tested on. I believe this was an intentional targeted attack, and it should be treated as such.


The "scientific" question answered by the mentioned paper is basically:

"Can open-source maintainers make a mistake by accepting faulty commits?"

In addition to being scummy, this research seems utterly pointless to me. Of course mistakes can happen, we are all humans, even the Linux maintainers.


This observation may very well get downvoted to oblivion: what UMN pulled is the Linux kernel development version of the Sokal Hoax.

Both are unethical, disruptive, and prove nothing about the integrity of the organizations they target.


The main difference is that the Sokal Hoax worked (that is why it is notable).


Except for Linux actively running on 99% of all servers on the planet. Vulnerabilities in Linux can literally kill people, open holes for hackers, spies, etc.

Submitting a fake paper to a journal read by a few dozen academics is a threat to someones ego. It is not in the same ballpark as a threat to IT infrastructure everywhere.


The researchers have a future at Facebook, which experimented on how to make users feel bad.

https://duckduckgo.com/?q=facebook+emotional+study&t=fpas&ia...


Agreed. Plus, I find the "oh, we didn't know what we were doing, you're not an inviting community" social engineering response completely slimy and off-putting.


Technically analogous to pen testing, except that it wasn't done at the behest of the target, as legal pen testing is. Hence it is indistinguishable from, and must be considered, a malicious attack.


Agree, and it seems like at least this patch, despite the researcher’s protestations, actually landed sufficiently that it could have caused harm? https://lore.kernel.org/patchwork/patch/1062098/


I've been scratching my head at this one and admit I can't spot how it can be harmful. Why wouldn't you release the buffer if the send fails?


It might be a double free if the buffer is released elsewhere.
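Roughly the hazard being discussed, as a minimal sketch with made-up names (this is not the actual mlx5 code): ownership of the buffer has to live in exactly one place, either the error path or the completion path.

    #include <stdlib.h>

    struct buf { char data[64]; };

    /* Hypothetical completion callback: in a real driver this runs once
     * the hardware is finished with a successfully queued buffer. */
    static void send_complete(struct buf *b)
    {
        free(b);                /* the completion path owns the buffer */
    }

    /* Hypothetical send: returns 0 if queued (completion will fire later),
     * -1 if the buffer was never queued (the caller keeps ownership). */
    static int fake_send(struct buf *b, int reject)
    {
        if (reject)
            return -1;
        send_complete(b);       /* stand-in for the asynchronous completion */
        return 0;
    }

    int main(void)
    {
        struct buf *b = malloc(sizeof(*b));
        if (!b)
            return 1;

        if (fake_send(b, 1) != 0)
            free(b);            /* safe only because a rejected buffer was
                                 * never queued; if the completion could
                                 * still run for it, this free would be the
                                 * second one -- a double free */
        return 0;
    }

Whether the real patch is harmful therefore comes down to whether a failed send can ever still reach the completion path.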


The buffer should only be released by its own complete callback, which only gets called after being successfully queued. Moreover, other uses of `mlx5_fpga_conn_send`, and the related `mlx5_fpga_conn_post_recv` will free after error.

The other part of the patch, that checks for `flow` being NULL may be unnecessary since it looks like the handle is always from an active context. But that's a guess. And it's only unreachable code.

My take from this is that, despite the other patches being bad ideas, this one doesn't look like one. And because the other patches didn't make it past the mailing list, it demonstrates that the maintainers are doing a good enough job.


You’re right, that wasn’t one of the bad patches: https://lore.kernel.org/lkml/CAK8KejpUVLxmqp026JY7x5GzHU2YJL...


Unfortunately, we cannot be sure it is low for today's academia. So many people working there, with nothing useful to do other than flooding the conferences and journals with papers. They are desperate for anything that could be published. Plus, they know that the standards are low, because they see the other publications.


Devil's advocate, but why? How is this different from any other white/gray-hat pentest? They tried to submit buggy patches, and once approved they immediately let the maintainers know not to merge them. Then they published a paper with their findings, which weak parts in the process they think are responsible, and which steps they recommend be taken to mitigate this.


Very easy: if it's not authorized, it's not a pentest or red team operation.

Any pentester or red team considers their profession an ethical one.

By the response of the Linux Foundation, this is clearly not authorized nor falling into any bug bounty rules/framework they would offer. Social engineering attacks are often out of bounds for bug bounty - and even for authorized engagements need to follow strict rules and procedures.

I wonder if there are even legal steps that could be taken by the Linux Foundation.


You can read the (relatively short) email chains for yourself, but to try and answer your question: as I understood it, the problem wasn't only the patches submitted for the paper; it was the follow-up bad patches and the ridiculous defense. Essentially they sent patches that were purportedly the result of static analysis but did nothing, broke social convention by failing to signal that the patches were the result of a tool, and it was deemed indistinguishable from more attempts to send bad code and perform tests on the Linux maintainers.


There is no separate real world distinct from academia. Saying that scientists and researchers whose job it is to understand and improve the world are somehow becoming "increasingly disconnected from the real world" is a pretty cheap shot. Especially without any proof or even a suggestion of how you would quantify that.


How is this different from blackhats contributing to general awareness of web security practices? Open source being considered secure just because it's up on GitHub is no different from plaintext HTTP GET params being considered secure just because "who the hell will read your params in the browser", which would still be the status quo if some hackers hadn't done the "lowest of the low" and shown the world this lesson.


LKML should consider not just banning @umn.edu at the SMTP level but sinkholing the whole of the University of MN network address space. Demand a public apology and payment for compute for the next 3 years, or get yeeted.


As a user of Linux, I want to see this ban go further. Nothing from the University of MN, its teaching staff, or its current or past post-grad students.

Once they clean out the garbage in the Comp Sci department and their research committee that approved this experiment, we can talk.


I agree with most commenters here that this crosses the line of ethical research, and I agree that the IRB dropped the ball on this.

However, zooming out a little, I think it's kind of useful to look at this as an example of the incentives at play for a regulatory bureaucracy. Comments bemoaning such bureaucracies are pretty common on HN (myself included!), with specific examples ranging from the huge timescale of public works construction in American cities to the FDA's slow approval of COVID vaccines. A common request is: can't these regulators be a little less conservative?

Well, this story is an example of why said regulators might avoid that -- one mistake here, and there are multiple people in this thread promising to email the UMN IRB and give them a piece of their mind. One mistake! And when one mistake gets punished with public opprobrium, it seems very rational to become conservative and reject anything close to borderline to avoid another mistake. And then we end up with the cautious bureaucracies that we like to complain about.

Now, in a nicer world, maybe those emails complaining to the IRB would be considered valid feedback for the people working there, but unfortunately it seems plausible that it's the kind of job where the only good feedback is no feedback.


In Ireland, during the referendum to repeal the ban on abortion, there were very heated arguments, bot Twitter accounts, and general toxicity. For the sake of people's sanity, there was a "Repeal Shield" implemented that blocked bad-faith actors.

This news makes me wish to implement my own block on the same contributors to any open source I'm involved with. At the end of the day, their ethics is their ethics. Those ethics are not Linux specific, it was just the high profile target in this instance. I would totally subscribe to or link to a group sourced file similar to a README.md or CONTRIBUTORS.md (CODERS_NON_GRATA.md?) that pulled such things.


I think that is a sensible way to deal with this problem. The linux community is based on trust (as are a lot of other very successful communities), and ideally we trust until we have reason not to. But at that point we do need to record who we don't trust. It is the same in academia and sports.


The tech community, especially in sub-niches is far smaller than people think it is. It's easy to feel like it's a sea of tech to some when it's all behind a screen. But reputation is a powerful thing in both directions.

There is also a more nuclear option which I'm specifically not advocating for quite yet here but I will note none the less;

We're starting to see this in the discourse regarding companies co-opting open source projects for their own profit (cough Amazon) and how license agreements limit them more than regular contributors. That has come about, at its core, because of a demonstrated trend of bad faith combined with a larger surface area of contact with society. I could foresee a potential future trend where individuals who act in bad faith are excluded from use of open source projects through their licenses. Imagine if the license for some core infrastructure tech like a networking library or the Linux kernel banned "Joe Blackhat" from professional use. Now he still could use it, but in reputable companies, particularly larger ones with a legal department, that person would be more of a liability than they are worth. There could be huge professional consequences of a kind that do not currently exist in the industry.


I'd really like to review now similar patches in FreeRTOS, FreeBSD and such. Their messages and fixes all follow a certain scheme, which should be easy to detect.

At least both of them are free from such @umn.edu commits with fantasy names.


@gregkh

These patches look like bombs under bridges to me.

Do you believe that some open source projects should have legal protection against such actors? The Linux Kernel is pretty much a piece of infrastructure that keeps the internet going.


Usually I am very skeptical of "soft" subjects like the humanities; but clearly this is unethical research.

In addition to wasting people's time, you are potentially messing with software that runs the world.


Considering how often you post about free speech and censorship, maybe you would find some interesting perspectives within the humanities.


They are rightfully worried about old commits? Maybe it's time they switched to a more secure language which can more easily detect malicious code. To be honest C seems critically insecure without a whole lot of work. If a bunch of experts even struggle, seems like they need better tools. Especially since Linux is so important, and there are a lot more threats, Rust seems like a good solution.

Apart from some perhaps critical unsafe stuff which should have a lot of attention, requiring everything to be safe/verified to some extent surely is the answer.


This was absolutely the right move. Smells really fishy given the history. I imagine this is happening in other parts of the community (attempting to add malicious code), albeit under a different context.


Is introducing bugs into computer systems on purpose like this in some way illegal in the USA? I understand that Linux is run by a ton of government agencies as well, would they take interest in this?



I don't see the difference between these and other 'hackers', white-hat, black-hat etc. The difference I see is the institution tested, Linux, is beloved here.

Usually people are admired here for finding vulnerabilities in all sorts of systems and processes. For example, when someone submits a false paper to a peer-reviewed journal, people around here root for them; I don't see complaints about wasting the time and violating the trust of the journal.

But should one of our beloved institutions be tested - now it's an outrage?


The outrage does seem out of place to me. I think it's fair (even reasonable) for the kernel maintainers to ban those responsible, but I'm not sure why everyone here is getting so offended about fairly abstract harms like "wasting the time of the maintainers".


I don't think what has been done here is comparable to other forms of "finding vulnerabilities". Linux and everyone else would be happy if people found vulnerabilities in their code and reported them back. And it is not like the Linux team is unaware of this "vulnerability".

This is more comparable to DDoSing a web server to test its capability of handling a DDoS. And they are aware of the issue. And they told you not to do it when you did it before. You just don't waste other people's time/money like that unless they give you permission.


CS department security research is near universally held to be out of the scope of IRBs. This isn't entirely bad: the IRB process that projects are subjected to is so broken that it would be a sin to bring that mess onto anything else.

But it means that regular 'security' research does ethically questionable stuff.

IRBs exist because of legal risk. If parties harmed by unethical computer science research do not litigate (or bring criminal complaints, as applicable) the university practices will not substantially change.


Security research has its own standards of ethics, and these researchers violated those standards.

1. You don't conduct a penetration test without permission to do so, or without rules of engagement laying out what kinds of actions and targets are permitted. The researchers did not seek permission or request RoE; they tried to ask forgiveness instead.

2. You disclose the vulnerabilities immediately to the software's developers, and wait a certain period before revealing the vulns to the public. While the researchers did immediately notify the kernel dev team in 3 cases, there's apparently another vulnerable commit that the researchers didn't mention in their paper and did not tell the kernel dev team about, which was still in the kernel as of the paper's publish date.

Apparently the IRB team that reviewed this project decided that no permission was needed because the experiment was on software, not people--even though the whole thing hinged on human code review practices. It's evident that the IRB doesn't know how infosec research should be conducted, how software is developed, or how code review works, but it's also evident that the researchers themselves either didn't know or didn't care about best practices in infosec.


The discussion points link to the github of the research

https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...

It has yet to be published (due next month)

How about opening a few bug reports to correctly report the final response of the community and the actual impact?

Not asking to harass them: if anyone should do it, it would be the kernel devs, and I'm not one of them


What an effing idiot! And then to turn around and claim bullying! At this point I'm not even surprised. Claiming victimhood is now a very effective move in US academia these days.


Actually I do understand BOTH sides, BUT:

The way the university did this tests and the reactions afterwards are just bad.

What I see here, and what the Uni of Minnesota seems to have neglected, is: 1. Financial damage (time is wasted) 2. The ethics of experimenting on human beings

As a result, the university should give a clear statement on both and should donate a generous amount of money as compensation for (1).

For part (2), a simple but honest apology can do wonders!

---

Having said that, I think there are other, ethically better ways to make these measurements.


Researcher sends bogus papers to a journal/conference, gets them reviewed and approved, uses that to point out how ridiculous the journal's review process is => GREAT JOB, PEER REVIEW SUCKS!

Researcher sends bogus patches to a bazaar-style project, gets them reviewed and approved, uses that to point out how ridiculous the project's review process is => DON'T DO THAT! BAD RESEARCHER, BAD!


One potentially misleads readers of the journal, the other introduces security vulnerabilities into the world’s most popular operating system kernel.


"Misleading readers of a journal" might actually cause more damages to all of humanity (see https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt) than inserting a security vulnerability (that is likely not even exploitable) in a driver that no one actually enables (which is likely why no one cares about reviewing patches to it, either).

Thought to be fair, it is also the case that only the most irrelevant journals are likely to accept the most bogus papers. But in both cases I see no reason not to point it out.

The two situations are much more closer than what you think. The only difference I see is in the level of bogusness.


OK? If somebody else does something ethically dubious, does that make all ethically dubious behaviours acceptable somehow? How does a totally separate instance of ethical misconduct impact this situation?


I'm not surprised.

I'm repeating myself, but I'm pretty certain the NSA or other intel agencies (Israel, especially, considering their netsec expertise) have already done it in one way or another.

Do you remember the semicolon that caused a big wifi vuln? Hard to really know if it was just a mistake.

I'm going full paranoiac here, but anyway.

You can also imagine the NSA submitting patches to the windows source code, without the knowledge of microsoft, and so many other similar scenarios (android, apple, etc)


I think Greg KH would have been wise to add a time limit on this ban. Make it a 10-year block, for example, rather than one with no specific end-date.

Imagine what happens 25 years from now as some ground-breaking security research is being done at Minnesota, and they all groan: "Right, shoot, back in 2021 some dumb prof got us banned forever from submitting patches".

Is there a mechanism for the University of Minnesota to appeal, someday? Even murderers have parole hearings, eventually.


Presumably they could just talk to the maintainers at that time and have a reasonable discussion.


It isn't hard to get a gmail type address and submit from there.


"It's just a prank, bro!"

Incredible that the university researchers decided this was a good idea. Has no one in the university voiced concern that perhaps this is a bad idea?


UMN is still sore that http took off and gopher didn't.


plonk

Aaaaand into the kill file they go.

Been a while since I last saw a proper plonk.


Can you link to any others? Personal curiosity.


USENET is filled with them.

People would reach a point where further conversation makes no sense.

So, one would make a kill file entry, and "plonk" basically communicated that: smacking the carriage return / enter key with gratifying authority at the user who had earned their place in the kill file, not to be heard from again.

The conversation is over, sort of like a block works today.

Edit: See in the definition I linked where plonk is the sound of some poor soul hitting the bottom of a kill file? I think that is debatable, depending on perspective. The peeps who mentored me onto the net at the beginning explained it as that gratifying press of the CR/LF [ENTER] key.

The sentiment is the same though.

---

plonk /excl.,vt./

[Usenet: possibly influenced by British slang `plonk' for cheap booze, or `plonker' for someone behaving stupidly (latter is lit. equivalent to Yiddish `schmuck')] The sound a newbie makes as he falls to the bottom of a kill file. While it originated in the newsgroup talk.bizarre, this term (usually written "plonk") is now (1994) widespread on Usenet as a form of public ridicule.

----

This particular plonk is proper, not just as an insult, which is the general use case, because the person who earned the "plonking" did so in spectacularly stupid fashion, in the opinion of the "plonker."

Total classic!

On some older TTYs, the two asterisks denoted bold text too; here HN uses them for italics.

Plain text would show the asterisks as the linked exchange showed to us.


Here’s a (perhaps naively) optimistic take: by publishing this research and showing it to lawmakers and industry leaders, it will sound alarms on a serious vulnerability in what is critical infrastructure for much of the tech industry and public sector. This could then lead to investment in mitigations for the vulnerability, e.g. directly funding work to proactively improve security issues in the kernel.


It seems like this debacle has created a lot of extra work for the kernel maintainers. Perhaps they should ask the university to compensate them.


I think the root of the problem can be traced back to the researcher's erroneous claim that "This was not human research".


Making non-volunteers do work for your experiment, and attempting to sabotage the product of their work, surely isn't ethical research.


So how does this differ from the Sokal hoax thing?


Sokal didn't try to pass off harmful ideas, just nonsense.


The patch in the posted mail thread is mostly harmless nonsense too. It's a no-op change that doesn't introduce a bug; at worst it makes the code slightly less readable.


Then the title of this post is false, but given their previous paper they probably wanted to inject more serious bugs. If only no-op code can pass, then if anything it's good for Linux.

(This is not really a no-op; it would add some slight delay.)


And yesterday there was another bit of Linux news by Greg KH trending on Reddit. Nice to see him stepping into the spotlight more :)


If you really wanted to research how to get malicious code into the highest-profile projects like Linux, the social engineering bit would be the most interesting part.

Whether some unknown contributor can submit a bad patch isn't so interesting for this type of project. Knowing the payouts for exploits, the question is: how much money would one bad reviewer want to let one past?


I have to question the true motivations behind this. Just a "mere" research paper? Or is there an ulterior motive, such as undermining Linux kernel development, taking advantage of the perceived hostility of the LKML to make a big show of it and castigate and denounce those elitist Linux kernel devs?

So I hear tinfoil is on sale, mayhaps I should stock up.


Am I missing how these patches were caught/flagged? Was it an automated process or physically looking at the pull requests?


How is this any different to littering in order to research if it gets cleaned up properly? Or like dumping hard objects onto a highway to research if they cause harm before authorities notice it?

I mean, the Kernel is now starting to run in cars and even on Mars, and getting those bugs into stable is definitely no achievement one should be proud of.


Reminds me of the Tuskegee Syphilis Study.

Sure, we infected you with syphilis without asking for permission first, but we did it for science!


Is there a readable version of the message Greg was replying to https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah... ? Or was there more to it than what Greg quoted?


So, next paper would be like "On the Effectiveness of Using Email Domain Names for Kernel Submission Bans"


They just wasted the community's time. No wonder Linus Torvalds goes batshit crazy on these kinds of people!


This type of research just looks like: let's prove that people will die if killed, by actually killing someone.


After they successfully got buggy patches in, did they submit patches to fix the bugs? And were they careful to make sure their buggy patches didn't make it into stable releases? If not, then they risked causing real damage, and is at least toeing the line of being genuinely malicious.


The tone of Aditya Pakki's message makes me think they would be very well served by reading 'How to Win Friends & Influence People' by Dale Carnegie.

This is obviously the complete opposite of how you should be communicating with someone in most situations, let alone when you want something from them.

I have sure been there though so if anything, take this as a book recommendation for 'How to Win Friends & Influence People'.


His email reminds me of the way politicians behave in my country (India): play the victim and start dunking.


I’ve seen this book mentioned a couple of times on HN now. I’m curious: did you learn about this book from the fourth season of the Fargo? This is where I encountered it first.


Not the person you're asking, but the book is over 80 years old and one of the best selling books of all time. Not exactly the same, but it's like asking where they heard about the Bible. It's everywhere.


I've seen the Bible mentioned a couple times now. I'm curious, did you learn about it from watching the VidAngel original series The Chosen now streaming free from their app?


It's a common recommendation for many decades now, you aren't going to find any one particular vector.


I think it's just a common book to recommend people who seem to be lacking in the "social communication" department. I would know, I got it gifted to me when I was young, angsty and smug.


As others have stated it is everywhere. The title always scared me away from it a little, but then I saw it come by in the intro of Netflix’s “The Politician” and I thought I’d give it a chance. Especially after I found out how old it is.


The book is very famous - it launched the "self help" genra. I've never read it, but I've heard it is fairly shallow guide on manipulating people to get what you want out of them.


"genre"

> I've never read it, but

If you've never read it, maybe just leave it at that.

> manipulating people

You mean "influencing people", like it says right in the title?

It's a book that has helped millions, which is why it continues to be widely recommended.

It's not for everyone. The advice seems obvious to some, which of course is why it can be so valuable for others.


You are totally right that the title makes it seem that way; I thought so too at first. Now I am happy that I proved myself wrong by reading it. It is more like what I'd have liked my parents to have taught me about social situations and empathy. Most of the points in the book are about sincere empathy for others.

Just have a go at it; it costs less than a euro as an e-book, and it reads so easily that you'll be done in no time.


It's more like: "ask people about themselves, they like talking about themselves", than secret jedi mind tricks. Not really nefarious.


Are they legally liable in any way for including deliberate flaws in a piece of software they know is widely used, thereby creating an attack surface for _any_ attacker with the skill to do so, and putting private and public infrastructure at risk?


https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

Seems they have posted some clarifications around this. Worth a read.


It's okay to run experiments on humans without their explicit informed consent now?


Can someone explain what the kernel bugs were that were introduced, in general terms?


Does it matter? They intentionally used their position as a university to push patches with malicious intent through.


It matters for the sake of understanding what was going on, and why the issues weren't caught in review.


None.


Very unethical and extremely inconsiderate of the maintainers time to say the least.


Aditya Pakki should be banned from any open source projects. Open source depends on contributors who collectively try to do the right thing. People who purposely try to veer projects off course should face real consequences.


When you test in production...


What a waste of talent... these kids know how to program, but instead of working on useful projects they’re wasting everyone’s time. It’s really troubling that any professor would have proposed or OK’d this.


The UMN had worked on a research paper dubbed "On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits".

I guess it's not as feasible as they thought.


Let's add to the question "what is the quality of the code review process in Linux?" another one: "what is the quality of the ethical review process at universities?".

I think there should be a real-world experiment to test it.


Like all research institutions, University of Minnesota has an ethics committee.

https://integrity.umn.edu/ethics

Feel free to write to them


What is this? A "science" way of saying it's a prank bro?


The most recent possible double free was from a bad static analyzer, wasn't it? That could have been a good-faith commit, which is unfortunate given the deliberate bad-faith commits prior.


After reading many of the comments I agree with the decision to ban the University. Why? You are free to choose your actions. You are not free to choose the consequences of your actions.


I've been thinking, what would happen if someone intentionally hacked a university and erased all data from all their computer systems, and then lied to their faces about it?

New white paper due soon


This raises the question: "have there been state-sponsored efforts to overwhelm open source maintainers with the intent of sneaking vulnerabilities into software applications?"


"We'd like to insert malicious code into the software that runs countless millions of computers and see if they figure it out"

I don't think this was the pitch they gave to their IRB.


The replies here have been fascinating to read. Yes it's bad that subterfuge was engaged in vs kernel devs. But don't the many comments here expressing outrage at the actions of these researchers sound exactly like the kind of outrage commonly expressed by those in power when their misdeeds are exposed? e.g. Republican politicians outraged at a "leaker" who has leaked details of their illegal activity. It honestly looks to me like the tables have been turned here. Surely the fact that the commonly touted security advantages of OSS have been shown to be potentially fictitious, is at least as worrying as the researchers' ethics breaches?


One very good security practice is that if you find that you have a malicious contributor, you fire that contributor. The "misdeeds" were committed by the UMN researchers, not by the Linux maintainers.


Vulnerabilities in OSS are fixed over time. They are fixed by people running the code and contributing back, by fuzzing efforts, by testing a release candidate.

The difference between OSS and closed source is not the number of reviewers for the initial commit, it's the number of reviewers over years of usage.


I am baffled by the immaturity and carelessness of experimenting on a kernel that millions of critical machines use, and I applaud the maintainers for dealing swiftly with this.


Looks like vandalism masquerading as “research”.

Greg’s response is totally right.


I thought there were ethical standards for research, where a good study should not knowingly do harm, or at the very least should make those involved aware of their participation.


An appropriate place to make a report: https://compliance.umn.edu/


While it is easy to consider this unsportsmanlike, one might view it as a supply chain attack. I don't particularly support this approach, but consider for a moment that as a defender (in the security team sense), you need to be aware of all possible modes of attack and compromise. While the motives of this class are clear, ascribing any particular motive to attackers is likely to miss.

To the supply chain type of attacks, there isn't an easy answer. Classical methods left both the SolarWinds and Codecov attacks in place for way too many days.


Could someone clarify: this made it to the stable branch, so does that mean that it made it out into the wild? Is there action required here?


A lot of people seem to consider this meaningless and a waste of time. If we disregard the problems with the patches reaching stable branches for a second (which clearly is problematic), what is the difference between this and companies conducting red team exercises? It seems to me a potentially real and dangerous attack vector has been put under the spotlight here. Increasing awareness around this can't be all bad, particularly in a time where state-sponsored cyber attacks are getting ever more severe.


Now I'm not one for cancel culture, but fuck these guys. Put their fuckin' names out there to get blackballed. Bunch of clowns.


So they A/B tested the kernel maintainers and got banned. What about the kernel security? Is the patch process getting improved?


Is getting reactions from HN also part of their experiment and should we expect our comments to be written about in their paper?


logged into my ancient hn account just to tell all of you that pentesting without permission from higher-ups is a bad idea

yes, this is pentesting


If the researchers desired outcome is more vigilance during patches and contributions I guess they might achieve that outcome?


Could have this happened also on other open source projects like FreeBSD, OpenBSD, etc or other popular open source software?


This is a really important question, and the way to answer it is for someone to try it.


Methinks that if you hold a degree from the University of Minnesota, it would be a good idea to let your university know what you think of this.


> it would be a good idea to let your university know what you think of this.

Unless there's something particularly different about University of Minnesota compared to other universities, something tells me that they won't give a crap unless you're a donor.


Not a great selling point for the CS department.

"Yes, we are banned from submitting patches to Linux due to past academic research and activities of our PhD students. However, we have a world-class program here."


I would argue it should be "due to past academic research and activities of our PhD students and professors"


Stanford continues to employ the famous Philip Zimbardo, and is Stanford not one of the top universities for psychology in the US?

Getting banned from Linux contribution is an ouchy for the university, but the damage has been done.


I'm trying to figure out how to do that. How can I get my degree changed? Will the university of (anyplace) look at my transcript and let me say I have a degree from them without much effort? I learned a lot, and I generally think my degree is about as good as any other university. (though who knows what has changed since then)

I'm glad I never contributed again as an alumni...


If it were my univ, I'd send a personal email to the dean. https://cse.umn.edu/college/office-dean#:~:text=Dean%20Mosta....

If enough grads do that, I would expect the university to do something about it, and that would send a message. It's about where the money comes from in the end (tuition, grants, research partnerships, etc.); IMO none of these sources would be very happy about what might amount to defacement of public property and a waste of the time of people who are working for the good of mankind by providing free tools (the bicycle of the mind) to future generations.

There is no novelty in this research; bad actors have been trying to introduce bad patches for as long as open source has been open.


That's my univ, and I just did exactly that. Mos Kaveh happened to be head of the EE department when I was there for EE. He's a good guy and had a good way of managing stressed-out upper-division honors EE students, so I'm hopeful that he will take action on this.


I'll give you one guess. Nation states do.


Well, we get to look at the real results of this in real time, as they get their whole organization banned from the kernel.


Does the University of Minnesota have an ethical review board or research ethics board? They need to be contacted ASAP.


Apparently, they were and did not care.


It's possible that they didn't know what they were doing (when approving this project) -- not sure if this is better or worse.


They seem to be teaching social engineering. Using a young, possibly foreign student as a front is a classy touch.


The author of the patches, Aditya Pakki, is a second-year PhD student, as per his website: https://adityapakki.github.io/about/

He himself is to blame for submitting these kind of patches and claiming innocence. If a person as old as him can't figure out what's ethical and what's not, then that person deserves what comes out of actions like these.


Is there some tool that provides a nicer view of these types of threads? I find them hard to navigate and read.


To me it was akin to spotting volunteers cleaning up streets and, right after they passed, dumping more trash on the same street to see if they come and clean it up again. Low blow if you ask me.


Experiment: let's blow up the world to find out who might stop us so we can write a paper about it.


Their research could have been an advisory email or a blog post for the maintainers, without the nasty experiments. If they really cared for OSS they would have collaborated with the maintainers and persuaded them to use their software tools for patch work. There is research for the good of all, and there is research for selfish gains. I am convinced this is the latter.


It's funny. When someone like RMS or ESR or (formerly) Torvalds is "disrespectful" to open source maintainers, this is called "tough love", but when someone else does it, it's screamed about like it's some kind of high crime, with calls to permanently cancel access for all people even loosely related to the original offender.


I don't see how this is related. Being rude in tone, and wasting someone's time, are different things. You make it sound like they are the same.

But the opposite of what you propose is true. The maintainers are annoyed by others wasting their time in other cases as well as in this case - it's coherent behavior. And in my opinion, it's sensible to be annoyed when someone wasted your time - be it by lazily made patches or by intentionally broken patches.


I'm not the one who is making them sound like the same thing. There are literally people in this thread, saying that "wasting time" is being "disrespectful" to the maintainers.



Anyone else find the claim that "This was not human research" as erroneous as I do?


Fair. You are either part of the solution, part of the problem or just part of the landscape.


Couldn't help themselves. Once they thought of it, they just had to Gopher it.


Make an ethics complaint with the state and get their certification and charter pulled.


That's a worse death sentence than SMU's for paying players. Even the NCAA didn't kill the school, just the guilty sports program. You're asking the state to pull the entire university's charter for a rogue department? Sure, pull the CS department, but I'm sure the other schools at the university had absolutely zero culpability.


As a graduate of the UMN, other departments have had their share of issues as well. When I was there they were trying to figure out how to deal with a professor selling medical drugs without FDA permission (the permission did exist in the past, and the drug probably was helpful, but FDA approval was not obtained).

I suspect that all of the issues I'm aware of are within normal bounds for any university of that size. That is, if you kill the UMN you also need to kill Berkeley, MIT, and Harvard for their issues of similar magnitude that we just by chance haven't heard about. This is a guess though; I don't know how bad things are.


That was the same thing said of the SMU punishment. They were not the only school doing it. They were just the ones that got caught.


Which is probably true. Doesn't excuse it.

Note, that I'm not trying to excuse the UMN either.


The first thing that comes to mind is The Underhanded C Contest [0], where contestants try to introduce code that looks harmless but is actually malicious, and even if caught should look like an innocent bug at worst.

[0] http://www.underhanded-c.org


I want to know how TF the PC at the IEEE conference decided this was acceptable?


Can anyone enlighten me as to why these were not caught in the review process itself?


I wonder if they can be sued (by the Linux Foundation, maybe) for that...


Minnesota being Minnesota.


Straight up grift. If it looks like a duck, quacks like a duck...


https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

posted some clarifications around this, worth a read


Reminded me of story more than a decade ago about an academic who conducted a series of "breaching experiments" in City of Heroes/City of Villains to study group behavior, basically breaking the social rules (but not the game rules) without other participants' or the game studio's knowledge. It was discussed on HN in 2009 (https://news.ycombinator.com/item?id=690551)

Here's how the professor (a sociologist) described his methodology:

These three sets of behaviors – rigidly competitive pvp tactics (e. g., droning), steadfastly uncooperative social play outside the game context (e. g., refusing to cooperate with zone farmers), and steadfastly uncooperative social play within the game context (e. g., playing solo and refusing team invitations) – marked Twixt’s play from the play of all others within RV.

Translation: He killed other players in situations that were allowed by the game's creators but frowned upon by the majority of real-life participants. For instance, "villains" and "heroes" aren't supposed to fraternize, but they do anyway. When "Twixt" happened upon these and other situations -- such as players building points by taking on easy missions against computer-generated enemies -- he would ruin them, often by "teleporting" players into unwinnable killzones. The other players would either die or have their social relations disrupted. Further, "Twixt" would rub it in by posting messages like:

Yay, heroes. Go good team. Vills lose again.

The reaction to the experiment and to the paper was what you would expect. The author later said it wasn't an experiment in the academic sense, claiming:

... this study is not really an experiment. I label it as a “breaching experiment” in reference to analogous methods of Garfinkel, but, in fact, neither his nor my methods are experimental in any truly scientific sense. This should be obvious in that experimental methods require some sort of control group and there was none in this case. Likewise, experimental methods are characterized by the manipulation of a treatment variable and, likewise, there was none in this case.

Links:

http://www.nola.com/news/index.ssf/2009/07/loyola_university...

https://www.ilamont.com/2009/07/academic-gets-rise-from-brea...


Dang, I am not sure how to feel about this kind of “research”


Could this have just been someone trying to cover up being a mediocre programmer in academia by framing it through a lens that would work in the academy, with some nonsense, vaguely liberal-arts-sounding social-experiment premise?


Wow, shocking and completely unethical by that professor.


It is not done for research purposes. The NSA is behind them.


Did Linus comment on any of this yet? :popcorn:


Is banning an entire university's domain from submitting to a project due to the actions of a few of its members an example of cancel culture?


If the university itself is actively promoting unethical behavior, then no, it isn't "cancel culture". That term is reserved for people or groups who hold unpopular opinions, and this is not that.


They should be reported to the authorities for attempting to introduce security vulnerabilities into software intentionally. This is not ok.


What authorities would that be? The Department of Justice? The same DoJ that is constantly pushing for backdoors to encryption? Good luck with that! The "researchers" just might receive junior agent badges instead.


Maybe it was those very authorities who wanted them there. Lots of things have gotten patched and the backdoors don't work as well as they used to... gotta get clever.


I'm a PhD student myself. What he did is not okay! We study computer science to do good not to harm.


What these researchers did was clearly and obviously wrong, but is it actually illegal?


It should be reported anyways. This might be only some small part of the malfeasance they're getting up to.


The fact that both of the researchers seem to be of Chinese origin should definitely raise some questions. Not the first time things like this have been tried.


Please don't post nationalistic flamebait to HN and certainly not implicit slurs.

https://news.ycombinator.com/newsguidelines.html


this is classic national origin discrimination. racists are coming out.


I'd have the same suspicion if they were Russian. Nothing to do with race, everything to do with national affiliation.


What you are proposing happened in WW2, it is called Japanese American internment.


[flagged]


Tell that to WW2 veterans, and you forgot to add the Middle East to your hate list.


Does the name "Aditya Pakki" really seem remotely "Chinese" or "Russian"? You might be a racist if you can't identify an obviously South Asian name as such. Although, honestly, even racists should be able to figure out the surname.


I was talking about the names on the original paper.


Uff da! I really do hope the administrators at University of Minnesota truly understand the gravity of this F* up. I doubt they will though.


Or some enemy state pawn(s) trying to add backdoors and then use the excuse of "university research paper" should they get caught?


This is the kind of study (unusual for CS) that requires IRB approval. I wonder if they thought to seek approval, and if they received it?


Trust is currency. Trust is an asset.


those that can't do teach, and those that can't teach troll open source devs?


these people have no ethics


If it was up to me, I would

1) send ethics complaint to the University of Minnesota, and

2) report this to FBI cyber crime division.


Huh, I never knew of plonk. I bet I've been plonked before.


how can i see these prs?


A search on the linked mailing list seems to include a lot of this junk:

https://lore.kernel.org/linux-nfs/?q=@umn.edu&x=t


What a bizarre saga.


so basically they demonstrated that the oss security model, as it operates today, is not working as it had been previously hoped.

it's good work and i'm glad they've done it, but that's depressing.

now what?


Weirdly enough


cannot wait for Rust in the kernel..


Will he get job/work somewhere again?


Disgusting.


The full title is "Linux bans University of Minnesota for sending buggy patches in the name of research" and it seems to justify the ban. It's not as though these students were just bad programmers; they were intentionally introducing bugs and performing unethical experimentation on volunteers and members of another organization without their consent.

Unfortunately even if the latest submissions were sent with good intentions and have nothing to do with the bug research, the University has certainly lost the trust of the kernel maintainers.


The full title should actually be "Linux bans University of Minnesota for sending buggy patches in the name of research and thinking they can add insult to injury by playing the victims"

> I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.

> These patches were sent as part of a new static analyzer that I wrote and it's sensitivity is obviously not great. I sent patches on the hopes to get feedback. We are not experts in the linux kernel and repeatedly making these statements is disgusting to hear.

> Obviously, it is a wrong step but your preconceived biases are so strong that you make allegations without merit nor give us any benefit of doubt. I will not be sending any more patches due to the attitude that is not only unwelcome but also intimidating to newbies and non experts.

This idiot should be banned from the University, not from the linux kernel.


His department presumably allowed this to proceed


From the looks of the dialogue, it was all of the above with the addition of lying about what they were up to when confronted. I would think all of this constitutes a serious violation of any real university's research ethics standards.


I just want you to know that this is extremely unethical to create a paper where you attempt to discredit others by just using your university's reputation to try to create vulnerabilities on purpose.

I back your decision and fuck these people. I will additionally be sending a strongly worded email to this person, their advisor, and whoever's in charge of this joke of a computer science school. Sometimes I wish we had the ABA equivalent for computer science.


Please don't fulminate on HN (see https://news.ycombinator.com/newsguidelines.html). It's not what this site is for, because it leads to considerably worse conversation.

Even if you're justifiably steaming about something, please wait to cool down before posting here.

We detached this subthread from https://news.ycombinator.com/item?id=26889743.


Are you serious? If I publish a paper on the social engineering vulnerabilities we have used over the last three months to gain access to your password and attempt to take over Hacker News, you would be fine with it? No outburst, no angrily banning my account...


It seems odd that you are responding with a threat, or at least a threatening hypothetical to a (the?) moderator.

The way I understand it is that unnecessarily angry or confrontational posts tend to lower the overall tone. They are cathartic/fun to write, fast to write, and tend to get wide overall agreement/votes. So if they are allowed then most of the discussion on a topic gets pushed down beneath that sort of post.

Hence why we are asked to refrain, to permit more room for focused and substantive discussion.


No, I'm asking if he thinks as a person who built Hacker News if this is what we want out of the technology ecosystem from the cybersecurity professionals.

Edit: dang is a good person and I don't understand how he's taking sides here with people sending out malware (because that's what this sums up to). I understand I came on a little hot, but that was unexpected.


I'm not taking a side, just making the shallow point that "fuck these people", etc., is not good HN posting.

It's common when we post a moderation reply for people to assume that the mods are disagreeing with them [1], when we're just asking them to follow the site guidelines. Those two things are orthogonal, but they fuse under temperature: that is, when one is feeling hot about something, it's hard to separate them. They come unstuck as things cool down.

(I don't mean to pick on you personally. This kind of reaction happens in everyone, certainly including me.)

[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...


That's a good point.

I did a bit of walking away, had some water and feel a little better. It's sometimes difficult to separate things from being personal. Sorry, this whole thread has my BP going a little hot because of some things like this I've had to deal with.

It's not you, I probably went a little far there with "fuck these people".


Nice recovery. Appreciated!


I'm glad we agree on the merits of dang.

After reviewing the thread I don't see any of what you are asking here, upstream. I don't see dang coming out on the same side as people sending out malware, and I don't really see that question present. I wish I had something more concrete to say, but I think your take here (and only here) is wrong and that you might have just entered this one on the wrong foot?


Just write to irb@umn.edu and ask a) if this was reviewed and b) who approved it. It seems they have violated the Human Research Protection Program Plan anyway.

The researchers should not have done this, but ultimately it's the faculty that must be held accountable for allowing this to happen in the first place. They are a for-profit institution and should not get away with harassing people who are contributing their personal time. So nail them to the proverbial cross, but make sure the message is heard by those who slipped up (not the researchers, who should have been stopped before it happened).


Don't harass people.


I'm not harassing anyone... I'm politely reminding this person that ethics are a real thing.


Unless you are an involved party, you're just adding to the mob. The kernel maintainers have said their piece; there's no need for everyone to pile on.


Maybe there is? I'm not convinced of my own position, but I'd suggest there is a difference between an outcry sourced in people who are well informed of their complaint and the sort of brigading/bandwagoning you can see come from personal details being posted under a complaint.


I guess the question is: what more is there to accomplish? They've been banned, and I guarantee they already know the community is pissed. Is filling up their inbox with angry emails going to actually help in any way?


I'm trying to reconcile what you are saying with all the "write your senator" I've heard all my life, you know? I think the answer is "yes it will help, inasmuch as strong arguments may be persuasive". Honestly I'm not sure there is a good answer, considering that part of what we're weighing against is the potential for the internet to help create an outsized or aggravated response.


Your senator makes decisions based on public will and letter writing is a proxy for that. They have a team to deal with the inflow. Your average PhD student isn't equipped in the same way to deal with an outraged public.


Agreed - but contacting the university then, is reasonable?


That’s more reasonable in my opinion.


I am a user of the Linux kernel which a publicly funded university using my tax dollars just attempted to subvert in order to jerk themselves off over a research paper.


I completely disagree with this framing.

A real malicious actor is going to be planted in some reputable institution, creating errors that look like honest mistakes.

How do you test whether the process catches such vulnerabilities? You do it just the way these researchers did.

Yes, it creates extra homework for some people with certain responsibilities, but that doesn't mean it's unethical. Don't shoot the messenger.


> A real malicious actor

They introduced a real vulnerability into a codebase used by billions, lowering worldwide cybersecurity, so they could jerk themselves off over a research paper.

They are a real malicious actor and I hope they get hit by the CFAA.


There is a specific subsection of the CFAA that applies to this situation (deployment of unauthorized code that makes its way into non consenting systems).

This was a bold and unwise exercise, especially for any participant who is an academic in the country on a revocable visa.


Others would call submitting the patches stupid, and it would be fine if there were further consequences to deter others.


Was attempting politeness. You’re not wrong.


No. There are processes to do such sorts of penetration testing. Randomly sending buggy commits or commits with security vulns to "test the process" is extremely unethical. The linux kernel team are not lab rats.


It's not simply unethical, it's a national security risk. Is there proof that the Chinese government was not sponsoring this "research", for example?


Linux kernel vulnerabilities affect the entire world. The world does not revolve around the U.S., and I find it extremely unlikely a university professor in the U.S. doing research for a paper did this on behalf of the Chinese government.

It's far more likely that professor is so out of touch that they honestly think their behavior is acceptable.


[flagged]


Please don't do this here.


How about we ask that question when there's actually some semblance of evidence to support that theory. When you throw what I call "dual loyalty" out as an immediate possibility, just because the person is from China, it starts to sound real nasty from the observer's point of view.


Although there's nothing to suggest that this professor is in any way supported by the Chinese state, I don't think it's completely unreasonable to wonder.

The UK government has already said that China is targeting the UK via academics and students. China is a very aggressive threat with a ton of resources. It's certainly a real scenario to consider.

Just as this "research" has burnt the trust between the kernel maintainers and the UMN, if China intentionally installs spies into western academia, at some point you have to call into question the background of any Chinese student. It's not fair, but currently China is relying on the fact that we care about fairness and due process.


I acknowledged it's a possibility and there is precedent for it, at least in industry, in the US.

That said, prove what they did was wrong, prove whether controls like the IRB were used properly and informed correctly, prove or disprove the veracity of their public statements (like the ones they made to the IEEE), then start looking at possible motivations other than the ones stated. I get that's difficult because these folks have already proven to be integrity violators, but I think it's worthwhile to try to stick to that.

If you jump straight to dual loyalty, it is unfortunately also a position that will be easily co-opted by other bad-faith actors, and it needlessly muddies the conversation because not all good-faith and reasonable possibilities have been explored yet. I'm promoting the idea of a well-defined process here so that nobody can claim it's just bigoted people making these accusations.


It's a very real threat and possibility thus an absolutely appropriate question to be asking. There are numerous documented instances of espionage performed by Chinese nationals while operating within the US educational system.

https://www.nbcnews.com/news/china/american-universities-are...


So asking 'why?' in this situation is in some way unethical because the person in question is from China? Or is it that we have to limit the answers to our question because the person is from China? Please advise, and further clarify what thoughts are not permitted based on the nationality of the person in question.


If that's the case, why would they publish a paper and announce their "research" to the world?


> There are processes to do such sorts of penetration testing.

What's the process then? I doubt there is such a process for the Linux kernel, otherwise the response would've been "you did not follow the process" instead of "we don't like what you did there".


Well, if there's no process, then it's not ethical (and sometimes, not legal) to purposefully introduce bad commits or do things like that. You need consent.

Firstly, it accomplishes nothing. We already all know that PRs and code submissions are a potential vector for buggy code or security vulnerabilities. This is like saying water is wet.

Secondly, it wastes the time of the people working on the Linux kernel and ruins trust in code coming from the University of Minnesota.

All of this happened due to caring about one's own research more than the ethics of doing this sort of thing. And continuing to engage in this behavior after receiving a warning.


First of all, whether something is ethical is an opinion, and in my opinion, it is not unethical.

Even if I considered it unethical, I would still want this test to be performed, because I value kernel security above petty ideological concerns.

If this is illegal, then I don't think it should be illegal. There are always debates about the legality of hacking, but there's no doubt that many illegal (and arguably unethical) acts of hacking have improved computer security. If you remember the dire state of computer security in the early 2000s, remember that the solution was not to throw all the hacker kids in jail.


> I would still want this test to be performed, because I value kernel security above petty ideological concerns.

The biggest issue around this is consent. You can totally send an email saying "we're doing research on the security implications of the pull request process, can we send you a set of pull requests and you can give us approve/deny on each one?"

> If you remember the dire state of computer security in the early 2000s, remember that the solution was not to throw all the hacker kids in jail.

You weren't there when Mirai caused havoc due to thousands of insecure IoT devices getting pwned and turned into a botnet... and introducing more vulnerabilities is never the answer.


The kernel team literally already does this by the very nature of reviewing code submission. What do you think they do if not examining the incoming code to determine what, exactly, it does?

"because I value kernel security above petty ideological concerns"

This implies that this is the only or main way security is achieved. This is not true. Also, "valuing kernel security above other things"... is an ideological concern. You just happen to value this ideology more than other ideological concerns.

"whether something is ethical is an opinion"

It is, but there are bases for forming opinions on what is moral and ethical. In my opinion, secretly testing people is not ethical. Again, the difference here is consent. Plenty of organizations agree to probing/intrusion attempts; there is no reason to secretly do this. Again, security is not improved only by secret intrusion attempts.

"there's no doubt that many illegal (and arguably unethical) acts of hacking have improved computer security"

I don't believe in the ends justify the means argument. Either it's ethical or it isn't; whether or not security improved in the meantime is irrelevant. Security also improves in its own regard over time.

I do agree that the way the current laws regarding "hacking" are badly worded and very punitive, but crimes are crimes. Just because you like the hacking, or think it may be beneficial, does not change the fact that it was unauthorized access or an intentional attempt to submit bad, buggy code, etc.

We have to look at it exactly like we look at unauthorized access to, e.g., business properties or people's homes. That doesn't change just because it's digital. You don't randomly walk up to your local business with a lock picking kit to "test their security". You don't randomly steal someone's wallet to "test their security". Why is the digital space any different?


> The kernel team literally already does this by the very nature of reviewing code submission. What do you think they do if not examining the incoming code to determine what, exactly, it does?

Maybe that's what they claim to do, but how do you know for sure? How do you test for it?

> This implies that this is the only or main way security is achieved.

It doesn't, there are many facets of security, social engineering being one of them. Maybe it's controversial to test something that requires misleading people, but realistically the only alternative is to ignore the problem. I prefer not to do that.

> Plenty of organizations agree to probing/intrusion attempts; there is no reason to secretly do this.

Yes there is: Suppose you use some company's service and they refuse to cooperate in regards to pentesting: The "goody two-shoes" type of person just gives up. The "hacker type" puts on their grey hat and plays some golf. Is that unethical? What if they expose some massive flaw that affects millions of unwitting people?

> I don't believe in the ends justify the means argument.

Not all ends justify all means, but some ends do justify some means. In fact, if it's a justification to some means, it's almost certainly an end.

> I do agree that the way the current laws regarding "hacking" are badly worded and very punitive, but crimes are crimes.

Tautologically speaking, crimes are indeed crimes, but what are you trying to say here? Just because it's a crime doesn't mean it is unethical. Sometimes, not performing a crime is unethical.

> You don't randomly walk up to your local business with a lock picking kit to "test their security".

Yes, but only because that's illegal, not because it is unethical.

> You don't randomly steal someone's wallet to "test their security".

Again, there's nothing morally wrong with "stealing" someone's wallet and then giving it back to them. Better I do it than some pickpocket. I have been tempted on numerous occasions to do exactly that, but it's rather hard explaining yourself in such a situation...

> Why is the digital space any different?

Because the risk of running into a physical altercation is quite low, as is the risk of getting arrested.


"Maybe that's what they claim to do,"

Our society is built on trust. Do you test the water from the city every time you drink it? Etc. Days like today show that, yes, the kernel team is doing their job.

How about -you- prove that they -aren't- doing their job?

"Suppose you use some company's service and they refuse to cooperate in regards to pentesting ... Is that unethical?"

Yes. You are doing it without their consent. It is unethical. Just because you think you are morally justified in doing something without someone's consent does not mean that it is not unethical. Just because you think the overall end result will be good does not mean that the current action is ethical.

"Yes, but only because that's illegal, not because it is unethical."

This is very pedantic. It's both illegal and unethical. How would you like it if you had a business and random people came by and picked locks, etc, in the "name of security"? That makes zero sense. It's not your prerogative to make other people more secure. If they are insecure and don't want to test it, then it's their own fault when a malicious actor comes in.

"Again, there's nothing morally wrong with "stealing" someone's wallet and then giving it back to them"

Yes, it is morally wrong. In that scenario, you -are- the pickpocket. This is a serious boundary that is being crossed. You are not their parent. You are not their caretaker or guardian. You are not considering their consent -at all-. You have no right to "teach people lessons" just because you feel like you are okay with doing that. If you did that to me I would not hang out with you ever again, and let people know that you might randomly take their stuff or cross boundaries for "ideological reasons".

"Because the risk of running into a physical altercation is quite low, as is the risk of getting arrested. "

This is admission that you know what you're doing is wrong, and the only reason you do it digitally is because it's more difficult to receive consequences for it.

I strongly urge you to start considering consent of other people before taking actions. You can voice your concerns, but things like taking a wallet or picking a lock is crossing the line. Either they will take the advice or not, but you cannot force it by doing things like that.


> Our society is built on trust.

Доверяй, но проверяй ("trust, but verify")

> Do you test the water from the city every time you drink it?

Not every time, but on a regular basis.

> Days like today show that, yes, the kernel team is doing their job.

...and I am happy to report that my water test results did not raise concerns.

> Yes. You are doing it without their consent. It is unethical.

I disagree that it is unethical just because it lacks consent. Whistleblowing also implies that there is no consent, yet it is considered ethical. Suppose that Facebook leaves private data out in the open, then refuses to allow anyone to test their system for such vulnerabilities. It would be unethical to respect their refusal in this regard.

> How would you like it if you had a business and random people came by and picked locks, etc, in the "name of security"? That makes zero sense.

I would find it annoying, of course. Computer hackers are annoying. It's not fun to be confronted with flaws.

The thing is, security is not about how I feel. We need to look at things in proportion. If my business was a random shoe store, then perhaps it doesn't matter that my locks aren't that great, perhaps these lockpickers are idiots. If my business houses critical files that absolutely must not be tampered with, then I can not afford to have shitty locks and frankly I should be grateful that someone is testing them, for free.

> Yes, it is morally wrong. In that scenario, you -are- the pickpocket. This is a serious boundary that is being crossed. You are not their parent. You are not their caretaker or guardian...

Can we just agree to disagree on morals?

> This is admission that you know what you're doing is wrong, and the only reason you do it digitally is because it's more difficult to receive consequences for it.

Not at all, those are two entirely separate things. I wouldn't proclaim my atheism in public while visiting Saudi Arabia - not because I think there's anything morally wrong with that, but because I don't want the trouble.

> I strongly urge you to start considering consent of other people before taking actions.

You use "consent" as if it was some magical bane word in every context. In reality, there's always a debate to be had on what should and should not require consent. For example, you just assumed my consent when you quoted my words, yet I have never given it to you.


The Human Research Protection Program Plan and the IRB determine whether something is unethical, and while these documents are based on opinions, they have weight due to consensus.

The way these (intrusive) tests (e.g. anti-phishing) are performed within organizations is with the knowledge of, and a very strongly worded contract between, the owners of the company and the party conducting the tests.

It is illegal in most of the world today. Even if you disagree with responsible disclosure you would be well advised not to send phishing mail to companies (whether your intention was to improve their security or not is beside the point).


This would absolutely be true if this were an authorised penetration test, however it was unauthorised and therefore unethical.


How exactly do you "authorize" these tests? Giving advance notice would defeat the purpose, obviously.


"We're writing research on the security systems involved around the Linux kernel, would it be acceptable to submit a set of patches to be reviewed for security concerns just as if it was a regular patch to the Linux kernel?"

This is what you do as a grownup and the other side is expected to honor your request and perform the same thing they do for other commits... the problem is that people think of pen testing as an adversarial relationship where one person needs to win over the other one.


That's not really testing the process, because now you have introduced bias. Once you know there's a bug in there, you can't just act as if you didn't know.

I guess you could receive "authorization" from a confidante who then delegates the work to unwitting reviewers, but then you could make the same "ethical" argument.

Again, from a hacker ethos perspective, none of this was unethical. From a "research ethics committee", maybe it was unethical, but that's not the standard I want applied to the Linux kernel.


> from a hacker ethos perspective, none of this was unethical.

It totally is if your goal as a hacker is generating a better outcome for security. Read the paper, see what they actually did, they just jerked themselves off over how they were better than the open source community, and generated a sum total of zero helpful recommendations.

So they subverted a process, introduced a use-after-free vulnerability and didn't do jack shit to improve it.


> It totally is if your goal as a hacker is generating a better outcome for security. Read the paper, see what they actually did, they just jerked themselves off over how they were better than the open source community, and generated a sum total of zero helpful recommendations.

The beauty of it is that by "jerking themselves off", they are generating a better outcome for security. In spirit, this reaction of the kernel team is not that different from Microsoft attempting to put asshole hacker kids behind bars for exposing them. When Microsoft realized that this didn't magically make Windows more secure, they fixed the actual problems. Windows security was a joke in the early 2000s; now it's arguably better than Linux. Why? Because those asshole hacker kids actually changed the process.

> So they subverted a process, introduced a use-after-free vulnerability and didn't do jack shit to improve it.

The value added here is to show that the process could be subverted, the lessons are to be learned by someone else.


> is to show that the process could be subverted, the lessons are to be learned by someone else.

If you show up to a kernel developer's house, put a gun to their head and tell them to approve the PR, that process can also be subverted...


It can also be subverted by abducting and replacing the entire development team by impostors. What's your point? That process security is hopeless and we should all just go home?


> What's your point? That process security is hopeless and we should all just go home?

That there's an ethical way of testing processes: ask for permission and use proven methods, such as sending N items where X are compromised and Y are not, then looking at the K items that get rejected and comparing the rejection rate for compromised items (K/X) versus non-compromised items (K/Y).

By breaking the ethical component, the entire scientific method of this paper is broken... now I have to go check the kernel pull requests list to see if they sent 300 pull requests and got one accepted or if it was a 1:1 ratio.
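To make the kind of measurement I'm describing concrete, here's a minimal sketch of a consented, blinded version of the test. Every name and number in it (run_blinded_review, toy_reviewer, the 80%/10% rates) is made up for illustration; it is not the paper's method or the kernel's actual process:

  import random

  def run_blinded_review(reviewer, n_total=20, n_compromised=5, seed=0):
      """Send n_total patches, n_compromised of them intentionally flawed,
      and report how often each group gets rejected."""
      rng = random.Random(seed)
      patches = [{"id": i, "compromised": i < n_compromised} for i in range(n_total)]
      rng.shuffle(patches)  # the reviewer must not know which patch is which

      rejected = [p for p in patches if reviewer(p)]  # reviewer returns True to reject
      k_x = sum(p["compromised"] for p in rejected)   # rejected AND compromised
      k_y = len(rejected) - k_x                       # rejected but actually clean
      return {
          "caught_rate": k_x / n_compromised,                    # the K/X ratio above
          "false_reject_rate": k_y / (n_total - n_compromised),  # the K/Y ratio above
      }

  # Toy reviewer that flags 80% of bad patches and 10% of good ones.
  def toy_reviewer(patch, _rng=random.Random(1)):
      return _rng.random() < (0.8 if patch["compromised"] else 0.1)

  print(run_blinded_review(toy_reviewer))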


> That there's an ethical way of testing processes: ask for permission and use proven methods, such as sending N items where X are compromised and Y are not, then looking at the K items that get rejected and comparing the rejection rate for compromised items (K/X) versus non-compromised items (K/Y).

Again, that's not the same test. You are introducing bias. You are not observing the same thing. Maybe you think that observation is of equal value, but I don't.

> By breaking the ethical component, the entire scientific method of this paper is broken...

Not at all. The scientific method is amoral. The absolute highest quality of data could only be obtained by performing experiments that would make Josef Mengele faint.

There's always an ethical balance to be struck. For example, it's not ethical to perform experiments on rats to develop insights that are of no benefit to these rats, nor the broader rat population. If we applied our human ethical standards to animals, we could barely figure anything out. So what do we do? We accept the trade-off. Ethical concerns are not the be-all-end-all.

In this case, I'm more than happy to have the kernel developers be the labrats. I think the tradeoff is worth it. Feel free to disagree, but I consider the ethical argument to be nothing but hot air.


This is the sort of situation where the best you could do is likely to be slightly misleading about the purpose of the experiment. So you'd lead off with "we're interested in conducting a study on the effectiveness of the Linux code review processes", and then use patches that have a mix of no issues, issues only with the Linux coding style (things go in the wrong place, etc.), only security issues, and both.

But at the end of the day, sometimes there's just no way to ethically do the experiment you want to do, and the right solution is to live with being unable to do certain experiments.
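For what it's worth, a rough sketch of how the mix described above could be assembled so reviewer outcomes can be tallied per condition later. The function and field names are hypothetical, just to make the 2x2 design concrete:

  import itertools, random

  # The four conditions described above: (style issue?, security issue?).
  CONDITIONS = list(itertools.product([False, True], repeat=2))

  def build_patch_set(per_condition=5, seed=42):
      """Build a shuffled, balanced set of synthetic 'patches' so reviewer
      outcomes can later be tallied per condition."""
      rng = random.Random(seed)
      patches = [
          {"name": "patch-%d" % i, "style_issue": s, "security_issue": v}
          for i, (s, v) in enumerate(c for c in CONDITIONS for _ in range(per_condition))
      ]
      rng.shuffle(patches)
      return patches

  patches = build_patch_set()
  print(len(patches), "patches across", len(CONDITIONS), "conditions")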


To play Devil's Advocate, I suspect that this would yield different results, because people behave differently when they know that there is something going on.


That's the thing, you just told the person to review the request for security... in a true double blind, you submit 10 PRs and see how many get rejected / approved.

If all 10 are rejected but only one had a security concern, then the process is faulty in another way.

Edit: There is this theory that penetration testing is adversarial but in the real world people want the best outcome for all. The kernel maintainers are professionals so I would expect the same level of caring for a "special PR" versus a "normal PR"


In a corporate setting, the solution would presumably be to get permission from further up the chain of command than the individuals being experimented upon. I think that would resolve the ethical problem, as no individual or organisation/project is then being harmed, although there is still an element of deception.

I don't know enough about the kernel's process to comment on whether the same approach could be taken there.

Alternatively, if the time window is broad enough, perhaps you could be almost totally open with everyone, withholding only the identity of the submitter. For a sufficiently wide time window, "be on your toes for malicious or buggy commits" doesn't change the behaviour of the reviewers, as that's part of their role anyway.


There are ways to reach the Kernel Security team that don't notify all the reviewers. It is up to the Kernel team to decide if they want to authorize such a test, and what kind of testing is permissible.


Perhaps the research just simply shouldn't be done. What are the benefits of this research? Does it outweigh the costs?


What's the harm exactly? Greg becomes upset? Is there evidence that any intentional exploits made it into the kernel? The process worked, as far I can see.

What's the benefit? You raise trust in the process behind one of the most critical pieces of software.


> What's the harm exactly?

It is wasting a lot of people's time.

> What's the benefit? You raise trust in the process behind one of the most critical pieces of software.

I'm skeptical that a research paper by some nobodies from a state university will accomplish this.


> It is wasting a lot of people's time.

If you run a test on your codebase and it passes, do you find that writing the test was a waste of time?

> I'm skeptical that a research paper by some nobodies from a state university will accomplish this.

It did for me.


Let's take a peek at how the people whose time is being wasted feel about it:

> This is not ok, it is wasting our time, and we will have to report this, AGAIN, to your university...

> if you have a list of these that are already in the stable trees, that would be great to have revert patches, it would save me the extra effort these mess is causing us to have to do...

> Academic research should NOT waste the time of a community.

> The huge advantage of being "community" is that we don't need to do all the above and waste our time to fill some bureaucratic forms with unclear timelines and results.

Seems they don't think it is a good use of their time, no. But I'm sure you know a lot more about kernel development and open source maintenance than they do, right?


I didn't intend to convey that the answer to my question is "no". That's the whole problem with tests: Most of the time, it's drudge work and it does feel like they're a waste of time when they never signal anything. That doesn't mean they are a waste of time.

Similarly, if a research paper shows that its hypothesis is false, the author might feel that it was a waste of time having worked on it, which can lead to publication bias.


These are real malicious actors.


You don't know that, but that's also irrelevant. There's always plausible deniability with such bugs. The point is that you need to catch the errors no matter where they come from, because you can't trust anyone.


Carrying out an attack for personal gain is malicious. It doesn't matter if the payload is for crypto mining, creating a backdoor for the NSA, or a vulnerability you can cite in a paper.

Pentesting unwitting participants is malicious, and in many cases illegal.


But that's the point: you're a security researcher wanting the honor of a PhD, not a petty criminal, so you're supposed to have a strong ethical background.

A security researcher doesn't just delete a whole hard drive's worth of data to prove they have the rights to delete things, they are trusted for this reason.


It is ironic that you introduce plausible deniability here. No one as concerned about security as you profess to be should consider the presence of plausible deniability as being grounds for terminating a threat analysis. In the real world, where we cannot be sure of catching every error, identifying actual threats, and their capabilities and methods, is a security-enhancing analysis.


It is unethical. You cannot experiment on people without their consent. Their own university has explicit rules against this.


TLDR?


The previous discussion seems to have suddenly disappeared from the front page:

https://news.ycombinator.com/item?id=26887670


Thanks for pointing that out. 4 hours old, 1000+ points, it seems to have been hit with an invisible penalty.


From what I understood, when a new post has a lot of comments, it disappears from the frontpage.


Absolutely absurd and illogical


It's also related to the upvotes. A post with a bad comment-to-upvote ratio usually means that a flame war is going on.


>A post with a bad comment-to-upvote ratio usually means that a flame war is going on.

No it doesn't, not unless most comments typically get upvoted, which seems counterintuitive to me.

A bad comment-to-downvote ratio indicates a flamewar, since there are no flamewars without downvotes, but more comments than upvotes just means high comment velocity (which can go either way) or just that no one is saying anything particularly interesting, which isn't inherently harmful.

A flamewar detector that hides popular threads to suppress engagement just in case there might be a flamewar is working at cross purposes with the goal of a forum, which is engagement.
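To make the distinction concrete, a toy sketch of the two signals. The thresholds, the numbers, and the assumption that anything like this runs on HN are all my own invention:

  def heuristics(upvotes, downvotes, comments):
      """Compare the two signals discussed above. Thresholds are arbitrary;
      this is NOT HN's actual algorithm, just an illustration of the argument."""
      comments_per_upvote = comments / max(upvotes, 1)
      comments_per_downvote = comments / max(downvotes, 1)
      return {
          # high comment-to-upvote ratio: could simply be a busy thread
          "flagged_by_upvote_ratio": comments_per_upvote > 1.5,
          # heavy downvoting relative to comment volume: a stronger flamewar signal
          "flagged_by_downvote_ratio": downvotes > 20 and comments_per_downvote < 10,
      }

  # Made-up numbers: a lively-but-civil thread vs. an actual flamewar.
  print(heuristics(upvotes=500, downvotes=10, comments=900))
  print(heuristics(upvotes=300, downvotes=250, comments=800))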


It's a measure to halt flamewars


A flamewar detector is absurd and illogical?


Edit: actually it was standard moderation but in a bit of an unclear way - see https://news.ycombinator.com/item?id=26894033.

We made a mistake. I'm not sure what happened but it's possible that we mistook this post for garden-variety mailing-list drama. A lot of that comes up on HN, and is mostly not interesting; same with Github Issues drama.

In reality, this post is clearly above that bar—it's a genuinely interesting and significant story that the community has a ton of energy to discuss, and is well on topic. I've restored the thread now, and merged in the dupe that was on the front page in its stead.

Sorry everybody! Our only priority is to serve what the community finds (intellectually) interesting, but moderation is guesswork and it's not always easy to tell what's chaff.


It's already being discussed on HN [1] but for some reason it's down to the 3rd page despite having ~1200 upvotes at the moment and ~600 comments, including from Greg KH. (And the submission is only 5 hours old.)

[1] https://news.ycombinator.com/item?id=26887670


Sorry, we got that wrong. Fixed now.

Edit: turns out it was just that there were two different threads on the frontpage about this story and a moderator downweighted the earlier one. That's standard moderation. Usually we merge the threads (and I've since done so) but I'm the only mod who currently does that and I wasn't online yet.


Great, thanks for fixing it!


This is another example of HN's front page submission getting aggressively moderated for no good reason. It's been happening a lot lately.


Perhaps you've been seeing it more for some reason, or it has seemed more aggressive to you for some reason, but I can tell you that the way we moderate HN's front page hasn't changed in many years.

It's clear to me now that this case was a moderation mistake. We make them sometimes (alas), but that's also been true for many years. Moderation is guesswork. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


It would be nice to have transparency for mod actions like we have with user actions (aka showdead). People are rightfully more nervous as other platforms switch to heavy-handed moderation.


I wish the title were clearer. Linux bans University of Minnesota for sending buggy patches on purpose.


The term of art for an intentional bug that deliberately introduces a security flaw is a "trojan" (from "Trojan Horse", of course). UMN trojaned the kernel. This is indeed just wildly irresponsible.


Or just "Linux bans University of Minnesota for sending malicious patches."


Yes, and robbing a bank to show that the security is lax is totally fine because the real criminals don't notify you before they rob a bank.

Do you understand how dumb that sounds?


Please review https://news.ycombinator.com/newsguidelines.html and omit name-calling and swipes from your comments here. See also https://news.ycombinator.com/item?id=26893776.

We detached this subthread from https://news.ycombinator.com/item?id=26890035.


> Do you understand how dumb that sounds?

If you make a dumb analogy, that's on you.


Same analogy... there's a vulnerability and you want to test it? Go set up a test, and notify the people.

You really think the Linux kernel guys would change their process if you did this? They'd still do the same things they do.


> Go set up a test, and notify the people.

The vulnerability is in the process, and this was the test.

> You really think the Linux kernel guys would change their process if you did this? They'd still do the same things they do.

If they're vulnerable to accepting patches with exploits because the review process fails, then the process is broken. Linux isn't some toy, it's critical infrastructure.


You can test the process without pushing exploits to the real kernel.


> You can test the process without pushing exploits to the real kernel.

No, you can't, because that is the test! If you manage to push exploits to the real kernel, the test failed. If you get caught, the test passes. They did get caught.


You totally can... contact the kernel maintainers, tell them you want them to review a merge for security, and they can give you a go/no-go. If they approve your merge, then it has the same effect as purposely compromising millions, without actually compromising millions.


Again, that's not the same, because then they will look for problems. What you want to test is that they're looking for problems all the time, on every patch, without you telling them to do so.

If they don't, then that's the vulnerability in the process.


> because then they will look for problems. What you want to test is that they're looking for problems all the time, on every patch, without you telling them to do so.

That's what they do every time.


Telling them in advance will potentially make them more alert to problems coming from a specific source. It will introduce bias.

The best they can do is notify the maintainers after they have the results of their research, and give the maintainers an easy way to recover from the vulnerabilities they intentionally created.


Since there is bound to be a sort of trust hierarchy in these commits, is it possible that bona fide name-brand university people/email addresses come with an imprimatur that has now been damaged generally?

Given the size and complexity of the Linux (/GNU) codeworld, I have to wonder if they are coming up against (or already have come up against) the practical limits of assuring safety and quality using the current model of development.


lol, this is also how Russia did its "research" with SolarWinds. Do not try to attack a supply chain or do security research without permission. They should be investigated by the FBI for doing recon on a supply chain, to make sure they weren't trying to do something worse. Minnesota leads the way in USA embarrassment once again.


Think of the potential downstream effects of a vulnerable patch being introduced into the Linux kernel: buggy software in mobile devices, servers, street lights... this is like someone introducing a bug into a university grading system.

Someone should look into who sponsored this research. Was there a state agent?


Reminds me of "It's just a prank bro" video from Filthy Frank https://www.youtube.com/watch?v=_wldE_4xjVQ


The University of Minnesota is involved with the Confucius Institute... what could go wrong when a U.S. university accepts significant funding from a hostile foreign power?

https://experts.umn.edu/en/organisations/confucius-institute


The bad actors here should be expelled and deported. The nationalities involved make it clear this is likely a backfired foreign intelligence operation and not just 'research'.

They were almost certainly expecting an obvious bad patch to be reverted while trying to sneak by a less obvious one.


In other news: the three little pigs ban wolves after wolves exposed the dubious engineering of the straw house by blowing on it for a research paper.


So if an identifiable group messes with a project, but says "it's for research!", then it's OK? I'm just confused by your comment because it seems like you are upset with the maintainers for protecting their time from sources of known bad patches. And just... why? Where does the entitlement come from?


Being a maintainer is being a gate-keeper, by definition. Don't get me started about their "time": most of these guys are paid to work on the Linux kernel, e.g. Greg Kroah-Hartman is paid by the Linux Foundation. It's literally his job. Linus has balls; I'm afraid Greg KH is a Karen compared to him.

Other than that, they got caught red-handed accepting a shit patch and complain about ethical issues when the fault is entirely on their side for not doing their job properly.

This whole thing points to a single question: how many times did they accept patches from black-hat individuals who did not disclose their intentions?

This calls the Linux development security model into question and highlights that it is insecure against such social engineering attacks, yet they still manage to play the victim. That's pitiful... Own it, say you fucked up accepting the patch, don't blame others for your own incompetence.


There is zero blaming happening, and I defy you to point to an example. But if someone you encounter is consistently playing tricks on you, why associate with them?


> But if someone you encounter is consistently playing tricks on you, why associate with them?

Do you mean that when a small minority commits an abuse (edit: questionable here), the whole group should be condemned? Methinks HN is as hypocritical as can be on this subject...


[flagged]


What evidence do you have that this is a spy? If you have evidence, you need to say what is in order to make a substantive post. If you have no evidence, then this comment is a smear and breaks the site guidelines badly. In that case please read https://news.ycombinator.com/newsguidelines.html and stick to the rules.

Edit: you've posted this sort of flamebait at least once before: https://news.ycombinator.com/item?id=26643049. This will get you banned here—we don't want this site to become nationalistic flamewar hell. No more of this please.


Just because they chose to use Chinese names doesn't make them less American. Are you suggesting non-Chinese Americans can't be spies?


Are you saying, in this particular case, that the Chinese researcher is an American citizen? That's a very bold claim. Source?


So, just based on the names (which are not American to you), you are assuming they are not?


> a Chinese spy got busted trying to poison the Linux kernel

this you?


[flagged]


This is unjustified xenophobia. And besides, if they were really trying to get bugs into the Linux kernel to further some nefarious goal, why would they publish a paper on it?

The simplest explanation is that they just wanted the publication; there's no need to blame it on the CCP or the researchers' nationality.


As I said, the research is the goal. Acknowledging China's past behaviour, and applying it to potential present actions, is not xenophobia.


> China doesn't allow its brightest and best to leave, without cause.

LOL, this is completely unfounded bollocks.


Of course, because one doesn't need permission to leave China? Or even a high enough social credit?


As of 2 years ago (pre-COVID), no. You needed a passport, and that's it. I doubt things have changed materially since then.

Some people require permission to leave (e.g. certain party members/SOE managers/etc), and I'm sure a lot of others are on government watchlists and will be stopped at the airport.

But it's patently absurd to take that and infer that every single overseas Chinese student was only allowed to leave if they spy/sabotage the West.


This is utter bullshit. I didn't need permission or a high enough social credit score to leave China.


You would not have been approved for a passport, if deemed unworthy.

Whilst other countries do this, in the West, denial to issue a passport is typically predicated upon conviction of extremely serious crimes. Not merely because some hidden agency does not like your social standing.

Further, you require a valid passport, or an 'exit permit', to exit China. You may not leave legally without one.

Not so in the West. You cannot be prevented from leaving the country, at all, passport or not. Other countries may refuse you entry, but this is not remotely the same thing.

For example, if I as a Canadian attempt to fly to the US, Canada grants US CBP the right to set up pre-clearance facilities in Canadian airports. And often airlines handle this for foreign powers as well. However, that is a foreign power denying me entry, not my government denying me the right to exit.

As an example, I can just walk across the border to the US, and have broken not a single Canadian law. US law, if I do not report to CBP, yes.

Meanwhile, one would be breaking China's laws by crossing the border from China without a passport or exit visa.


> You would not have been approved for a passport, if deemed unworthy.

Do you happen to know me in real life? How do you know if I'm worthy or unworthy to the Chinese state?


I did not indicate your worth, or lack of worth, to the Chinese state.

Instead, I stated that people are not granted exit visas, or passports, if not deemed worthy of one. It seems as if you are attempting to twist my words a bit here.


You took the thread way off topic and into nationalistic flamewar. We don't want that here. Please don't do it again!

https://news.ycombinator.com/newsguidelines.html


>This post does not deserve to be flagged.

You start with "I know this is going to be contentious"; you know this is flamebait.


Why would you assume it is flamebait? The person knows they have an opinion at the edge of the conversation, which might provoke disagreement, and they disclose that up front.


I am concerned that the kernel maintainers might be falling into another trap: it is possible that some patches were designed such that they are legitimate fixes, and moreover such that reverting them amounts to introducing a difficult-to-detect malicious bug.

Maybe I'm just too cynical and paranoid though.


Presumably the next step is an attempt to cancel the kernel maintainers on account of some politically powerful - oops, I mean, some politically protected characteristics of the researchers.


[flagged]


Cancel Linux! Anyone?


One may wonder whether the repeated attacks on Linus over the tone he was using, which went on until he had to take a break, weren't a way to cut down Linux's ability to perform by cutting off its head, which would be absolutely excellent for closed-source companies and Amazon.

Imagine: if Linux loses its agility, we may have to either "use Windows because it has continued upgrades" or "purchase Amazon's version of Linux", which would be the only ones properly maintained and thus certified for, say, government or GDPR purposes.

(I’m paying Debian but I’m afraid that might not be enough).


There are always the BSDs if something happens. Not quite as popular, but the major ones are good enough to take over completely (as in, if you thought someone would kill you for using Linux, you could replace all your Linux with some BSD by the end of the day and in a month forget about it). Don't take that as "better" - that is a different discussion - but they are good enough to substitute and move on for the most part.


[flagged]


This seems like a stretch. While the main culprit is couching their accusations of slander in accessibility-oriented language as a way to deflect, there’s little to suggest “wokeness” is at play here in any respect, and to imply otherwise kind of gives away that you’ve already settled on a “culture war” lens, regardless of how well that maps to the story’s context.


Academic reputation has always mattered, but I can't recall the last time I've seen an example as stark as "I attend a university that is forbidden from submitting patches to the Linux kernel."


Somebody should have told them that since Microsoft is now pro-open source, this wouldn't land any of them a cushy position after the blowup at uni.


This is ridiculously unethical research. Despite the positive underlying reasons, treating someone as a lab rat (in this case, maintainers reviewing PRs) feels almost sociopathic.


> Despite the positive underlying reasons

I think that is thinking too kindly of them. Sociopaths are often very well versed in giving "reasons" for what they do, but at the core it is a power play.


how do I deserve -4 for this?


From an infosec perspective, I think this is a knee-jerk response to someone attempting a penetration test in good faith and failing.

The system appears to have worked, so that's good news for Linux. On the other hand, now that the university has been banned, they won't be able to find holes in the process that may remain, that's bad news for Linux.


Is it in good faith when they were already told explicitly to not continue? That's the point where it becomes intentionally malicious IMO


When James O' Keefe tries to run a fake witness scam on the Washington Post, and the newspaper successfully detects it, the community responds with "Well played!"

When a university submits intentionally buggy patches to the Linux Kernel, and the maintainers successfully detect it, the community responds with "That was an incredibly scummy thing to do."

I sense a teachable moment, here.


I think O'Keefe is scummy, too.


Being a Linux Kernel maintainer is a thankless job. Being a Washington Post journalist is nothing more than doing Bezos' bidding and dividing the country in the name of profit.


Seems to me they exposed a vulnerability in the way code is contributed.

If this was Facebook and their response was ~"stop wasting our time" and ~"we'll report you", the responses here would be very different.


Commenters have been reasonably accusing the researchers of bad practice, but I think there's another possible take here based on Hanlon's razor: "never attribute to malice that which is adequately explained by stupidity".

If you look at the website of the PhD student involved [1], they seem to be writing mostly legitimate papers about, for example, using static analysis to find bugs. In this kind of research, having a good reputation in the kernel community is probably pretty valuable because it allows you to develop and apply research to the kernel and get some publications/publicity out of that.

But now, by participating in this separate unethical research about OSS process, they've damaged their professional reputation and probably set back their career somewhat. In this interpretation, their other changes were made in good faith, but have now been tainted by the controversial paper.

[1] https://qiushiwu.github.io/


I suppose it depends on what you make of Greg's opinion (I am only vaguely familiar with this topic, so I have none).

> They obviously were _NOT_ created by a static analysis tool that is of any intelligence, as they all are the result of totally different patterns, and all of which are obviously not even fixing anything at all. So what am I supposed to think here, other than that you and your group are continuing to experiment on the kernel community developers by sending such nonsense patches?

Greg didn't think that the static analysis excuse could be legitimate as the quality was garbage.


I know this is old, but got a link? That LKML thread is long.


That looks like a different person from the name in the article.


Researcher(s) show that it's not that hard to introduce bugs into the kernel

HN: let's hate the researcher(s) instead of the process

Wow.

Assume good faith, I guess?


The concept of the research is quite good. The way this research was carried out, is downright unethical.

By submitting their bad code to the actual Linux mailing list, they have made Linux kernel developers part of their research without their knowledge or consent.

Some of this vandalism has made it down into the Linux kernel already. These researchers have sabotaged other people's software for their personal gain, another paper to boast about.

Had this been done with the developers' consent, and with a way to pull out the patches before they actually hit the stable branches, then this could have been valuable research. It's the way the research was carried out that's the problem, and that's why everybody is hating on the researchers (rather than the research matter itself).


To provide some parallels for how the research was carried out:

I see it as similar to

- allowing recording of people without their consent (or warrant),

- experimenting on PTSD by inducing PTSD without people consent,

- or medical experimentation without the subject consent.

And the arguments about not having anyone know:

Try sneaking into the White House, and when you get caught, tell them "I was just testing your security procedures".


Submitting a patch for review to test the strength of the review process is not equivalent to inducing PTSD in people without consent or breaking into the White House. You're being ridiculous. Linux runs many of the world's financial, medical, etc. institutions, and they have exposed how easy it is to introduce a backdoor.

If this was Facebook and not Linux everyone would look upon this very differently.


The fact that issues in Linux can kill people is exactly why they need leadership buy-in first.

There are ways to test social vulnerabilities (pentesting) and they all involve asking for permission first.


Wasting the time of random open source maintainers who have not consented to your experiment to try to get your paper published is highly unethical; I don't see why this is a bad faith interpretation.


State-level actors / nation-state actors (fancy terms lately, heh) will not ask anyone for consent.


This is also unethical.


There are two separate issues with this story.

One is that what the researchers did is beyond reckless. Some of the bugs they've introduced could be affecting real-world critical systems.

The other issue is that the research is actually good in proving by practical means that pretty much anyone can introduce vulnerabilities into software as important and sensitive as the Linux kernel. This hurts the industry confidence that we can have secure systems even more than it already is.

While some praise may be appropriate for the latter, they absolutely deserve the heat they're getting for the former. There may be many better ways to prove a point.


It is not hard to point a gun at someone's head.

But let's assume your girlfriend points an (unknown to you) empty gun at your head, because she wants to know how you will react. What do you think is the appropriate reaction?


With that logic you can conduct research on how easy it is to rob elderly people in the street, inject poison in supermarket yogurts, etc.


I don't like this university ban approach.

Universities are places with lots of different students, professors, and different people with different ideas, and inevitably people who make bad choices.

Universities don't often act with a single purpose or intent. That's what makes them interesting. Prone to failure and bad ideas, but also new ideas that you can't do at corporate HQ because you've got a CEO breathing down your neck.

At the University of Minnesota there's 50k+ students at the Twin Cities campus alone, 3k plus instructors. Even more at other University of Minnesota campuses.

None of those people did anything wrong. Putting the onus on them to effect change to me seems unfair. The people banned didn't do anything wrong.

Now the kernel doesn't 'need' any of their contributions, but I think this is a bad method / standard to set to penalize / discourage everyone under an umbrella when they've taken no bad actions themselves.

Although I can't put my finger on why, this ban on whole swaths of people in some ways seems very not open source.

The folks who did the thing were wrong to do so, but the vast majority of people now impacted by this ban didn't do the thing.


It sends a strong message - universities need to make sure their researchers apply ethics standards to any research done on software communities. You can't ignore ethics guidelines like consent and harm just because it's a software community instead of a meatspace community. I doubt the university would have taken any action at all without such a response.


>It sends a strong message

At a cost mostly to people who didn't, and I'll even say wouldn't, do the bad thing.


I understand the point that you are making, but you have to look at it from the perspective of the maintainers. The email made it clear that they submitted an official complaint to the ethics board and the board didn't do anything. In that spirit, it effectively means that any patch coming from that university could be a vulnerability injection misrepresented as a legitimate patch.

The Linux kernel has limited resources, and if one university's lack of oversight is causing the whole process to be stretched thinner than it already is, then a ban seems like a valid solution.


@denvercoder9 had a good comment that might assuage your concern:

> It's not a ban on people, it's a ban on the institution that has demonstrated they can't be trusted to act in good faith. If people affiliated with the UMN want to contribute to the Linux kernel, they can still do that on a personal title. They just can't do it as part of UMN research, but given that UMN has demonstrated they don't have safeguards to prevent bad faith research, that seems reasonable.


In this case, the cost is justified. The potential cost of kernel vulnerabilities is extremely high, and in some cases cause irrecoverable harm.


If that cost is high, why are they accepting and rejecting code based on email addresses?

https://twitter.com/FiloSottile/status/1384883910039986179

(Clearly the academic behavior is also a problem, there's no good justification for asking for reviews of known bad patches)


Has the university taken action yet? All I heard was that, after blowback, UMN had their institutional review board retroactively review the paper. They investigated themselves and found no wrongdoing. (The IRB concluded this was not human-subjects research.)

UMN hasn't admitted to any wrongdoing. The professor wasn't punished in any form whatsoever. And they adamantly state that their research review processes are solid and worked in this case.

An indefinite ban is 100% warranted until such a time that UMN can demonstrate that their university sponsored research is trustworthy and doesn't act in bad faith.


> I don't like this university ban approach.

I do, because the university needs to dismiss everyone involved, sever their connections with the institution, and then have a person in a senior position email the kernel maintainers with news that such has taken place. At which time the ban can be publicly lifted.


I think the ban hits the right institution, but I'd reason the other way around: is it really the primary fault of the individual PhD student (arguably somewhat immature, considering the tone of the email)? The problem in academia is not "bad apples", but problematic organizational culture and misaligned incentives.


To me it depends on whether they lied to the ethics board or not. If they truly framed their research as "sending emails" then the individual is 100% at fault. If they clearly defined what they were trying to do and no one raised an issue then it is absolutely the university's fault.


I think it's more than whether they lied, it's whether the ethics board is even plausibly equipped to fully understand the ramifications of what they proposed to do: https://news.ycombinator.com/item?id=26890490


Well if the ethics board is not decently equipped to understand the concerns with this type of research I would say a full ban is perfectly understandable.


> The people banned didn't do anything wrong.

There are ways to do research like this (involve top-level maintainers, prevent patches going further upstream etc.), just sending in buggy code on purpose, then lying about where it came from, is not the way. It very much is wrong in my opinion. And like some other people pointed out, it could quite possibly be a criminal offense in several jurisdictions.


>There are ways to do research like this (involve top-level maintainers, prevent patches going further upstream etc.)

This is what I can't grok. Why would you not contact GKH and work together to put a process in place to do this in an ethical and safe manner? If nothing else, it is just basic courtesy.

There is perhaps some merit to better understanding and avoiding the introduction of security flaws but this was not the way to do it. Boggles the mind that this group felt that this was appropriate behavior. Disappointing.

As far as banning the University, that is precisely the right action. This will force the institution to respond. UMN will have to make changes to address the issue and then the ban can be lifted. It is really the only effective response the maintainers have available to them.


It's not a ban on people, it's a ban on the institution that has demonstrated they can't be trusted to act in good faith.

If people affiliated with UMN want to contribute to the Linux kernel, they can still do so in a personal capacity. They just can't do it as part of UMN research, but given that UMN has demonstrated it doesn't have safeguards to prevent bad-faith research, that seems reasonable.


I am writing this as someone who is very much a "career academic". I am fully on board with banning the whole university (and reconsidering the ban once the university shows they have some ethics guidelines in place). This research should never have passed ethics review. On the other hand, it sounds preposterous that we would even need formal ethics review for CS research... But this "research" really embodies the whole "this is why we can't have nice things" attitude.


A university-wide ban helps by converting the issue into an internal issue of that university. The university officials will have to figure out what went wrong and rectify it.


Probably not, because nobody else at the university is affected, and probably won't be for a dozen more years, when someone else happens to get interested in kernel work. Even in CS there are a ton of legitimate projects to work on, so a ban from just one of them, however prominent, isn't going to be noticed without more attention.

That said, I suspect enough people have taken notice by now thanks to the press coverage.


> None of those people did anything wrong. Putting the onus on them to effect change to me seems unfair. The people banned didn't do anything wrong.

Some of the people banned didn't do anything wrong. Others tried to intentionally introduce bugs into the kernel. Their ethics board either allowed that or was misled by them. Obviously there are serious issues with ethics and processes there.

I'm sure the ban can be reversed if they can plausibly claim they've changed. But since this was apparently already their second chance (they've been reported to the university before, and the university apparently decided not to act on that complaint), I have some doubts that "we've totally changed, this time we mean it" will fly.


"Some"

How many people didn't and did? The numbers seem absurd.


No way to tell. How many people at UMN normally submit kernel patches that aren't malicious? In any case, it did hit the right people, even if it potentially causes some collateral damage.

Since it's an institutional issue (otherwise it would've stopped after they were reported the first time), it doesn't seem wrong to also deal with the institution.


I understand where this is coming from and empathize with it, but I also empathize with the Kernel.org folx here. I think I'm okay with this because it isn't some government actor.

It is not always easy to identify who works for whom at a university with regard to someone's research. The faculty member who seems to be directing this is identifiable, obviously. But it is not so easy to identify anyone acting on his behalf: universities don't maintain public lists of grad or undergrad students working for an individual faculty member. Add in that there seems to be a pattern of obfuscating these patches through different submission accounts specifically to hide the role of the faculty advisor (my interpretation of what I'm reading).

Putting the onus on others is unfair... but from the perspective of Kernel.org, they do not know which members of that population are bad actors and which aren't. The goal isn't to penalize the good folks; the goal is to prevent continued bad behavior under someone else's name. It's more akin to flagging email from a certain server as spam. The goal of the policy isn't to get people to effect change, it's to stop a pattern of introducing security holes in critical software.

It is perfectly possible that this was IRB approved, but that doesn't necessarily mean the IRB really understood the implications. There are specific processes for research involving deception and for getting IRB approval for deception, but there is no guarantee that IRB members have the knowledge or experience with CS or open source communities to understand what is happening. The backgrounds of IRB members vary enormously...


The University of Minnesota IRB never should have approved this research. So this is an institutional level problem. This is not just a problem with some researchers.

It's unfortunate that many people will get caught up in this ban that had nothing to do with it, but the university deserves to take a credibility hit here. The ball is now in their court. They need to either make things right or suffer the ban for all of their staff and students.


Agree that universities don't (and shouldn't) act with a single purpose or intent, but they need to have institutional controls in place that prevent really bad ideas from negatively affecting the surrounding community. Those seem to be lacking in this case, and in their absence I think the kernel maintainers' actions are entirely justified.


I don't like it either but it's not as bad as it sounds: the ban almost certainly isn't enforced mindlessly and with no recourse for the affected.

I'm pretty sure that if someone from the University of Minnesota would like to contribute something of value to the Linux kernel, dropping a mail to GregKH will result in that being possible.


It's definitely killing a mosquito with a nuke, but what are the alternatives? The kernel maintainers claim these bogus commits already put too much load on their time. I understand they banned the whole university out of frustration and also because they simply don't have the time to deal with them in a more nuanced way.


There's a real cost. What's your estimate for going through each of these 190 patches individually, looking at the context of the code change, checking whether the "ref counting or whatever" bug fix is real, and doing some real testing to confirm it?

https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...

That looks like significant effort. And if most of those fixes were real, then after the revert there will be 190 known bugs back in the kernel until it's all cleaned up. That has a cost too.

It looks like a large and expensive mess that someone other than that university will have to clean up, because they're not trustworthy at the moment.
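To get a rough sense of the scale of that review job, one could enumerate the kernel commits authored from @umn.edu addresses in a local clone of the tree. This is only an illustrative sketch (the actual revert series linked above was assembled by the maintainers themselves); it assumes git is installed and that it is run from inside a kernel checkout:

    #!/usr/bin/env python3
    # Rough sketch: list kernel commits whose author email is at umn.edu.
    # Run from inside a local clone of the Linux kernel tree.
    import subprocess

    log = subprocess.run(
        ["git", "log", "--no-merges", "--author=@umn.edu",
         "--pretty=format:%h %ae %s"],
        capture_output=True, check=True,
        encoding="utf-8", errors="replace",  # some old commit messages aren't UTF-8
    )
    commits = [line for line in log.stdout.splitlines() if line.strip()]
    print(f"{len(commits)} commits from @umn.edu addresses")
    for c in commits[:20]:  # print a small sample
        print(" ", c)

Listing them is the easy part; each one still needs a human to re-read the surrounding code and decide whether the fix was genuine, which is where the real cost sits.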


Are they even killing a mosquito?

If someone wants to introduce bugs, they still can.

Meanwhile, lots of people are banned for some other person's actions.


Hardly anyone else at UMN is contributing patches other than these bad-faith ones, so the ban effectively only blocks one set of bugs. And given that a lot of bugs have come from this one source, banning that source bans a lot of bugs. It doesn't stop them all, but it stops some.


I don't quite understand the outrage. I'm quite sure most HN readers have been doing, or been involved in, similar experiments one way or another. Isn't A/B testing an experiment on consumers (people) without their consent?


There is a sea of difference between A/B testing your own property and maliciously introducing a bug into a critical piece of software that runs on billions of devices.


>> https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

"We did not introduce or intend to introduce any bug or vulnerability in the Linux kernel. All the bug-introducing patches stayed only in the email exchanges, without being adopted or merged into any Linux branch, which was explicitly confirmed by maintainers. Therefore, the bug-introducing patches in the email did not even become a Git commit in any Linux branch. None of the Linux users would be affected."



That's a false claim, though. There's evidence that at least one of the students involved did not do anything to alert kernel maintainers or prevent their code from reaching stable. https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...


That seems to directly contradict gkh and others (including the researchers) in the email exchange in the original post: these vulnerable patches reached stable trees and maintainers had to revert them.

They may not have been included in a release, but had gkh not intervened *this would have reached users*, especially since the researchers apparently weren't even aware their commits were reaching stable.


Isn't A/B testing usually about things like changing a layout, or comparing two things that both work, as opposed to introducing bugs?


So many comments here repeat the refrain, “They should have asked for consent first”. But wouldn't that undermine the very subject of the research, namely stealthily introducing security vulnerabilities? How would a consent request even look if it had to preserve the surprise factor? A university approaches you and says, “Would it be okay for us to submit some patches with vulnerabilities for review, and you try to guess which ones are good and which ones have bugs?” Of course you would be extra careful when reviewing those specific patches. Real malicious actors, after all, would hardly be so kind and ethical as to announce their intentions beforehand.


It could have been done similarly to how the typosquatting research was done for Ruby and Python packages. The owners of the package repositories were contacted, and the researchers waited for approval before starting. I wasn't a fan of that experiment either, for other reasons, but hiding it from everyone isn't the only option. Also, "you wouldn't have allowed me to experiment on you if I'd asked first" is a pretty disgusting attitude to have.


"you wouldn't have allowed me to experiment on you if I'd asked first"

I'm shocked the researchers didn't think this was a textbook violation of research ethics; we still talk about the effects of the Tuskegee Study on how the wider scientific community is perceived today.

This is a smaller transgression that hasn't resulted in deaths, but when it wouldn't have been difficult to do this research ethically, and we now spend real time educating people on the importance of ethics, it's perhaps even more frustrating.


>So many comments here refrain, “They should have asked for consent first”.

The Linux kernel is a very large space with many maintainers. It would be possible to reach out to the leadership of the project to ask for approval without notifying maintainers and have the leadership announce "Hey, we're going to start allowing experiments on the contribution process, please let us know if you'd like to opt out", or at least work towards creating such a process to allow experiments on maintainers/commit approval process while also under the overall expectation that experiments may happen but that *they will be reverted before they reach stable trees*.

The way they did their work could impact more than just the maintainers and affect the reputation of the Linux project, and to me it's very hard to see how it couldn't have been done in a way that meets standards for ethical research.


Well, yeah, but the priority here shouldn't be to allow the researchers to do their work. If they can't do their research ethically then they just can't do it; too bad for them.


Yeah we get to hold people who are claiming to act in good faith to a higher standard than active malicious attackers. Their actions do not comport with ethical research practices.


Ethics in research matters. You don't see vaccine researchers injecting random, non-consenting people off the street with the latest vaccine prototypes. Researchers have to come up with a reasonable research protocol. Just because the ethical way to do what the UMN folks intended isn't immediately obvious to you doesn't mean it doesn't exist.


Someone does voluntary work, and people think that gives them some ethical privilege to be asked before someone puts their work to the test? Sure, it would be nice to ask, but at the same time it renders the test useless. They wanted to see how the review goes if the reviewers aren't aware that someone is testing them. You can't do this with consent.

The wasted-time argument is nonsense too: it's not like they did this thousands of times, and besides, reviewing intentionally bad code is not a waste of time. It is just as productive as reviewing "good" code, and together with the follow-up fix patch it should be even more valuable work. It not only adds a patch, it also makes the reviewer better.

Yeah, it isn't fun when people trick you or point out that you didn't succeed at what you were trying to do. But instead of playing the victim and playing the "unethical human experiment" card, maybe focus on improving.


> They wanted to see how the review goes if the reviewers aren't aware that someone is testing them. You can't do this with consent.

Ridiculous. Does the same apply to pentesting a bank or a government agency? If you wanted to pentest those, of course you'd get approval from an executive who has the power to sanction it. Why would Linux development be an exception? Just ask GKH or someone to allow you to do this.


A ridiculous comparison indeed. There was no pentesting going on. Submitted code does not attack or harm any running system, and whoever uses it does so completely voluntarily. I don't need anyone's approval for that. The license already states that I'm not liable in any way for what you do with it.


It's just a prank bro!


Or you could cease to do the voluntary work for them, because they clearly are not contributing to your goals. This is what the kernel maintainers have chosen and they have just as much right to do so. And you can perfectly well do this with consent, there's a wealth of knowledge from psychology and sociology on how you can run tests on people with consent and without invalidating the test.


I never said they cannot stop reviewing the code. They can do whatever the heck they want. I'm not going to tell a volunteer what they can and cannot do. They don't need anyone's consent to ignore submissions, just as those who submit don't need their consent. It's voluntary: if you don't see a benefit you are free to stop, but not free to tell other volunteers what to do and what not to do.


A far better approach would be to study past patch submissions and see how many bugs were introduced as a result of those patches being accepted and applied, without any interference of any kind.

The problem with that is that it's a lot of work, and they didn't want to do that work in the first place.
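As a rough illustration of that retrospective approach (only a sketch, not anything the researchers or maintainers actually run), one could mine the kernel's existing history for commits carrying a "Fixes:" tag and count how many earlier, already-accepted commits they point back to. It assumes a local clone of the kernel tree and that the usual "Fixes: <sha> (...)" convention is followed:

    #!/usr/bin/env python3
    # Sketch: estimate how often accepted patches later needed a fix by
    # following the kernel's "Fixes: <sha> (...)" commit-message convention.
    # Run from inside a local clone of the Linux kernel tree.
    import re
    import subprocess

    FIXES_RE = re.compile(r"^Fixes:\s+([0-9a-f]{8,40})\b", re.MULTILINE)

    # %x00 and %x01 are NUL/SOH separators so commit bodies can be split safely.
    log = subprocess.run(
        ["git", "log", "--no-merges", "--grep=Fixes:",
         "--pretty=format:%H%x00%B%x01"],
        capture_output=True, check=True,
        encoding="utf-8", errors="replace",  # some old commit messages aren't UTF-8
    )

    fixed = set()
    for entry in log.stdout.split("\x01"):
        if "\x00" not in entry:
            continue
        _, body = entry.split("\x00", 1)
        fixed.update(FIXES_RE.findall(body))

    print(f"{len(fixed)} distinct earlier commits are referenced by a Fixes: tag")

That kind of after-the-fact measurement answers a similar question (how often do accepted patches turn out to be buggy?) without ever putting a deliberately broken patch in front of a reviewer.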


Exactly. They just seem mad and blame others for "wrongdoing" instead of acknowledging that they need to improve.


You misunderstood me. I said the ones who tried to "see if the bugs would be detected or not in newly submitted patches" are the lazy ones who, instead of analyzing the existing code and existing bugs, attempted to submit new ones. Actually working on analyzing existing data would have required more work than they were willing to do for their paper.


They had no intent to find vulnerabilities in the code; they intended to find and prove vulnerabilities in the review process. Totally different things.


They could have done that by using all the existing patches and reported bugs already in the codebase. But that would've required more work than submitting new code with new bugs. They chose to effectively waste other people's time instead of putting in the work needed to obtain the analysis they wanted.


You are misinformed. They did use existing bugs: they wrote real patches for them, and then submitted a flawed patch first and the real patch after the review was "successful". There is very little additional review needed, because the real patch and the flawed one are almost identical. Plus, the reviewer could actually benefit from this. It's only a waste of time because their egos were hurt and they simply decided to throw away all the actually useful work.

Your suggested "wrongdoing by being lazy" is completely made-up nonsense.


Agreed. In fact, the review process worked, and now they are going to ban all contributions from that university, as they should. I think it all worked out perfectly.


Pathetic. It did not work at all; they told them whenever they missed a planted bug.


> Someone does voluntary work and people think that gives them some ethical privilege to be asked before someone puts their work to the test?

Yes. Someone sees the work provided to the community for free and thinks that gives them some ethical privilege to put that work to the test?


I have no clue what you're trying to say, sorry.



