
This seems like a pretty scummy way to do "research". I mean, I understand that people in academia are becoming increasingly disconnected from the real world, but wow, this is low. It's not just that they're doing this (I'm sure they're not the first to think of it, for research or malicious reasons); having the gall to brag about it is a new low.



> having the gall to brag about it is a new low

Even worse: They bragged about it, then sent a new wave of buggy patches to see if the "test subjects" fall for it once again, and then tried to push the blame on the kernel maintainers for being "intimidating to newbies".

This is thinly veiled and potentially dangerous bullying.


> This is thinly veiled and potentially dangerous bullying.

Which itself could be the basis of a follow up research paper. The first one was about surreptitiously slipping vulnerabilities into the kernel code.

There's nothing surreptitious about their current behavior. They're now known bad actors attempting to get patches approved. First they did so nonchalantly, and after getting called out and rejected, they framed the rejection as an attempt at bullying by the maintainers.

If patches end up getting approved, everything about the situation is ripe for another paper. The initial rejection, attempting to frame it as bullying by the maintainers (which ironically, is thinly veiled bullying itself), impact of public pressure (which currently seems to be in the maintainers' favor, but the public is fickle and could turn on a dime).

Hell, even if the attempt isn't successful you could probably turn it into another paper anyway. Wouldn't be as splashy, but would still be an interesting meta-analysis of techniques bad actors can use to exploit the human nature of the open source process.


Yep, while the downside is that it wastes maintainers' time and they are rightfully annoyed, I find the overall topic fascinating, not repulsive. This is a real-world red-team pen test on one of the highest-profile software projects. There is a lot to learn here all around! Hope the UMN people didn't burn goodwill by being too annoying, though. Sounds like they may not be the best red team after all...


A good red-team pentest would have been to just stop after the first round of patches, not to try again and then cry foul when they get rightfully rejected. Unless, of course, social denunciation is part of the attack (and yes, it's admittedly a pretty good side channel), but that's a rather grisly social-engineering attack, wouldn't you agree?


A real world red team?

Wouldn't the correct term for that be: malicious threat actor?

Red team penetration testing doesn't involve the element of surprise, and is pre-arranged.

Intentionally wasting people's time, and then going further to claim you weren't, is a malicious act, as it intends to do harm.

I agree though, it's fascinating but only in the true crime sense.


Totally agree. It is a threat, not pen testing. Pen testing would stop when it was obvious they would or had succeeded, and notify the project so they could remedy the process and prevent it in the future. Resorting to name-calling and outright manipulative behavior is immature and counterproductive in any case except where the action is malicious.


I agree. If it quacks like a duck and waddles like a duck, then it is a duck. Anyone secretly introducing exploitable bugs in a project is a malicious threat actor. It doesn't matter if it is a "respectable" university or a teenager, it matters what they _do_.


They did not secretly introduce exploitable bugs:

Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.

https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....

> If it quacks like a duck and waddles like a duck, then it is a duck.

A lot of horrible things have happened on the Internet by following that philosophy. I think it's imperative to learn the rigorous facts and the different interpretations of them, or we will continue to do great harm and be easily manipulated.


> Which itself could be the basis of a follow up research paper.

Seems more like low grade journalism to me.


But the first paper is a Software Engineering paper (social-exploit-vector vulnerability research), while the hypothetical second paper would be a Sociology paper about the culture of FOSS. Kind of out-of-discipline for the people who were writing the first paper.


There's certainly a sociology aspect to the whole thing, but the hypothetical second paper is just as much social-exploit-vector vulnerability research as the first one. The only change being the state of the actor involved.

The existing paper researched the feasibility of unknown actors to introduce vulnerable code. The hypothetical second paper has the same basis, but is from the vantage point of a known bad actor.

Reading through the mailing list (as best I can), the maintainer's response to the latest buggy patches seemed pretty civil[1] in general, and even more so considering the prior behavior. And the submitter's response to that (quoted here[2]) went to the extreme end of defensiveness. Instead of addressing or acknowledging anything in the maintainer's message, the submitter:

- Rejected the concerns of the maintainer as "wild accusations bordering on slander"

- Stated their naivety about the kernel code, establishing themselves as a newbie

- Called out the unfriendliness of the maintainers to newbies and non-experts

- Accused the maintainer of having preconceived biases

An empathetic reading of their response is that they really are a newbie trying to be helpful who got defensive after feeling attacked. But a cynical reading of their response is that they're attempting to exploit high-visibility social issues to pressure or coerce the maintainers into accepting patches from a known bad actor.

The cynical interpretation is as much social-exploit-vector vulnerability research as what they did before. Considering how they deflected the maintainer's concerns stemming from their prior behavior and immediately pulled a whole bunch of hot-button social issues into the conversation at the same time, the cynical interpretation seems at least plausible.

[1] https://lore.kernel.org/linux-nfs/YH5%2Fi7OvsjSmqADv@kroah.c...

[2] https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...


And they tried to blow the "preconceived biases" dog whistle. I read that as a threat.



WTF. I didn't have strong feelings about that until reading this thread. Nothing like doubling down on the assholishness after getting caught, Aditya.


Intimidating new people is the same line that was lobbed at Linus to neuter his public persona. It would not surprise me if opportunists utilize this kind of language more frequently in the future.


It isn't even bullying. It is just dumb?

Fortunately, the episode also suggests that the kernel-development immune system is fully operational.


Not sure. From what I read they've successfully introduced a vulnerability in their first attempt. Would anyone have noticed if they didn't call more attention to their activities?


Can you point to this please? From my reading, it appears that their earlier patches were merged, but there is no mention of them being actual vulnerabilities. The lkml thread does mention they want to revert these patches, just in case.


From LKML

"A lot of these have already reached the stable trees. I can send you revert patches for stable by the end of today (if your scripts have not already done it)."

https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...


It's not saying that those are introduced bugs; IMHO they're just proactively reverting all commits from these people.


> > > They introduce kernel bugs on purpose. Yesterday, I took a look on 4 accepted patches from Aditya and 3 of them added various severity security "holes".

It looks like actual security vulnerabilities were successfully added to the stable branch based on that comment.


Yes, because the UMN guys have made their intent clear, and even went on to defend their actions. They should have apologised and asked for their patches to be reverted.


Which kind of sucks for everyone else at UMN, including people who are submitting actual security fixes...


There are some activities that should be "intimidating to newbies" though, shouldn't there? I can think of a lot of specific examples, but in general, anything where significant preparation is helpful in avoiding expensive (or dangerous) accidents. Or where lack of preparation (or intentional "mistakes" like in this case) would shift the burden of work unfairly onto someone else. Also, a "newbie" in the context of Linux system programming would still imply reasonable experience and skill in writing code, and in checking and testing your work.


I'm gonna go against the grain here and say I don't think this is a continuation of the original research. It'd be a strange change in methodology. The first paper used temporary email addresses, why switch to a single real one? The first paper alerted maintainers as soon as patches were approved, why switch to allowing them to make it through to stable? The first paper focused on a few subtle changes, why switch to random scattershot patches? Sure, this person's advisor is listed as a co-author of the first paper, but that really doesn't imply the level of coordination that people are assuming here.


It doesn't really matter that he/they changed MO, because they've already shown themselves to be untrustworthy. You can only get the benefit of the doubt once.

I'm not saying people or institutions can't change. But the burden of proof is on them now to show that they did. A good first step would be to acknowledge that there IS a good reason for doubt, and certainly not to whine about 'preconceived bias'.


They had already done it once without asking for consent. At least in my eye, that makes them—everyone in the team—lose their credibility. Notifying the kernel maintainers afterwards is irrelevant.

It is not the job of the kernel maintainers to justify the team's new nonsense patches. If the team has stopped bullshitting, they should defend the merit of their own patches. They have failed to do so, and instead tried to deflect with recriminations, and now they are banned.


At this point how do you even make the difference between their genuine behavior and the behavior that is part of the research?


I would say that, from the point of view of the kernel maintainers, that question is irrelevant, as they never agreed to take part in any research. Therefore, from their perspective, all the behaviour is genuinely malevolent regardless of the individual intentions of each UMN researcher.


This. This research says something about Minnesota's ethics approval process.


I'm surprised it passed their IRB. Any research has to go through them, even if it's just for the IRB to confirm "No, this does not require a full review". Either the researchers here framed it in a way that suggested no damage was being done, or they relied on the IRB lacking the technical understanding to realize what was going on.


According to one of the researchers who co-signed a letter of concern over the issue, the Minnesota group also only received IRB approval retroactively, after said letter of concern [1].

[1] https://twitter.com/SarahJamieLewis/status/13848713855379087...


In the paper they state that they received an exemption from the IRB.


I'd love to see what they submitted to their IRB to get the determination of no human subjects:

It had a high human component because it was humans making decisions in this process. In particular, there was the potential to cause maintainers personal embarrassment or professional censure by letting through a bugged patch. If the researchers even considered this possibility, I doubt the IRB would have approved this experimental protocol if laid out in those terms.


At https://research.umn.edu/units/irb/how-submit/new-study , the document on "determining that it's not human research" leads you to https://drive.google.com/file/d/0Bw4LRE9kGb69Mm5TbldxSVkwTms...

The only relevant question is: "Will the investigator use ... information ... obtained through ... manipulations of those individuals or their environment for research purposes?"

which could be idly thought of as "I'm just sending an email, what's wrong with that? That's not manipulating their environment".

But I feel they're wrong.

https://grants.nih.gov/policy/humansubjects/hs-decision.htm would seem to agree that it's non-exempt (i.e. potentially problematic) human research if "there will be an interaction with subjects for the collection of ... data (including ... observation of behaviour)" and there's not a well-worn path (survey/public observation only/academic setting/subject agrees to study) with additional criteria.


Agreed: sending an email is certainly manipulating their environment when the action taken (or not taken) as a result has the potential for harm. Imagine an extreme example of an email death-threat: That is an undeniable harm, meaning email has such potential, so the IRB should have conducted a more thorough review.

Besides, all we have to do is look at the outcome: Outrage on the part of the organization targeted, and a ban by that organization that will limit the researcher's institution from conducting certain types of research.

That this human-level harm was the actual outcome means the experiment was de facto an experiment involving human subjects.


I have to admit, I can completely understand how submitting source code patches to the linux kernel doesn't sound like human testing to the layman.

Not to excuse them at all, I think the results are entirely appropriate. What they're seeing is the immune system doing its job. Going easy on them just because they're a university would skew the results of the research, and we wouldn't want that.


Agreed: I can understand how the IRB overlooked this. The researchers don't get a pass though. And considering the actual harm done, the researchers could not have presented an appropriate explanation to their IRB.


This research is not exempt.

One of the important rules you must agree to is that you cannot deceive anyone in any way, no matter how small, if you are going to claim that you are doing exempt research.

These researchers violated the rules of their IRB. Someone should contact their IRB and tell them.


This was (1) research with human subjects (2) where the human subjects were deceived, and (3) there was no informed consent!

If the IRB approved this as exempt and they had an accurate understanding of the experiment, it makes me question the IRB itself. Whether the researchers were dishonest with the IRB or the IRB approved this as exempt, it's outrageous.


Just so you know, you appear to have been shadowbanned. I'm not sure why, probably for having a new account and getting quickly downvoted in this thread. (Admittedly you come across slightly strong, but... not outside of what I think is reasonable, so I dunno what's going on.)

I do recommend participating more in other threads and a little less in this thread, where you're repeating pretty much the same point over and over.


lol it didn't. looks like some spots are opening up at UMN's IRB. :)


Yeah, I don't think they can claim that human subjects weren't part of this when there is outrage on the part of the humans working at the targeted organization and a ban on the researchers' institution from doing any research in this area.


Yes!! Minnesota sota caballo rey. Spanish cards dude


It does prevent anyone with a umn.edu email address, be it a student or a professor, from submitting patches of _any kind_, even if they're not part of research at all. A professor might genuinely just find a bug in the Linux kernel running on their machines, fix it, and be unable to submit it.

To be clear, I don't think what the kernel maintainers did is wrong; it's just sad that all past and future potentially genuine contributions to the kernel from the university have been caught in the crossfire.


I looked into it (https://old.reddit.com/r/linux/comments/mvd6zv/greg_khs_resp...). People from the University of Minnesota have 280 commits to the Linux kernel. Of those, 232 are from the three people directly implicated in this attack (that is, Aditya Pakki and the two authors of the paper), and another 28 commits are from one individual who might not be directly involved.


He writes "We are not experts in the linux kernel..." after pushing so many changes since 2018. I am left scratching my head.


And what about the other 20 commits? (not that it is so important, but sometimes a missing detail can be annoying)


Haha


The professor, or any students, can just use a non edu email address, right? It really doesn't seem like a big deal to me. It's not like they can personally ban anyone who's been to that campus, just the edu email address.


However, if you use a personal email, you can’t hide behind “I’m just doing my research”.


no, that would get them around an automatic filter, but the ban was on people from the university, not just people using uni email addresses.

I'm not sure how the law works in such cases, but surely the IRB would eventually have to realize that an explicit denouncement by the victims means the "research" cannot go ahead.


For one, it’s a way of punishing the university.

Eg - If you want to do kernel related research, don’t go to the university of Minnesota.


Which is completely fine, IMO, because, as pointed out already, the university's IRB has utterly failed here. There is no way this sort of "research" could have passed an ethics review:

- Human subjects
- Intentionally misleading/misrepresenting things, with potential for a lot of damage, given how widespread Linux is
- No informed consent at all!

Sorry but one cannot use unsuspecting people as guinea pigs for research, even if it is someone from a reputable institution.


I think explicitly stating that no one from the university is allowed to submit patches includes disallowing them from submitting using personal/spoof addresses.

Sure they can only automatically ban the .edu address, but it would be pretty meaningless to just ban the university email host, but be ok with the same people submitting patches from personal accounts.

I would also explicitly ban every person involved with this "research" and add their names to a hypothetical ban list.


As a Minnesota U employee/student you cannot submit officially from campus or using the minn. u domain.

As Joe Blow at home who happens to go to school or work there you could submit even if you were part of the research team. Because you are not representing the university. The university is banned.


It would be hard to show this wasn’t genuine behaviour but a malicious attempt to infect the Linux kernel. That still doesn’t give them a pass though. Academia is full of copycat “scholars”. Kernel maintainers would end up wasting significant chunks of their time fending off this type of “research”.


The kernel maintainers don't need to show or prove anything, or owe anyone an explanation. The University's staff/students are banned, and their work will be undone within a few days.

The reputational damage will be lasting, both for the researchers, and for UMN.


One could probably do a paper about evil universities doing stupid things. Anyway, evil actions are evil regardless of the context. Research 100 years ago could be intentionally evil without being questioned; today, ethics should filter what research should be done or not.


>then tried to push the blame on the kernel maintainers for being "intimidating to newbies".

As soon as I read that all sympathy for this clown was out the window. He knows exactly what he's doing.


Why not just call it what it is: fraud. They tried to deceive the maintainers into incorporating buggy code under false pretenses. They lied (yes, let's use that word) about it, then doubled down about the lie when caught.


This looks like a very cynical attempt to leverage PC language to manipulate people. Basically a social engineering attack. They will surely try to present it as a pentest, but IMHO it should be treated as an attack.


I don't see any sense in which this is bullying.


I come to your car, cut your brakes, and tell you just before you go for a drive that it's just research and I will repair them. What would you call a person like that?


I'm not sure, but I certainly wouldn't call them a bully.


>I mean I understand that people in academia are becoming increasingly disconnected from the real world, but wow this is low.

I don't have data to back this up, but I've been around a while and I can tell you papers are rejected from conferences for ethics violations. My personal observation is that infosec/cybersecurity academia has been steadily moving to higher ethical standards in research. That doesn't mean that all academics follow this trend, but that unethical research is more likely to get your paper rejected from conferences.

Submitting bugs to an open source project is the sort of stunt hackers would have done in 1990 and then presented at a defcon talk.


> I don't have data to back this up, but I've been around a while and I can tell you papers are rejected from conferences for ethics violations.

IEEE seems to have no problem with this paper though.

>>> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits. Qiushi Wu and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland '21). Virtual conference, May 2021.

from https://www-users.cs.umn.edu/~kjlu/


Section IV.A:

> We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.

It seems that the research in this paper has been done properly.

EDIT: since several comments come to the same point, I paste here an observation.

They answer to these objections as well. Same section:

> Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.

And, coming to ethics:

> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.


I'm surprised that the IRB determined this to be not human subjects research.

When I fill out the NIH's "is this human research" tool with my understanding of what the study did, it tells me it IS human subjects research, and is not exempt. There was an interaction with humans for the collection of data (observation of behavior), and the subjects haven't prospectively agreed to the intervention, and none of the other very narrow exceptions apply.

https://grants.nih.gov/policy/humansubjects/hs-decision.htm


> It seems that the research in this paper has been done properly.

How is wasting the time of maintainers of one of the most popular open source project "done properly"?

Also, someone correct me if I'm wrong, but I think if you do experiments that involve other humans, you need to have their consent _before_ starting the experiment, otherwise you're breaking a bunch of rules around ethics.


They answer to this objection as well. Same section:

> Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.

And, coming to ethics:

> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.


> They answer to this objection as well. Same section:

Not sure how that passage justifies wasting the time of these people working on the kernel. Because the issues they pretend to fix are real issues and once their research is done, they also submit the fixes? What about the patches they submitted (like https://lore.kernel.org/linux-nfs/20210407001658.2208535-1-p...) that didn't make any sense and didn't actually change anything?

> And, coming to ethics:

So it seems that they didn't just mislead the developers of the kernel; they also misled the IRB, which would never have approved this without consent from the developers, since they are experimenting on humans and that requires consent.

Even in the section you quoted above, they confess that they need to interact with the developers ("this experiment will take certain time of maintainers in reviewing the patches"), so how can they be IRB-exempt?

The closer you look, the more sour this whole thing smells.


> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.

I was wondering why he banned the whole university and not just these particular researchers. I think your quote is the answer to that. I'm not sure on what basis this exemption was granted.

Here's what the NIH says about it:

Definition of Human Subjects Research

https://grants.nih.gov/policy/humansubjects/research.htm

Decision Tool: Am I Doing Human Subjects Research?

https://grants.nih.gov/policy/humansubjects/hs-decision.htm

And even if they did find some way to justify it under their own rules, some of the research subjects clearly disagree.


Because the paper states that they partially used fantasy names. So far only 4 names of real @umn.edu people from Kangjie Lu's lab have been found, which could easily be blocked, with most patches coming from two of his students, Aditya Pakki and Qiushi Wu, plus his colleague Wenwen Wang. The Wenwen Wang fixes look like actual fixes though, not malicious. Some of Lu's earlier patches also look good.

https://lore.kernel.org/lkml/20210421130105.1226686-8-gregkh... for the full list


Is "we acknowledge that this will waste their time but we're going to do it anyway" really an adequate answer to that objection?


They appear to have told the IRB they weren't experimenting on humans, but that doesn't make sense to me given that the reaction of the maintainers is precisely what they were looking at.

Inasmuch as the IRB marked this as "not human research" they appear to have erred.


Sounds like the IRB may need to update their ethical standards then. Pointing to the IRB exemption doesn't necessarily make it fine, it could just mean the IRB has outdated ethical standards when it comes to research with comp sci implications.


It doesn't make it fine, no. But it does make a massive difference — It's the difference between being completely reckless about this and asking for at least token external validation.


If by "it does make a massive difference" you mean it implicates the university as an organization rather than these individuals then you're right.


At least one human, GKH, disagrees.


To me, this further emphasizes the idea that Academia has some serious issues. If some academic institution wasted even 10 minutes of my time without my consent, I'd have a bad taste in my mouth about them for a long time. Time is money, and if volunteers believe their time is being wasted, they will cease to be volunteers, which then affects a much larger ecosystem.


Depends on your notion of "properly". IMO "ask for forgiveness instead of permission" is not an acceptable way to experiment on people. The "proper" way to do this would've been to request permission from the higher echelons of Linux devs beforehand, instead of blindly wasting the time of everyone involved just so you can write a research paper.


That's still not asking permission from the actual humans you're experimenting on, i.e. the non-"higher echelons" humans who actually review the patch.


This points to a serious disconnect between research communities and development communities.

I would have reacted the same way Greg did - I don't care what credentials someone has or what their hidden purpose is, if you are intentionally submitting malicious code, I would ban you and shame you.

If particular researchers continue to use methods like this, I think they will find their post-graduate careers limited by the reputation they're already establishing for themselves.


Saying something is ethical because a committee approved it is dangerously tautological (you can't justify any unethical behavior because someone at some time said it was ethical!).

We can independently conclude that this kind of research has put open source projects in danger by getting vulnerabilities merged that could carry serious real-world consequences. I could imagine many other ways of carrying out this experiment without the consequences it appears to have had, like perhaps inviting developers to a private repository and keeping the patch from going public, or collaborating with maintainers to set up a more controlled experiment without risks.

This seems by all appearances to be unilateral and egoistic behavior without great thought given to its real-world consequences.

Hopefully researchers learn from it and it doesn't discourage future ethical kernel research.


The goal of ethical research wouldn't be to protect the Linux kernel, it would be to protect the rights and wellbeing of the people being studied.

Even if none of the patches made into the kernel (which doesn't seem to be true, according to other accounts), it's still possible to do permanent damage to the community of kernel maintainers.


Not really done properly: They were testing out the integrity of the system. This includes the process by which they notified the maintainers not to go ahead. What if that step had failed and the maintainers missed that message?

Essentially, the researchers were not in control to stop the experiment if it deviated from expectations. They were relying on the exact system they were testing to trigger its halt.

We also don't know what details they gave the IRB. They may have passed through due to IRB's naivete on this: It had a high human component because it was humans making many decisions in this process. In particular, there was the potential to cause maintainers personal embarrassment or professional censure by letting through a bugged patch. If the researchers even considered this possibility, I doubt the IRB would have approved this experimental protocol if laid out in those terms.


In my admittedly limited interaction with human subjects research approval, I would guess that this would not have been considered a proper setup. For one thing, there was no informed consent from any of the test subjects.


The piss-weak IRB decided that no such thing was necessary, hence no consent was requested. It's impossible not to get cynical about these review boards, their only purpose seems to be to deflect liability.


In their "clarifications" [1], they say:

"In the past several years, we devote most of our time to improving the Linux kernel, and we have found and fixed more than one thousand kernel bugs"

But someone upthread posted that this group has a total of about 280 commits in the kernel tree. That doesn't seem like anywhere near enough to fix more than a thousand bugs.

Also, the clarification then says:

"the extensive bug finding and fixing experience also allowed us to observe issues with the patching process and motivated us to improve it"

And the way you do that is to tell the Linux kernel maintainers about the issues you observed and discuss with them ways to fix them. But of course that's not at all what this group did. So no, I don't agree that this research was done "properly". It shouldn't have been done at all the way it was done.

[1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....


But still, this kind of research puts undue pressure on the kernel maintainers, who have to review patches that were not submitted in good faith (where "good faith" means the author of the patch was trying to improve the kernel).


I think that was kind of the point of the research: submitting broken patches to the kernel represents a feasible attack surface which is difficult to mitigate, precisely because kernel maintainers already have such a hard job.


So what's the null hypothesis here? That human maintainers are infallible? Why does this even need to be researched?


If something is determined not to be human research, that doesn't automatically make it ethical.


TIL that opensource project maintainers aren't humans.


Something I've expected for years, but have never had evidence... until now.


Or, alternatively, that submitting buggy patches on purpose is not research.


"in all the three cases" is mildly interesting, as 232 commits have been reverted from these three actors. To my reading this means they either have a legitimate history of contributions with three red herrings, or they have a different understanding of the word "all" than I do.


A “simple” change can still require major effort to evaluate. Bogus logic on their part.


> IEEE seems to have no problem with this paper though.

IEEE is just the publishing organisation and doesn't review research. That's handled by the program committee that each IEEE conference has. These committees consist of several dozen researchers from various institutions that review each paper submission. A typical paper is reviewed by 2-5 people and the idea is that these reviewers can catch ethical problems. As you may expect, there's wide variance in how well this works.

While problematic research still slips through the cracks, the field as a whole is getting more sensitive to ethical issues. Part of the problem is that we don't yet have well-defined processes and expectations for how to deal with these issues. People often expect IRBs to make a judgement call on ethics, but many (if not most) IRBs don't have computer scientists who are able to understand the nuances of a given research project and are therefore ill-equipped to reason about the implications.


Decent odds their paper gets pulled by the conference organizers now.


The IEEE Symposium on Security and Privacy should remove this paper at once for gross ethics violations. The message should be strong and unequivocal that this type of behavior is not tolerated.


"To appear"


"To appear" has a technical meaning in academia, though—it doesn't mean "I hope"; it means "it's been formally accepted but hasn't actually been put in 'print' yet."

That doesn't stop someone from lying about it, but it's not a casual claim, and doing so would probably bring community censure (as well as being easily falsifiable after time).


"To appear" to me meant; it is under revision by IEEE, otherwise why not just to state paper was accepted by IEEE.


It is a bit more complicated, since this is a conference paper. Usually, if a conference paper is accepted, it is only published if the presentation was held (so if the speaker cancels, or doesn't show up, the publication is revoked).

Edit: All conferences are different; I don't know if it applies to this one.


I have only ever attended one conference, but I attended it about 32 times, and the printed proceedings were in my hands before I attended any talks in the last dozen or so. How does revocation work in that event?


Well, it depends on the conference. I know this to be true for a certain IEEE conference, so I assumed it to be the same for this IEEE one, but I have to admit I didn't check. You are right; I also remember the handouts at a different conference being handed out on a USB stick at arrival.


It makes sense, thank you for the explanation.


It's jargon in academia.


I'm not holding my breath. I don't think they will pull that paper.

Security research is not always the most ethical branch of computer science, to put it mildly. Those are the people selling exploits to oppressive regimes, allowing companies to sit on "responsibly reported" bugs for years while hand-wringing about "that wasn't in the attacker model, sorry our 'secure whatever' we sold is practically useless". Of course the overall community isn't like that, but the bad apples spoil the bunch. And the aforementioned unethical behaviour even seems widely accepted.


What are you trying to suggest? It's an accepted paper, the event just hasn't happened yet.


Yup, it's basically stating the obvious: that any system based on an assumption of good faith is vulnerable to bad faith actors. The kernel devs are probably on the lookout for someone trying to introduce backdoors, but simply introducing a bug for the sake of introducing a bug (without knowing if it can be exploited), which is obviously much easier to do stealthily - why would anyone do that? Except for "academic research" of course...


> why would anyone do that?

I can think of a whole lot of three letter agencies with reasons to do that, most of whom recruit directly from universities.


Academic research, cyberwarfare, a rival operating system architecture attempting to diminish the quality of an alternative to the system they're developing, the lulz of knowing one has damaged something... The reasons for bad-faith action are myriad, as diverse as human creativity.


In theory, wouldn't it be possible to introduce bugs that are seemingly innocuous when reviewed independently but, when combined, form an exploit?

Could a number of seemingly unrelated individuals introduce a number of bugs over time to form an exploit without being detected?


Yes, of course, and I'm fairly certain it's happened before, or at least there have been suspicions of it happening. That's why trust is important, and why I'm glad kernel development is not very friendly.

Doing code review at work I am constantly catching blatantly obvious security bugs. Most developers are so happy to get the thing to work, that they don't even consider security. This is in high level languages, with a fairly small team, only internal users, and pretty simple code base. I can't imagine trying to do it for something as high stakes and complicated as the kernel. Not to mention how subtle bugs can be in C. I suspect it is impossible to distinguish incompetence from malice. So aggressively weeding out incompetence, and then forming layers of trust is the only real defense.
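To make the parent question concrete, here is a minimal hypothetical sketch (all names invented for illustration, not taken from any real kernel patch) of how two individually plausible-looking changes can combine into an out-of-bounds write:

    /* Hypothetical: two "cleanup" changes that each look harmless in
     * review but combine into an out-of-bounds write. */
    #include <stdio.h>
    #include <string.h>

    #define NAME_LEN 16
    static char name_buf[NAME_LEN];

    /* "Patch 1": drops the bounds check, with the plausible justification
     * that every caller already clamps the length. */
    static void copy_name(const char *src, size_t len)
    {
        /* previously: if (len >= NAME_LEN) len = NAME_LEN - 1; */
        memcpy(name_buf, src, len);
        name_buf[NAME_LEN - 1] = '\0';
    }

    /* "Patch 2", sent later and seemingly unrelated: stops clamping in the
     * caller, with the plausible justification that the callee validates. */
    static void handle_request(const char *src, size_t len)
    {
        /* previously: copy_name(src, len < NAME_LEN ? len : NAME_LEN - 1); */
        copy_name(src, len);
    }

    int main(void)
    {
        char input[64];
        memset(input, 'A', sizeof(input) - 1);
        input[sizeof(input) - 1] = '\0';

        handle_request(input, 8);   /* safe: length is within bounds */
        printf("%s\n", name_buf);

        /* With both "patches" applied, handle_request(input, strlen(input))
         * would write 63 bytes into a 16-byte buffer. */
        return 0;
    }

Each change reads as a reasonable simplification on its own; only a reviewer who remembers both sites (possibly touched months apart, by different "contributors") sees the overflow.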


I think in some cases, you wouldn't even need multiple patches, sometimes very small things can be exploits. See: http://www.ioccc.org/


Another source on such things, although no longer an ongoing effort: http://underhanded-c.org/_page_id_2.html


Yes. binfmt and some other parts of systemd are such an example; they introduce vulnerabilities that existed in Windows 95. Not going into detail because it still needs to be fixed, assuming it was not intentional.


In that scenario, it is a genuine bug. Not a malicious actor


I believe this violates research ethics hard, very hard. It reminds me of someone aiming to research children's mental development through the study of inflicting mental damage. The subjects and the likely damages are not similar, but the approach and mentality are inconveniently so.


Yep, the first thing I thought was: how did this get through the research ethics panel? (All research at my university has to get approval.)


What I don't understand is how this is ethical, but the Sokal hoax was deemed unethical. I assume it's because in Sokal's case, academia was humiliated, whereas here the target is outside academia.


To me, this seems like a convoluted way to hide malicious actions as research (not the other way around). This smells of intentional vulnerability introduction under the guise of academic investigation. There are millions of other, less critical, open source solutions this "research" could have tested on. I believe this was an intentional targeted attack, and it should be treated as such.


The "scientific" question answered by the mentioned paper is basically:

"Can open-source maintainers make a mistake by accepting faulty commits?"

In addition to being scummy, this research seems utterly pointless to me. Of course mistakes can happen, we are all humans, even the Linux maintainers.


This observation may very well get downvoted to oblivion: what UMN pulled is the Linux kernel development version of the Sokal Hoax.

Both are unethical, disruptive, and prove nothing about the integrity of the organizations they target.


The main difference is that the Sokal Hoax worked (that is why it is notable).


Except for Linux actively running on 99% of all servers on the planet. Vulnerabilities in Linux can literally kill people, open holes for hackers, spies, etc.

Submitting a fake paper to a journal read by a few dozen academics is a threat to someone's ego. It is not in the same ballpark as a threat to IT infrastructure everywhere.


The researchers have a future at Facebook, which experimented on how to make users feel bad.

https://duckduckgo.com/?q=facebook+emotional+study&t=fpas&ia...


Agreed. Plus, I find the "oh, we didn't know what we were doing, you're not an inviting community" social-engineering response completely slimy and off-putting.


Technically analogous to pen testing, except that it wasn't done at the behest of the target, as legal pen testing is. Hence it is indistinguishable from, and must be considered, a malicious attack.


Agree, and it seems like at least this patch, despite the researcher’s protestations, actually landed sufficiently that it could have caused harm? https://lore.kernel.org/patchwork/patch/1062098/


I've been scratching my head at this one and admit I can't spot how it can be harmful. Why wouldn't you release the buffer if the send fails?


It might be a double free if the buffer is released elsewhere.


The buffer should only be released by its own completion callback, which only gets called after being successfully queued. Moreover, other uses of `mlx5_fpga_conn_send` and the related `mlx5_fpga_conn_post_recv` will free after error.

The other part of the patch, the check for `flow` being NULL, may be unnecessary since it looks like the handle is always from an active context. But that's a guess. And it's only unreachable code.

My take from this is that, despite the other patches being bad ideas, this one doesn't look like it. And because the other patches didn't make it past the mailing list, it demonstrates that the maintainers are doing a good enough job.
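For anyone following the ownership argument, here is a minimal hypothetical sketch in plain C (invented names, not the actual mlx5 code) of the rule being described: on a successful queue the completion callback owns and frees the buffer, while on a failed send the caller still owns it and may free it.

    /* Hypothetical sketch of send-buffer ownership; not real kernel code. */
    #include <stdlib.h>

    struct send_buf {
        char data[64];
    };

    /* Completion path: runs only for buffers that were actually queued,
     * and it owns (frees) the buffer. */
    static void send_complete(struct send_buf *buf)
    {
        free(buf);
    }

    /* Stand-in for the send function: on success the buffer is "queued"
     * and the completion path frees it later; on failure it is untouched. */
    static int queue_send(struct send_buf *buf, int simulate_failure)
    {
        if (simulate_failure)
            return -1;          /* never queued: caller still owns buf */

        send_complete(buf);     /* pretend the completion fires immediately */
        return 0;
    }

    int main(void)
    {
        struct send_buf *buf = malloc(sizeof(*buf));
        if (!buf)
            return 1;

        if (queue_send(buf, 1) < 0) {
            /* Error path: freeing here is correct because the completion
             * callback never runs for this buffer. Freeing on the success
             * path as well is where a double free would come from. */
            free(buf);
        }
        return 0;
    }

Under that convention, a free-on-error patch is fine; the hazard only appears if the buffer can also be freed by a completion that still fires.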


You’re right, that wasn’t one of the bad patches: https://lore.kernel.org/lkml/CAK8KejpUVLxmqp026JY7x5GzHU2YJL...


Unfortunately, we cannot be sure it is low for today's academia. So many people working there, with nothing useful to do other than flooding the conferences and journals with papers. They are desperate for anything that could be published. Plus, they know that the standards are low, because they see the other publications.


Devil's advocate, but why? How is this different from any other white/gray-hat pentest? They tried to submit buggy patches; once approved, they immediately let the maintainers know not to merge them. Then they published a paper with their findings, which weak parts in the process they think are responsible, and which steps they recommend be taken to mitigate this.


Very easy: if it's not authorized, it's not a pentest or red-team operation.

Any pentester or red team considers their profession an ethical one.

By the response of the Linux Foundation, this is clearly not authorized, nor does it fall under any bug bounty rules/framework they would offer. Social engineering attacks are often out of bounds for bug bounties, and even authorized engagements need to follow strict rules and procedures.

I wonder if there are even legal steps that could be taken by the Linux Foundation.


You can read the (relatively short) email chains for yourself, but to try and answer your question: as I understood it, the problem wasn't entirely the patches submitted for the paper, it was the follow-up bad patches and the ridiculous defense. Essentially they sent patches that were purportedly the result of static analysis but did nothing, broke social convention by failing to signal that the patches were the result of a tool, and it was deemed indistinguishable from more attempts to send bad code and perform tests on the Linux maintainers.


There is no separate real world distinct from academia. Saying that scientists and researchers whose job it is to understand and improve the world are somehow becoming "increasingly disconnected from the real world" is a pretty cheap shot. Especially without any proof or even a suggestion of how you would quantify that.


How is this different from black hats contributing to general awareness of web security practices? Open source being considered secure just because it's up on GitHub is no different from plaintext HTTP GET params being considered secure just because "who the hell will read your params in the browser", which would still be the status quo if some hackers hadn't done the "lowest of the low" and shown the world this lesson.


LKML should consider not just banning @umn.edu at the SMTP level but sinkholing the whole of the University of MN network address space. Demand a public apology and payment for compute for the next 3 years, or get yeeted.



