
From their explanation:

(3). We send the incorrect minor patches to the Linux community through email to seek their feedback.

(4). Once any maintainer of the community responds to the email, indicating “looks good”, we immediately point out the introduced bug and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our proper patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. This way, we ensure that the incorrect patches will not be adopted or committed into the Git tree of Linux.

------------------------

But this shows a distinct lack of understanding of the problem:

> This is not ok, it is wasting our time, and we will have to report this,

> AGAIN, to your university...

------------------------

You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

1. The voluntary consent of the human subject is absolutely essential.




Holy cow!! I'm a researcher and I don't understand how they thought it would be okay to skip a proper IRB review, or how an IRB could fail to catch this. The PDF linked by the parent post is quite illustrative. The first few paragraphs seem to be downplaying the severity of what they did (they did not introduce actual bugs into the kernel), but that is not the bloody problem. They experimented on people (maintainers) without consent and wasted their time (with maybe other effects too, e.g. making them wary of future commits from universities)! I'm appalled.


It's not _the_ problem, but it's an actual problem. If you follow the thread, it seems they did manage to get a few approved:

https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...

I agree this whole thing paints a really ugly picture, but it seems to validate the original concerns?


Even if the ones they did get approved were actual security holes (not benign decoys), all that it validates is that no human is infallible. Well, CONGRATULATIONS.


Right. And you would need a larger sample size to determine what % of the time that occurs, on average. But even then, is that useful and valid information? And is it actionable? (And if so, what is the cost of the action, and the opportunity cost of lost fixes in other areas?)


Open source is not watertight if a known committer from a well-known institution (in this case the University of Minnesota) decides to send buggy patches. This was caught relatively quickly, but the behavior even after being caught is reprehensible:

> You, and your group, have publicly admitted to sending known-buggy patches to see how the kernel community would react to them, and published a paper based on that work.
>
> Now you submit a new series of obviously-incorrect patches again, so what am I supposed to think of such a thing?

That they kept doing it even after being caught is beyond comprehension.


They did go to the UMN IRB per their paper and received a human subjects exempt waiver.

Edit: I am not defending the researchers, who may have misled the IRB, or the IRB, which likely has little understanding of what is actually happening


The irony is that the IRB process failed in the same way that the commit review process did. We're just missing the part where the researchers tell the IRB board they were wrong immediately after submitting their proposal for review.


IRB review: "Looks good!"


Maybe they should conduct a meta-experiment where they submit unethical experiments for IRB review. Immediately when the IRB approves the proposal, they withdraw, pointing out the ways in which it would be unethical.

Meta-meta-experiment: submit the proposal above for IRB review and see what happens.


Absolutely incredible


If you actually read the PDF linked in this thread:

* Is this human research? This is not considered human research. This project studies some issues with the patching process instead of individual behaviors, and we did not collect any personal information. We send the emails to the Linux community and seek community feedback. The study does not blame any maintainers but reveals issues in the process. The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained).


Do IRBs typically have a process by which you can file a complaint from outside the university? Maybe they never thought they would need to even check up on computer science faculty...


> You do not experiment on people without their consent.

Exactly this. Research involving human participants is supposed to have been approved by the University's Institutional Review Board; the kernel developers can complain to it: https://research.umn.edu/units/irb/about-us/contact-us

It would be interesting to see what these researchers told the IRB they were doing (if they bothered).

Edited to add: From the link in GP: "The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained)"

Okay, so this IRB needs to be educated about this. Probably someone on the kernel team should draft an open letter to them and get everyone to sign it (rather than everyone spamming the IRB contact form).



According to their website[0]:

> IRB exempt was issued

[0]: https://www-users.cs.umn.edu/~kjlu/


These two sentences from the authors' response seem contradictory: "The IRB of UMN reviewed the study and determined that this is not human research (a formal IRB exempt letter was obtained). Throughout the study, we honestly did not think this is human research, so we did not apply for an IRB approval in the beginning."

I would guess their IRB had a quick sanity check process to ensure there was no human subject research in the experiment. This is actually a good thing if scientists use their ethics and apply good judgement. Now, whoever makes that determination does so based on initial documentation supplied by the researchers. If so, the researchers should show what they submitted to get the exemption.

Again, the implication is that their university will likely make it harder to get exemptions after this fiasco. This mistake hurts everyone (if indirectly). Although, and this is being quite facetious and macabre, the researchers have inadvertently exposed a bug in their own institution's IRB process!


Combined with their lack of awareness of a possible breach of ethics in their response to Greg, I find it hard to believe they did not mislead the UMN IRB.

I hope they release what they submitted to the IRB to receive that exemption and there are some form of consequences if the mistake is on their part.


A few things about IRB approval.

1. You have to submit for review any work involving human subjects before you start interacting with them. The authors clearly state that they sought retroactive approval after being questioned about their work. That would be a big red flag for my IRB and they wouldn't approve work retroactively.

2. There are multiple levels of IRB approval. The lowest is non-regulated, which means that the research falls outside of human subject research. Individual researchers can self-certify work as non-regulated or get a non-regulated letter from their IRB.

From there, it goes from exempt to various degrees of regulated. Exempt research means that it is research involving human subjects that is exempt from continued IRB review past the initial approval. That means that IRB has found that their research involves human subjects but falls within one (or more) of the exceptions for continued review.

In order to be exempt, a research project must meet one of the exemptions categories (see here https://hrpp.msu.edu/help/required/exempt-categories.html for a list). The requirements changed in 2018, so what they had to show depends on when they first received their exempt status.

The bottom line is that the research needs to (a) pose less than minimal risk to participants and (b) be benign in nature. In my opinion, this research doesn't meet these requirements, as there are significant risks to participants' professional reputation and future employability from having publicly merged a malicious patch. They also pushed intentionally malicious patches, so I am not sure the research is benign to begin with.

3. Even if a research project is found exempt from IRB review, participants still need to consent to participate in it and need to be informed of the risks and benefits of the research project. It seems that they didn't consent their participants before their participation in the research project. Consent letters usually use a common template that clearly states the goals for the research project, lists the possible risks and benefits of participating in it, states the name and contact information of the PI, and data retention policies. IRB could approve projects without proactive participant consent but those are automatically "bumped up" to full IRB approval and approvals are given only in very specific circumstances. Plus, once a participant removes their consent to participate in a research project, the research team needs to stop all interactions with them and destroy all data collected from them. It seems that the kernel maintainers did not receive the informed consent materials before starting their involvement with the research project and have expressed their desire not to participate in the research after finding out they were participating in it, so the interaction with them should stop and any data collected from them should be destroyed.

4. My impression is that they got IRB approval on a technicality. That is, their research is on the open source community and its processes rather than the individual people who participate in them. My impression of their paper is that they are very careful to address the "Linux community" and never really talk about their interactions with people in the paper (e.g., there is no data collection section or description of their interactions on the mailing list). Instead, my impression is that they present the patches they submitted as happening "naturally" in the community and that they are describing publicly available interactions. That seems a little misleading about what actually happened and about their role in producing and submitting the patches.


I’m interested in MSU’s list of exempt categories. Most of them are predicated on the individual subjects not being identifiable. Since this research is being done on a public mailing list that is archived and available for all to read, it is trivial to go through the archive and find the patches they quote in their paper to find out who reviewed them, and their exact responses. Would that disqualify the research from being exempt, even if the researchers themselves do not record that data or present it in their paper?

What if they did a survey of passers-by on a public street that might be in view of CCTV operated by someone else?


The federal government has updated the rules for exemption in 2018. The MSU link is more of a summary than the actual rules.

The fact that a mailing list is publicly available is what made me worry about the applicability of any sort of exemption. In order for human subject research to be exempt from IRB review, the research needs to be deemed less than minimal risk to participants.

The fact that their experiment happens in public and that anyone can find their patches and individual maintainers' responses (and approval) of them makes me wonder if the participants are at risk of losing professional reputation (in that they approved a patch that was clearly harmful) or even employment (in that their employer might find out about their participation in this study and move them to less senior positions as they clearly cannot properly vet a patch). This might be extreme, but it is still a likely outcome given the overall sentiment of the paper.

All research that poses any harm to participants has to be IRB approved and the researchers have to show that the benefits to participants (and the community at large) surpass the individual risks. I am still not sure what benefits this work has to the OSS community and I am very surprised that this work did not require IRB supervision at all.

As far as work on a public street is concerned, IRB doesn't regulate common activities that happen in public and for which people do not have a reasonable expectation of privacy. But, as soon as you start interacting with them (e.g., intervene in their environment), IRB review is required.

You can read and analyze a publicly available mailing list (and this would even qualify as non-human-subject research if the data is properly anonymized) without IRB review, or at most with a deliberation of exempt status. But you cannot email the mailing list yourself as a researcher, as the act of emailing is an intervention that changes other people's environment, therefore qualifying as human subject research.


Thanks (This thread may now read a bit confusingly as I independently found that and edited my comment above)


In any university I've ever been to, this would be a gross violation of ethics with very unpleasant consequences. Informed consent is crucial when conducting experiments.

If this behaviour is tolerated by the University of Minnesota (and it appears to be so), then I suppose that's another institution on my list of unreliable research sources.

I do wonder what the legal consequences are. Would knowingly and willfully introducing bad code constitute a form of vandalism?


> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits. Qiushi Wu and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland '21). Virtual conference, May 2021.

from Lu's list of publications at https://www-users.cs.umn.edu/~kjlu/

Seems like a conference presentation at IEEE at minimum?


IEEE S&P is actually one of the top conferences in the field of computer security. Its call for papers does mention some guidance on ethical considerations.

> If a paper raises significant ethical and/or legal concerns, it might be rejected based on these concerns.

https://www.ieee-security.org/TC/SP2021/cfpapers.html

So if the kernel maintainers report the issue to the S&P PC, the paper could potentially be rejected.


Which shows that IEEE also has a problem with research ethics if they accepted such a paper.


IEEE is a garbage organization, or at least their India chapter is. 3 out of 5 professors at our university would recommend avoiding any IEEE paper published by Indian authors. Here in India, publishing trash papers with the help of one's 'influence' is a common occurrence.


Wow, that is basically the top computer security conference.


IANAL. In addition to possibly causing the research paper to be retracted due to the ethical violation, I think there is potentially civil or even criminal liability here. US law on hacking is known to be quite vague (see Aaron Swartz's case, for example).


> You do not experiment on people without their consent.

Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?

From a common sense standpoint, it seems to me this is more about medical experiments. Yesterday I put some of my kids' toys away without telling them to see if they'd notice and still play with them. I don't think I need IRB approval.


IRB (as in Institutional Review Board) is a local (as in each research institution has one) regulatory board that ensures that any research conducted by people employed by the institution follows the federal government's common rule for human subject research. Most institutions receiving federal funding for research activities have to show that the funded work follows common rule guidelines for interaction with human subjects.

It is unlikely that a business conducting A/B testing or a parent interacting with their children is receiving federal funds to support it. Therefore, their work is not subject to IRB review.

Instead, if you are a researcher who is funded by federal funds (even if you are doing work on your own children), you have to receive IRB approval for any work involving human interaction before you start conducting it.


> wouldn’t every single A/B test done by a product team be considered unethical?

Potentially yes, actually.

I still think it should be possible to run some A/B tests, but a lot depends on the underlying motivation. The distance between such tests and malicious psychological manipulation can be very, very small.


> it seems to me this is more about medical experiments

Psychology and sociology are both subject to the IRB as well.

Regardless of their department, this feels like a psychology experiment.


This is a huge stretch. It’s more of a technical or operational experiment. They are testing the review process, not the maintainers.


"I was testing how the bank processes having a ton of cash taken out by someone without an account, I wasn't testing the staff or police response, geez!"


> Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?

I would argue that ordinary A/B tests, by their very nature, are not "experiments" in the sense that restriction is intended for, so there is no reason for them to be considered unethical.

The difference between an A/B test and an actual experiment that should require the subjects' consent is that either of the test conditions, A or B, could have been implemented ordinarily as part of business as usual. In other words, neither A nor B by themselves would need a prior justification as to why they were deployed, and if the reasoning behind either of them was to be disclosed to the subjects, they would find them indistinguishable from any other business decision.

Of course, this argument would not apply if the A/B test involved any sort of artificial inconvenience (e.g. mock errors or delays) applied to either of the test conditions. I only mean A/B tests designed to compare features or behaviours which could both legitimately be considered beneficial, but the business is ignorant of which.


> Applied strictly, wouldn’t every single A/B test done by a product team be considered unethical?

Assuming this isn't being asked as a rhetorical question, I think that's exactly what turned the now infamous Facebook A/B test into a perceived unethical mass manipulation of human emotions. A lot of folks are now justifiably upset and skeptical of Facebook (and big tech) as a result.

So to answer your question: yes, if that test moves into territory that would feel like manipulation once the subject is aware of it. Maybe especially so because users are conceivably making a /choice/ to use said product and may switch to an alternative (or simply divest) if trust is lost.


It should be for all science done for the sake of science, not just medical work. When I did experiments that just involved people playing an existing video game I still had to get approval from IRB and warn people of all the risks that playing a game is associated with (like RSI, despite the gameplay lasting < 15 minutes).

Researchers at a company could arguably be deemed as engaging in unethical research and barred from contributing to the scientific community due to unethical behavior. Even doing experiments on your kids may be deemed crossing the line.

The question I have is when it applies. If you do research on your own kids but never publish, is it okay? Does the act of attempting to publish results retroactively make an experiment unethical? I'm not certain these things have been worked out, because of how rarely people try to publish anything that wasn't part of an official experiment.


It does seem rather unethical, but I must admit that I find the topic very interesting. They should definitely have asked for consent before starting with the "attack", but if they did manage to land security vulnerabilities despite the review process it's a very worrying result. And as far as I understand they did manage to do just that?

I think it shows that this type of study might well be needed, it just needs to be done better and with the consent of the maintainers.


“Hey, we are going to submit some patches that contain vulnerabilities. All right?”

If they did so, the maintainers would become more vigilant and the experiment would fail. But the key to the experiment is that maintainers are not as vigilant as they should be. It's not an attack on the maintainers, though, but on the process.


In penetration testing you are doing the same thing, but you get the go-ahead from someone responsible for the project or organization, since they are interested in the results as well.

A red team without approval is just a group of criminals. They surely could have found active projects with a centralized leadership they could ask for permission.


I don’t know much about penetration testing so excuse me for the dumb question: are you required to disclose the exact methods that you’re going to use?


Yes. You have agreements about what is fair game and what is off limits. It can be that nothing can be physically altered, what times of day or office locations are OK, if it should only be a test against web services or anything in between.


Do you? You have an agreement with part of the company and work it out with them, but does this routinely include the people who would be actively looking for your intrusion and trying to catch it? Often that is handled by automated systems which are not updated to have any special knowledge about the upcoming penetration test, and most of those supporting the application aren't made aware of the details either. The organization is aware, but not all of the people who may be impacted.


Exactly. That's answered higher up in the comment tree you are responding to.


It depends on the organization. Most that I've worked with have said everything is fine except for social engineering, but some want to know every tool you'll be running, and every type of vulnerability you'll try to exploit.


Yes, and a bank branch for example could be very interested in some social engineering to test physical security.

It is very varied. There are a lot of good and enjoyable stories out there on youtube and podcasts for anyone interested.


I tried googling, but there were too many results, haha. Do you have a few that you recommend?


What you do during pentesting is against the law if you do not discuss it with your client. You're trying to gain access to a computer system that you should have no access to. The only reason this is OK is that you have prior permission from the client to try these methods. Thus, it is important to discuss the methods used when you are executing a pentest.

With every pentesting engagement I've had, there always were rules of engagement, and what kind of things you are and are not allowed to do. They even depend on what kind of test you are doing. (for example: if you're testing bank software, it matters a lot if you test against their production environment or their testing environment)


Usually the discussion is around the end goals rather than the means, but both are fair game for discussion.


If the attack surface is large enough and the duration of the experiment long enough, it'll return to baseline soon enough, I think. It's a reasonable enough compromise. After all, if the maintainers are not already considering that they might be under attack, I'd argue that something is wrong with the system; a zero-day in the kernel would be invaluable indeed.

And well, if the maintainers become more vigilant in the long run it's a win/win in my book.


The maintainers are the process, as they are the ones reviewing it, so it's absolutely attacking the maintainers.


"We're going to, as part of a study, submit various patches to the kernel and observe the mailing list and the behavior of people in response to these patches, in case a patch is to be reverted as part of the study, we immediately inform the maintainer."


Your message would push maintainers to put even more focus on the patches, thus invalidating the experiment.


>Your message would push maintainers to put even more focus on the patches, thus invalidating the experiment.

The Tuskegee Study wouldn't have happened if its participants had taken part voluntarily, and its effects still haunt the scientific community today. The attitude of "science by any means, including by harming other people" is reprehensible and has lasting consequences for the entire scientific community.

However, unlike the Tuskegee Study, it's totally possible to have done this ethically by contacting the leadership of the Linux project and having them announce to maintainers that anonymous researchers may experiment with the contribution process, and allowing them to opt out if they do not consent, and to ensure that harmful commits never reach stable from these researchers.

The researchers chose to instead lie to the Linux project and introduce vulnerabilities to stable trees, and this is why their research is particularly deplorable - their ethical transgressions and possibly lies made to their IRB were not done out of any necessity for empirical integrity, but rather seemingly out of convenience or recklessness.

And now the next group of researchers will have a harder time as they may be banned and every maintainer now more closely monitors academics investigating open source security :)


I don't want to defend what these researchers did, but to equate infecting people with syphilis to wasting a bit of someones time is disingenuous. Informed consent is important, but only if the magnitude of the intervention is big enough to warrant reasonable concerns.


>to wasting a bit of someones time is disingenuous

This introduced security vulnerabilities to stable branches of the project, the impact of which could have severely affected Linux, its contributors, and its users (such as those who trust their PII data to be managed by Linux servers).

The potential blast radius for their behavior being poorly tracked and not reverted is millions if not billions of devices and people. What if a researcher didn't revert one of these commits before it reached a stable branch and then a release was built? Linux users were lucky enough that Greg was able to revert the changes AFTER they reached stable trees.

There was a clear need of informed consent of *at least* leadership of the project, and to say otherwise is very much in defense of or downplaying the recklessness of their behavior.

I acknowledged that lives are not at play, but that doesn't mean that the only consequence or concern here was wasting the maintainers time, especially when they sought an IRB exemption for "non-human research" when most scientists would consider this very human research.


But it wouldn't let maintainers know exactly what is happening; it only informs them that someone will be submitting some patches, some of which might not be merged. It doesn't push people into vigilance over a specific detail of a patch and doesn't alert them that there is something specific to look for. If you account for that in your experiment priors, that is entirely fine.


They apparently didn't consider this "human research"

As I understand it, any "experiment" involving other people who weren't explicitly informed of the experiment beforehand needs to be considered a lot more carefully than what they did here.


Makes sense considering how open source people are treated.


In this post they say the patches came from a static analyser, and they accuse the other person of slander for their criticisms:

> I respectfully ask you to cease and desist from making wild accusations that are bordering on slander.

> These patches were sent as part of a new static analyzer that I wrote and it's sensitivity is obviously not great. I sent patches on the hopes to get feedback. We are not experts in the linux kernel and repeatedly making these statements is disgusting to hear.

( https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah... )

How does that fit in with your explanation?


>I sent patches on the hopes to get feedback

They did not say that they were hoping for feedback on their tool when they submitted the patches; they lied about their code doing something it does not.

>How does that fit in with your explanation?

It fits the narrative of submitting hypocrite commits to the project.


But lashing out when confronted after the fact? (I can't figure out how to browse to the messages that contain said purported 'slander' - maybe it is indeed terrible slander). Normally after the show is over one stops with the performance...

edit: oh, ok I guess that post with the accusations was mid-performance? Not inconsistent, so, maybe (I'm still not clear what the timeline is).


From GKH's response, which you linked:

    They obviously were _NOT_ created by a static analysis tool that is of
    any intelligence, as they all are the result of totally different
    patterns, and all of which are obviously not even fixing anything at
    all.  So what am I supposed to think here, other than that you and your
    group are continuing to experiment on the kernel community developers by
    sending such nonsense patches?

    When submitting patches created by a tool, everyone who does so submits
    them with wording like "found by tool XXX, we are not sure if this is
    correct or not, please advise." which is NOT what you did here at all.
    You were not asking for help, you were claiming that these were
    legitimate fixes, which you KNEW to be incorrect.


> (3). We send the incorrect minor patches to the Linux community through email to seek their feedback.

Sounds like they knew exactly what they were doing.


It’s a lie, that’s how it fits.


> You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

> 1. The voluntary consent of the human subject is absolutely essential.

The Nuremberg code is explicitly about medical research, so it doesn't apply here. More generally, I think that the magnitude of the intervention is also relevant, and that an absolutist demand for informed consent in all - including the most trivial - cases is quite silly.

Now, in this specific case I would agree that wasting people's time is an intervention that's big enough to warrant some scrutiny, but the black-and-white way of some people to phrase this really irks me.

PS: I think people in these kinds of debate tend to talk past one another, so let me try to illustrate where I'm coming from with an experiment I came across recently:

To study how the amount of tips waiters get changes in various circumstances, some psychologists conducted an experiment where the waiter would randomly either give the guests some chocolate with the bill or not (control condition).[0] This is, of course, perfectly innocuous, but an absolutist claim about research ethics ("You do not experiment on people without their consent.") would make research like this impossible without any benefit.

[0] https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1559-1816...


But this is all a lie. If you read the linked thread you will see that they refused to admit to their experiment and even sent a new, differently broken patch.


Yeah, it is a bit disrespectful to the kernel maintainers to do this without gaining their approval ahead of time.


Disrespecting some programmers on the internet is, while not nice, also not a high crime.


There is sometimes an exception for things like interviews when n is only a couple of people. This was clearly unethical, and it's certain that at least some of those involved knew that. It's common knowledge at universities.


I'm confused - how is this an experiment on humans? Which humans? As far as I can tell, this has nothing to do with humans, and everything to do with the open-source review process - and if one thinks that it counts as a human experiment because humans are involved, wouldn't that logic apply equally to pentesting?

For that matter, what's the difference between this and pentesting?


Penetration testing is only ethical when you are hired by the organization you are testing.

Also, IRB review is only for research funded by the federal government. If you're testing your kid's math abilities, you're doing an experiment on humans, and you're entirely responsible for determining whether it is ethical or not, without the aid of an IRB as a second opinion.

Even then, successfully getting through the IRB process doesn't guarantee that your study is ethical, only that it isn't egregiously unethical. I suspect that if these researchers got IRB approval, then the IRB didn't realize that these patches could end up in a released kernel. That would adversely affect the users of billions of Linux machines worldwide. Wasting half an hour of a reviewer's time is not a concern by comparison.


Consent!

Usually when an organization is pen-tested, it has consented to being pen-tested (and likely even requested it).

Here there was no contact with the Linux Foundation to gain consent for the experiment.


> indicating “looks good”

I wonder how many zero days have been included already, for example by nation state actors...


You could argue that they are doing the maintainers a favor. Bad actors could exploit this, and the researchers are showing that maintainers are not paying enough attention.

If I were on the receiving end, I'd think about checking a patch multiple times before accepting it.


I'm sure that they thought this. But this is a bit like doing unsolicited pentests or breaking the locks on somebody's home at night without their permission. If people didn't ask for it and consent, it is unethical.

And further, pretty much everybody already knows that malicious actors - if they tried hard enough - would be able to sneak through hard-to-find vulns.


> Bad actors could exploit this, and the researchers are showing that maintainers are not paying enough attention.

And this is anything new?

And if I hit you over the head with a hammer while you are not suspecting it, does this prove anything other than that I am a thug? Does it help you? Honestly?


>You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

>1. The voluntary consent of the human subject is absolutely essential.

Does this also apply to scraping people's data?


> You do not experiment on people without their consent.

By this logic eg. resume callback studies aiming to study bias in the workforce would be impossible.


Meh, this means a lot of viral social experiments on Youtube violate the Nuremberg code...


Yes and?

This isn't a "gotcha" - people shouldn't do this.


Yes, and people generally don't seem upset by viral Youtube social experiments. The Nuremberg code may be the status quo and nothing more. No one here is trying to justify the code on its merits, just blindly quoting it as an authority.

Here's another idea: If it's ethical to do it in a non-experimental context, it's also ethical to do it in an experimental context. So if it's OK to walk up to a stranger and ask them a weird question, it's also OK to do it in the context of a Youtube social experiment. Anything other than this is blatantly anti-scientific IMO.

It is IRBs that need reform. They're self-justifying bureaucratic cruft: https://slatestarcodex.com/2017/08/29/my-irb-nightmare/


Nah. They aren't experimenting on people, they are experimenting on organizational processes. A very different thing.


> You do not experiment on people without their consent. This is in fact the very FIRST point of the Nuremberg code:

> 1. The voluntary consent of the human subject is absolutely essential.

Which is rather useless, as for many experiments to work, participants have to either be lied to, or kept in the dark as to the nature of the experiment, so whatever “consent” they give is not informed consent. They simply consent to “participate in an experiment” without being informed as to the qualities thereof so that they truly know what they are signing up for.

Of course, it's quite common in the U.S.A. to perform practice medical examinations on patients who are under narcosis for an unrelated operation, and they never consented to that, but the hospitals and physicians that partake in that are not sanctioned, as it's "tradition".

Know well that so-called “human rights” have always been, and shall always be, a show of air that lack substance.


> quite common in the U.S.A. to perform practice medical examinations on patients who are under narcosis for an unrelated operation

Fascinating. Can you provide links?


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7223770/

https://ctexaminer.com/2021/03/20/explicit-consent-for-pelvi...

https://www.forbes.com/sites/paulhsieh/2018/05/14/pelvic-exa...

Most of what one can find about it deals only with "intimate parts"; I am quite sceptical that this is the only thing medical students require practice on, and I think it more likely that the media only cares in this case and that in fact it is routine with many more body parts.



