
While I don't disagree with your description, there is an awful lot of scientific output which is really just fishing for significance (i.e., running lots of tests without corrections), publishing, claiming credit for discovering something, and never following up to see whether the claim generalizes.

https://en.wikipedia.org/wiki/Data_dredging
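
For anyone who hasn't seen what "running lots of tests without corrections" looks like, here is a minimal, hypothetical Python sketch (mine, not from the article or this thread): it runs many t-tests on pure noise and counts how many come out "significant" before and after a simple Bonferroni correction.

    # Sketch only: false positives from uncorrected multiple testing on pure noise.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_tests, n_samples, alpha = 100, 30, 0.05

    # Every "effect" here is noise, so any significant result is spurious.
    pvals = np.array([
        stats.ttest_ind(rng.normal(size=n_samples),
                        rng.normal(size=n_samples)).pvalue
        for _ in range(n_tests)
    ])

    print("uncorrected 'discoveries':", np.sum(pvals < alpha))            # typically around 5
    print("Bonferroni-corrected:     ", np.sum(pvals < alpha / n_tests))  # typically 0

With 100 uncorrected tests at alpha = 0.05 you expect roughly five "discoveries" from noise alone; corrections like Bonferroni (or, better, pre-registration and replication) are what keep that from turning into a publication.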

I think probably the most important thing is to get scientists to spend more time teasing apart correlated variables in order to identify plausible mechanisms.




This problem is mostly a problem of the social sciences. In physics it is important to have a theoretical explanation; if there is no explanation, physicists become excited and start to dig really hard. They value theoretical explanations, not correlations. In contrast, the social sciences lack good theory and substitute quantity of theories for quality of theory. As a result, even important correlations are often impossible to explain without resorting to ad hoc theories.

You can see in this article that the authors already suggest a theoretical explanation, and I do not doubt that we'll see follow-up studies trying to clarify the situation.


It's quite common in medical research, even in highly quantitative work. And I've seen it in every field I've worked in, which spans biology, physics, and chemistry, typically with a quantitative bent.

I once had an advisor edit my draft overnight and submit it as a paper with a bunch of juiced-up numbers that weren't true, but that made sense to the advisor even though the underlying scripts I ran didn't support them. I complained to them, the paper was withdrawn before publication, and I immediately left their group. This was in quantitative biology: hard-core bioinformatics with very sophisticated modelling.

But yeah, real experimental physics is hard to fake since reproduction is usually more straightforward than in other fields.


> I once had an advisor edit my draft overnight and submit it as a paper with a bunch of juiced-up numbers that weren't true,

I'm stating the obvious here, but that is not a good advisor in any sense. It must have been difficult to leave, but it would be the only reasonable response.


Well, if I'd wanted a career in science and didn't have ethics, then they would have been a good advisor because they knew exactly how to ride their wave of falsehood to a professorship at Berkeley.

It wasn't hard to leave; I just contacted another professor at Berkeley and joined their lab the next day. The new advisor, while fairly dull, was methodical and pedantic, and the idea of faking or juicing results would probably never have occurred to him.

In short, in science if you're not a super-genius, it can be hard to compete with the super-geniuses and the cheaters. I found it easier to move to computer engineering than stay in science.


So, you're basically telling me that there is (or was) a bioinformatics prof @ Berkeley who was fucking with the data.

Yeeeesh.

I guess my science career was relatively clean. I knew a few fellow students who got screwed over by their advisors in the sense that the advisors demanded an excessive amount of publishable work to graduate.

And I saw plenty of personality conflicts, many of which could be laid squarely in the lap of the advisor.

But I never saw or heard of outright fraud, which makes me happy.

I'm not naïve. I know fraud is everywhere. And I know there's a lot of pressure to produce interesting results. I probably just got lucky.

edit: For anyone taking the plunge into grad school, I made my choice of advisor largely based on his reputation for looking out for his students ... and on his research as a secondary consideration. That may have helped me.


If I were to make a comprehensive list of everything I've seen, it would be depressing.

When I first got to grad school I immediately joined a group that had published a paper "solving parts of protein folding" using lab-written code. About a year after the paper was written, the PI could not give me that code, "because it had been lost when an SGI was reinstalled". I don't really trust results in papers unless I can see and run the code and reproduce the authors' work, or a highly competent scientist implements their own version (I'm no good at reading a paper, writing code to implement it, and then running through all the steps of reproducing the original paper).

Another enlightening moment was when a more senior grad student told me: make sure everything you do ties back to medical research, even if the relationship is extremely distant. You can get money from the NIH for curing senators' families' diseases (cancer and heart disease).

When I finally started applying for funding on my own through the NIH R01 grant program, I was turned down without a score (meaning the proposal was considered not worth funding). The next year, I was on the study section for that grant program and saw several more experienced PIs submit proposals that were very similar to (likely copied from) mine, and they were funded. I later learned I needed to spend several years reviewing grants before I knew enough to write a successful grant (oh, and make friends with everybody else in the study section, too).

On another study section dedicated to funding the move of academic data and compute to the cloud, I turned down several proposals because they asked for money for closet (on-prem) clusters. I was not asked to return, because the people I turned down were influential.

Basically, as has been pointed out many times before, the incentive system in academia is perverse. It does not help people like me who just want to do high-quality research, take our time to get the details right, and stay out of competitions with other, more aggressive scientists. Many of us self-select out of science and end up as computer engineers or ML engineers or whatever in industry.



