We don't know how to fix science (2021) (worksinprogress.co)
106 points by bookofjoe on July 26, 2022 | 143 comments



Science does indeed have a lot of problems, but is the problem really that science is broken, or that our expectations of it are?

> Ideally, we’d like to measure the benefit provided by a study to society. We might ask: had this piece of research not been funded, would a given invention have been delayed? If so, for how long? And what was the impact of that on societal welfare? We could also try to estimate intermediate metrics of research usefulness, like whether a given basic research paper will end up being used as input to an application. It is an argument for humility, not epistemic nihilism.

The whole promise of basic science research is that it may be valuable in ways that are immeasurable, or certainly uncapitalizable. One major issue with science is that we're always hoping that it will say more than it should. I would take a more mindfulness-y approach to science. You do science because it is a good end, and if you do it for that reason, good things will come. If you chase "good science" you get the problems we have.


Capital S “Science”, where the goal is continuous funding and reputation growth, is in a likely unresolvable tizzy as long as humans have confirmation bias and a desire to be right. There’s far too much reason to cheat and to lie with p values, even if unintentionally. This isn’t new, it’s just had a light shone on it.

Applied sciences where the end goal needs to be something that actually works and not just getting published are in much better shape and arguably innately of more value to begin with.


But isn’t the solution to the former more funding? Make studies that check on existing ones worthwhile (especially if they contradict them!)


There’s a general inability to replicate as researchers aren’t publishing their data (probably because they’d be caught).

This is an incentive problem, and more funding does not fix that issue


Many people would like to get things like "cure to disease X" or "reliable way to make people healthy" or "way to transport crops to hungry regions" before things like "green jelly beans linked to acne", and this is what the complaint is about. We can fool around with jelly beans _after_ we have done the more important things.


If that's true, then it doesn't make sense for the public to invest so heavily in it. Humanities research also is valuable in ways that are immeasurable and is also a good end.

The public got invested because it had a lot of useful outcomes in the 20th century.


The point is that you don't know up front if or when that's going to happen.


Expectation of high variability or long time frame sounds different from what the other poster said.


Nobody had any use for number theory for some two and a half millennia or so, until we figured out that it can be used to secure communications and data.

How would you assess things like that?


Once again, the original poster said that we should pursue science with little to no expectation of usefulness, and should pursue it for its own sake. I am merely arguing that IF that is the case, then it may not be worth pursuing with public money.

To your suggestion, I am not arguing that we should not fund things, just because we don't have an initial understanding of how it will be useful.


I see a lot of appeals to "fund people, not projects", but the main issue I have with it is that I think it strongly biases people to select "super stars" who have impressive credentials / come from high status organizations. Of course, there's already a lot of "rich get richer" effects in science, as the article points out, but I think it would be nice to try to shift away from those things, not increase them. (Besides, isn't the resounding message in academia these days supposed to be that assessing people is full of biases that we ought to avoid? Of course you always assess the people behind a grant application to some extent, but focusing on the project would seem to be better in terms of avoiding these biases.)

The other alternative of "funding lotteries" is more suspect. As the article summarizes, the argument for lotteries is:

> Advocates of lotteries make two key critiques: a) the current system forces researchers to spend a lot of time preparing grants; and b) peer reviewers cannot reliably identify “good” grant applications. They claim that a lottery system would reduce the time spent on review (because reviewers would mostly skim the proposals to check for minimal scientific robustness) as well as the time spent on preparing proposals (because there would be less of an incentive to meticulously craft proposals, given that no matter how detailed and well written they are, they are going to be chosen at random).

Of course (a) is true, but as the article suggests, the evidence for (b) is a bit shaky. Moreover, we should have strong priors against (b). To be sure, reviewing is not perfect, far from it! But, scientists must at some level be able to judge the future prospects of work, or else they wouldn't be able to make any fruitful decisions about what research to conduct. So the primary plausible way I see that (b) could perhaps be true for a given pool of applications is if all of the ones in the pool pass some threshold bar for quality that makes it hard to further distinguish between them. It's not clear to me that that quality level would be maintained if we move to a lottery.


> I see a lot of appeals to "fund people, not projects", but the main issue I have with it is that I think it strongly biases people to select "super stars" who have impressive credentials / come from high status organizations.

Please, someone, correct me if I'm wrong, but I think this is essentially how the European Commission allocates scientific funding--professors basically get big block grants to fund their labs to do whatever. Then there are some big grants that go to institutions, with those like ETH Zurich soaking up a lot of the funds, and then those institutions can distribute to the smaller ones as they see fit. Rubbing up to high-profile professors at ETH or GeoForschungsZentrum feels a lot like petitioning King Frederic of Prussia, rather than anything merit-based.

It's really not clear to me that this is a better system; at least in my field (geosciences) I don't think that the European labs are more innovative or productive than their American counterparts, which is what the 'fund people, not projects' idea is supposed to enable. Furthermore, it completely blocks nonacademic research funding, or much project-specific funding. For example, I work for an applied science nonprofit based in Italy, and we have a lot of ideas that would (in principle) be big hits at NSF, but there is no real avenue to fund ~€100-300k projects on an ad-hoc/project-specific basis.


> but there is no real avenue to fund ~€100-300k projects on an ad-hoc/project-specific basis.

A long time ago I had a colleague who lamented that his projects were never selected by the R&D funding committee of our employer. I suggested that he multiply his proposed budget by ten, and at the next round his proposal was accepted! This was at a time when there was much more money in the telecom sector than now.

So please do not hesitate to multiply your proposed budget by 10. It does not have to be more ambitious on your side; all that is needed is to find additional partners who could bring value to your proposal, for example by implementing it.

(I was both work package and project leader for EU FP7 projects)


How does it benefit the EU tax payer when projects that could be successfully done for 300k, are blown up to 3M with questionable added value? That's exactly the kind of problem that needs to be solved: You are just advocating to game the EU bureaucracy better.


> How does it benefit the EU tax payer when projects that could be successfully done for 300k, are blown up to 3M with questionable added value?

At least in FP7, the EU wanted implementations and new markets or businesses, because they didn't want to pay for something they didn't understand.

Trade metrics are easy to understand and less objectionable than funding someone with an innovative idea in a domain where there are few experts. The EU Commission itself also has to report on its actions to the EU Parliament and EU Council.

> You are just advocating to game the EU bureaucracy better.

You may not know it, but it happens that the EU Commission, or national authorities, ask for reimbursement of funds if they think these have been gamed. Tricking the "EU" is not advisable.


You are just restating that for the EU, bureaucracy is more important than genuine research. Furthermore, the "gaming" I referred to is of course the legally allowed game that the EU is actually expecting you to play, and that you advised in your previous comment.


> You are just restating that for the EU, bureaucracy is more important than genuine research.

Perhaps, but that's not my impression. My impression is (as in this thread) that there are no good intrinsic metrics for research, except whether it shows an impact on society.


There are no good intrinsic metrics for research; we can agree on that. Impact on society is not a good measure either, because if you had funded 10 smaller research projects instead of one big one, you might have gotten a much bigger impact on society, but one not so easy to measure and attribute, and maybe not as quick to do the impacting.

So yes, bureaucracy. Because instead of actually doing the job that needs to be done, the EU is more interested in covering their own asses so that it looks as if they were doing the job. But the EU is not special in any way here, except that it is a very big example of a bureaucracy, of course.


This reminds me of many anecdotal reports of companies having difficulty selling in Japan, only to be told to "add a zero" to their prices and find huge success. Too cheap = not very good/desirable, in other words.


> Too cheap = not very good/desirable, in other words.

I think that's the EU mindset (at least in FP7): it wants its funds to have the greatest possible impact, and certainly a small project can't have much impact.


I think funding lotteries are rarely proposed as straightforward lotteries, there is usually still an evaluation step that filters some 50% of proposals. It’s based on the observation that there is usually consensus about what is a reasonable grant proposal, and very little consensus about what is a great grant proposal.


Yes, I agree that it's the fine distinctions at the upper end that are hard to make. But if we're going to eliminate 50% of the proposals, is that really going to reduce the amount of time people spend on them? It might reduce reviewer time (which would certainly be welcome!), but I'm not so sure it reduces the submitter's time.

It comes down to whether the large amount of time spent is from people polishing and re-submitting stuff that's good but not "great", or whether it comes from the initial stages of getting something to that good quality to start. In my experience, it's the latter. And if that's typical, then I don't see how the lottery helps.


> But if we're going to eliminate 50% of the proposals, is that really going to reduce the amount of time people spend on them?

It should do - if you're only aiming for the top 50% of proposals rather than aiming to be the best, then you can submit a proposal that's "good enough" rather than putting in a lot of time squeezing out the final few percent of marginal improvement.


If it’s a lottery you don’t need to make a hard cut off of 50%. You just pass all proposals that meet the criteria. That means you only have to design the proposal so it meets those criteria and any extra work on it is useless since it will be ignored and just get thrown in the same pile to be randomly chosen.


What happens when, and I know this will be rare, our ideas of reasonable are wrong? Seems like that ends up being a dead end where the research needed to correct our misunderstanding can't get funded.


In Ricón's Fund People Not Projects III article [0] he relates:

> Ioannidis et al. (2014) note there are 15M scientists that published anything in the 1996-2011 period, but only 1% that has published every single year in this period. This smaller 150k-strong group accounts for 40% of all papers and 87% of all papers with >1000 citations.

One percent seems like a good size for a major niche; those who do the bulk of the publishing.

This is important, but if that's the only niche you're looking at you're bound to miss 99 percent of what you want to find.

[0] https://nintil.com/newton-hypothesis

Edit: Maybe the people that need to be funded most are secretaries for some of the other 99 percent.


> you're looking at you're bound to miss 99 percent of what you want to find.

Or those 1% are real professionals who know their stuff, while the others are publishing because they are doing some internship, or because publishing low-quality articles in an obscure domain is a way to evade criticism (how many ALS experts are there in academia?), or some other short commitment to a topic.

I read a lot of literature on ALS (Lou Gehrig's disease) and clearly 99% of publications have no value at all, even when their university claims "a breakthrough".

As far as I know there were breakthroughs only every ~5 years (TDP-43 in 2006, C9orf72 in 2011, FUS in 2010) and no similar breakthrough since. Big companies like Biogen, which bet heavily on various genetic (ASO) therapies, are now removing drugs from their pipeline. We clearly do not understand the field, despite publishing 15,000 articles per year!


The main issue with the people-not-projects idea is a transition problem. New systems are desired because the current system isn't working well enough. How do you identify the superstars? By relying on metrics created by the current system.

In particular, any system that identifies superstars would probably have identified Lesné, the guy who seems to have been forging Alzheimer's research. He had already obtained huge and lavish grants for his lab on the back of this work. Winner-takes-all is a system that would throw accelerants on the fire of fraud, as presumably once a scientist has managed to get a blank cheque cut, nobody is incentivized to scrutinize how that money is being spent too closely. They're a superstar, after all.


I think that a lottery has potential if we use peer review as the bar for quality.

My prior is that peer review is better than random chance. This seems sensible; after all, a human being should be better at determining scientific validity than random chance. Under that assumption, it makes sense to have peer review to narrow the pool down to a smaller number of grant proposals.

It doesn't seem unreasonable that peer review could get the "best" proposal into a pool of 10-15 applicants around 90% of the time. I think that would reduce the pressure on proposals by a decent amount.


Isn’t that what’s broken? Everyone is trying to game peer review.


There are so many major problems with 'fund people not projects'. Science is already a little bit broken. Scientists aren't supposed to dragnet data until they get the results they want, but it still does happen. However, if we were funding people and not projects, this would happen almost every time.

I also think politics in science would become even more of a problem. All of our money for science would go toward funding research that backs their worldview.

I also think we'd see even more social-media-driven science: "I am going to research this because I can write about it on social media/YouTube and get lots of clicks/likes -> money"


Yes we do. Stop the “publish or be damned” daftness and hire people who are actually trained in the disciplines.

That will cost more but that’s a solution.


Q: Science is 'broken' (faked / rosy bias in data and publication, less useful results, lack of public trust, etc). // "How do we ''fix'' Science?"

Hypothesis: TL;DR Goodhart's law: 'the measure became the target and thus is no longer a good measure' -- Publish or die / secure funding or die has led to risk-averse research, a lack of research into new fields, and a lack of reproduction / verification of existing research (checking the known is not 'sexy' / profitable).

Experiment: Proposal: Create a 200-year, inflation-adjusted, guaranteed-funding tenure track that rewards public domain / commons expansion research and ensures a lifelong appointment (and retirement) based on some other excellence requirement. Maybe revisit former initiatives like 'Project Paperclip' (WWII) for criteria suggestions.

Data Collection / Reports: Conduct the real experiment, for the first of at least 200 years OR until all enrolled applicants retire / expire. Issue interim reports every couple years (4, 10, something) to see if clear results contrary to the typical environment persist.


Read the first paragraph. We don't know if those strategies will work; there are no data.


Sounds like a great moment to try an experiment.


Which is the point of the piece.


Maybe the ponderings of a highly respected thinker of the 20th century might be useful in this discussion; this PDF includes Bertrand Russell's essay "Science and Values". The essay is found on p. 619 of The Basic Writings of Bertrand Russell, a section of Part XVI, 'The Philosopher and Expositor of Science'.

[https://emilkirkegaard.dk/en/wp-content/uploads/The-Basic-Wr...]

"Science used to be valued as a means of getting to know the world; now, owing to the triumph of technique, it is conceived as showing how to change the world."

That it has.


Something that a lot of people forget is that the pondering of those highly respected thinkers was due, in large part, to them hailing from influential and rich families.

Russell? British aristocracy. Von Neumann? Heir to a newly titled Hungarian magnate. Wittgenstein? Wealthiest family in Austria-Hungary. De Broglie? French aristocracy. They, of course, were insanely smart, but putting that intelligence to such a wonderful use was enabled by their background.

This is a sad state of affairs for researchers today because, lacking the pre-existing money to explore, they must become subservient to those who have it: grants and funding.


This is basically the difference between the proletariat and the bourgeoisie: one needs to work for someone (even if only for its customers, as a business owner) to survive, and the other can do whatever they want while living off of previously accumulated capital.

Not having to worry about how to get bread on the table opens up the door for all sorts of interesting creative labour!


Einstein? Clockmaker's son.

Newton? Grew up with a reverend as a step father.

Kelvin? Son of a teacher.

Faraday? Son of a blacksmith.

Planck? University professors.

Our obsession with credentials is preventing more people from being scientists today than the classism of the 19th century did.


private tutors are probably the key: https://wikipedia.org/wiki/Bloom%27s_2_sigma_problem


In the case of such research prodigies, that is a risky thesis. The interaction factors were more important.

That said, almost all of them came from families that did not have to struggle to put food on the table, or they quickly escaped that environment with external funding. The finding is that you need "free" time to actually do science.


What is the thesis here? Inaction because of insufficient certainty? Isn't that the exact issue that landed us in this mess?

Science requires action in the face of uncertainty.


The thesis is that we should be experimenting on how to get the best results. In other words, we should treat science funding scientifically, testing hypotheses, examining results and course correcting based on the evidence.

To your point about inaction, from the article:

"""

This does not mean we should wait decades before implementing any change. Waiting for strong, crystal-clear evidence to act would be engaging in the same flawed thinking that led to the claim that “there is no evidence that masks work” which we heard last year—the costs of delays or inaction would be high. Demands for open access, “red teaming” science, or running multiple-lab reproducibility studies (be it in the social or life sciences, or elsewhere) shouldn’t get stalled by the lack of RCTs. Where there are strong theoretical considerations, indirect evidence, and broad agreement that a proposal will improve science, without a serious cost if we’re wrong, we should just go ahead in at least some cases, and assess the benefits afterwards.

"""

The last paragraph sums up the thesis:

"""

Those interested in meta-science may disagree about what the best way to reform science is, but all of us can agree that we need more evidence about the proposals being made. We have many interesting, reasonable ideas ready to be tried. It is a glaring irony that the very same institutions that enable practitioners of the scientific method to do their work don’t apply that same method to themselves. It is time to change that.

"""


Do people really find the thing mentioned in the last paragraph that ironic? To be honest, I do not (except, maybe, if the scientists are economists who work in industrial organization, or something). Many natural scientists I know are deeply skeptical of our ability to accrue this kind of subtle knowledge about complex human processes and social organizations through experiments...


> Waiting for strong, crystal-clear evidence to act would be engaging in the same flawed thinking that led to the claim that “there is no evidence that masks work” which we heard last year—the costs of delays or inaction would be high.

We are getting a real-time test of whether mask mandates work in the wild: https://www.abc.net.au/news/2022-07-22/nz-jacinda-ardern-cov...

Australia has no mask mandates, NZ does. NZ has a higher death and hospitalization rate than Australia. The answer is that it's health theater on par with shoe removal at airports.

Of course we have the typical bait and switch of pretending that masks as used in an operating theater are in any way the same as the crumpled cloth mask from your pocket.


I don't really know what you're talking about.

NZ looks to me to have a lower death rate than Australia. See [0].

I haven't done a deep dive of the literature but from what I understand wearing face masks is pretty well established to correlate with lower infection rates. A Google search gives [1] which seems pretty reputable to me.

It sounds like you're picking a fight about your personal cause celebre? The original article is talking about trying to figure out how to build evidence. The underlying assumption is that once you have evidence, you use it to inform decisions. You're pushing a false narrative by not only ignoring evidence but also misinterpreting or lying about the evidence that's available.

[0] https://aatishb.com/covidtrends/?data=deaths&location=Austra...

[1] https://www.publichealthontario.ca/-/media/Documents/nCoV/CO...


He mentioned masks because the article itself uses masks as an example of a belief the author claims is false, as cited in the post above.

By the way, lots of people believe masks don't work. You can line up places that differ only by mask mandate status and observe that the case curves are the same. There are many situations like that, but in theory you only need one to disprove the claim that mask mandates (always) work. The case against masks is made by Ian Miller in this book:

https://www.amazon.com/Unmasked-Global-Failure-COVID-Mandate...


>You're pushing a false narrative by not only ignoring evidence but also misinterpreting or lying about the evidence that's available.

In the past month Australia has not had a mask mandate, NZ has. NZ has had more than twice the per-capita deaths and hospitalization of Australia. If masks worked it should have been the reverse.

https://ourworldindata.org/coronavirus/country/new-zealand


I just don't see what you're talking about.

The same link but with Australia and New Zealand in every chart [0] just blatantly disproves your narrative.

I'm just not that familiar with the mask mandates of NZ and AUS. AUS looks to have lifted their mask mandate on June 17th 2022? Looking at the link you sent, to my eyes, there's a clear upward trend of deaths in AUS nearly three weeks after June 17th (around July 10th 2022).

The numbers for both are so low, relative to other countries like the US, so it's kind of hard to really read too much into them but NZ looks at worst to be 20% more per capita than AUS.

[0] https://ourworldindata.org/coronavirus/country/new-zealand?c...


The entire premise of the argument is flawed anyway. There are many factors for cases and hospitalizations / deaths. You can't cherry pick one comparison, at one point of time, and determine the attribution of one specific policy out of all the noise.

However, the only gold standard study done on cloth masks to my knowledge found no statistically significant benefit. If we are talking about cloth masks, I think we would have to accept that any benefit from them is very minimal at best.

Other types of masks like surgical or of course n95 do prove beneficial.


Now divide by population: 5 million for NZ, 25 million for Australia and 330 million for the US.

Currently NZ has the equivalent of 2000 US deaths a day, Australia 1300 US deaths a day and the US has 500.
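
For what it's worth, here is a minimal sketch of the per-capita arithmetic behind those "US-equivalent" figures; the daily death counts below are assumed, illustrative inputs chosen only to show the scaling, not sourced data:

    # Sketch of the "US-equivalent" scaling: deaths/day rescaled to the US population.
    # The daily death figures are illustrative assumptions, not sourced data.
    populations = {"NZ": 5e6, "Australia": 25e6, "US": 330e6}
    daily_deaths = {"NZ": 30, "Australia": 100, "US": 500}  # assumed inputs

    for country, deaths in daily_deaths.items():
        us_equivalent = deaths * populations["US"] / populations[country]
        print(f"{country}: about {us_equivalent:.0f} US-equivalent deaths per day")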


I see, you're looking at "Daily new confirmed COVID-19 deaths per million people" which has a 7-day rolling average.

So, basically, during the last week or so, NZ had roughly 25 deaths per day whereas AUS had 50-70 per day in that week.

My apologies for not interpreting the data correctly.

The data still disproves your narrative. That one graph does have AUS at 2x NZ in a few key dates but overall no.

I'm going to stop arguing about this. I think the utility of further discussion has already diminished to the point of minimal reward.


If Yoda were a scientist he would say something like,

No proposal

No committee

No eminence

No hesitation

Just do


I would love to just do!

But I need money to just do. Approximately a million dollars.

Who is going to give me a million dollars without a proposal as to my plan on how to spend that?

Is one person going to be the decision maker on who gets money? That seems like it would be susceptible to strong biases. A committee could make decisions with less bias.

And how will they discern between proposals? Money isn’t unlimited, and a million dollars is a lot of money — you don’t want to just give that away to anybody. Some prior track record of success on the part of the applicant would help demonstrate their ability to accomplish the proposed research. Maybe they would demonstrate that prowess somehow, through written means perhaps, like a document, published for all to read.

So from first principles we’ve arrived back at the current situation. I don’t think things are perfect, but throwing out the whole system without understanding why it’s there will just lead to more dysfunction.

Yes no one likes writing grants, and yes publishing can be gamed. But until you solve the fact that research costs money, and a few people have control over most of the available money, then we’re (scientists) going to have to spend a lot of time convincing them to give us some of what they have.


Everything you say is so true, and the last line is the way it's always been:

>research costs money, and a few people have control over most of the available money, then we’re (scientists) going to have to spend a lot of time convincing them to give us some of what they have.

Definitely, and I think it pays to be very adaptable whether you are operating with an academic approach or not.

Institutions that have developed over sometimes hundreds of years can be hard to beat. I wouldn't want to stop that kind of progress. But I think there should be alternatives, and I do think a lottery has its place.

>I need money to just do. Approximately a million dollars.

I'm a lot worse off than that.

It would take $10 million to build the kind of lab I need, to deploy the full one percent of my findings.

So I would have to accept the situation, being likely to settle for less than one percent and focus on that.

And to potentially scale up from there I think I'd have much better luck by personal meetings & communication with money people compared to what results I could expect from issuing scientific papers. Where I have had some good luck when reaching a commercially viable milestone.

But that's seriously a lot of money so I would have to really be ready for some high-touch sales.

>Who is going to give me a million dollars without a proposal as to my plan on how to spend that?

I know what you mean.

And I'm at the other end of the spectrum, without a PhD. Making outright grants even more elusive today. I would have to be very persuasive about how I would return their money, and in multiples rather than just margins. So that would be a lot of explaining to do.

Which is why I built my first lab 30 years ago entirely from recycled materials.

There wasn't $10 million forthcoming back then either.

I'm glad I didn't wait.


Science and software both have the problem of dealing with complexity.

Imagine what software development would look like if we applied the principles of science:

We wouldn't start from scratch with a new software program. Instead, we would refer to code from the 1960s that has been proven to be a development-theoretic consensus and accepted by developers.

We would not speak in software life cycles, but would either "revolutionize" a piece of software or extend the existing standard.

We would allow ourselves to trust code from developers because some other developers say it will fit. However, we wouldn't know whether those "other developers" have the same financial interests.

And we would allow interested parties with enough resources to influence this process at will by publishing permutations of similar software and trusting each other.

We would have our own marketing agencies that are adept at injecting arbitrary constructs of their own into the "developer consensus".

And every developer who refuses to do this, who develops their own frameworks and pursues their own software ideas in a detached manner, would be branded as a liar, and their reputation destroyed with the help of the "developer consensus" spokespersons on the payroll of corporate interests.


I started reading your list thinking I'd get an essay on how ridiculous software development would look, only to get a list that more or less matches what a significant part of software development already looks like.

I guess Poe's law strikes again.


There is no "right" way in how to deal with complexity.

Yes, both disciplines have their own way of handling that subject. And each discipline is struggling to find a way to evolve.

For me the key performance indicator is that in software development, community-controlled open source now forms a crucial part of the basis of most software projects, whereas in science everything is much more corporate-infected.

In science knowledge is created where money is. And money comes from corporate interests.

In software corporate interests tried to control the market but in the end had to capitulate. Most servers are run based on open source, most software is developed with tools being open source. Of course, there is no standard. You can write an application in Vue, React, Angular, JSP or whatever.

Yes, in the end the software industry is insane. They have managed to get it accepted that every five years the old software is scrapped and basically the same software is developed again.

But letting old software die and creating a new version with new ideas in mind is much closer to the way nature is doing things.

And to be honest - the only one who really can deal with complexity is nature hands down.


This is of course not a "fix" for science (I don't believe such a universal fix actually exists), but a method to highly increase the effectiveness of the scientific process:

Let the scientists simply do science or teaching (which is actually a method to educate the next generation of scientists) instead of having them handle an insane amount of academic bureaucracy in their work time.


Right - novel idea: how about academics and post docs are paid a fair salary for their work. You could require that post docs need a position with a PI to be paid and that everyone is assessed on what they achieve not what they say they're going to achieve. Post docs can follow the PIs that interest them until they make tenure.

The only question then is around equipment, which, though not trivial, is at least a smaller cost than salaries.


perhaps we should scale back government subsidies of mediocrity and remove the bureaucratic bloat that is ill-informed, ill-equipped and ill-incentivized to determine what is the "right amount" and "right goals" of basic research.

despite popular beliefs to the contrary, private citizen donors and industry do indeed understand the value of basic research, and may even be better positioned to value it properly as a whole, despite their private interests on an individual basis.


Maybe make academia not have a reputation for burning people out?


Tbh a lot of academics burn themselves out. The pressure is real, don’t get me wrong, but a lot of fellow academics I’ve come across are people who have no clue what a work/life balance is. The pressure is internal as much as it is from “academia”.


What if we separate collecting data from analyzing data?


This sounds like the "waterfall" model of project management. I could see it working in some areas and not others. In fact, a friend of mine wrote a thesis on a re-analysis of an existing data set from one of the big physics experiments, looking for a new effect that he had proposed based on theoretical work.

But I come from a background in small-lab experimental physics, and my spouse from synthetic chemistry. In both cases, a model of planning, followed by execution, followed by analysis, doesn't work, for a number of reasons. Often, an experiment fails, over and over again, until the design and operating conditions are refined to the point where the data begin to make sense. In my experiment, the equipment didn't even exist until I built it. And in experiments such as mine and my spouse's, the researcher (grad students) are also developing and refining their own abilities as they progress. I was my own electrical, mechanical, and software engineer. Sometimes, preliminary results change the direction of the project.

An additional issue is that experiments are rarely documented well enough to hand the data off to another team to analyze, without significant back-and-forth.

What little I understand about the "agile" model seems more applicable to this kind of science.


Very different operating modes are required depending on how many people you're trying to coordinate. The analogy to software development is pretty clear - sometimes you're working on a personal project and zero documentation is tolerable, but sometimes your work is going to be translated into 200 languages and distributed to a hundred million users the day after it ships, and it needs ten times as many lines of tests as lines of code.

It seems to me that our model of "collect the data and analyze it yourself" is a sort of "ten engineer startup" scale process. Now that many fields of science have four or five digits of PhDs collaborating between countries, there's an increasing need for specialization, specifically in creating reusable data. It'll make us less efficient on a small scale, but creating any artifact at all that can be reliably used by ten thousand people is much higher leverage than a fully packaged data+conclusion that's even odds to be nothing but noise.


This is true in some disciplines, but largely inapplicable in others. A great idea, but I'm not sure it would work outside of physics or chemistry. I mean imagine trying to apply that to medical research. But it's a novel idea, so thanks for sharing.


and the collected data should be always available


I feel like some suggestions try to fix the problem without changing much. We could instead change the roots of the problem. Education and access.

- Making public funded research publicly available (since it was paid with public funds): papers, etc. for anybody to access, legally, for free.

- Lowering the cost of textbooks.

With more people having access to the materials we might realize there are a lot of latent geniuses out there in need of a chance. The easier to get access to those papers and materials the easier it is to double check the results, too, and to cross-pollinate between fields.


DOE [1], NASA [2], NSF [3] already require most research papers to be made publicly available.

---

[1] https://www.osti.gov/

[2] https://www.ncbi.nlm.nih.gov/pmc/funder/nasa/

[3] https://par.nsf.gov/


That still leaves most papers in expensive journals or available via institutional access only.

(PubMed, arXiv and biorXiv help a bit with that, but it's still not enough.)


> That is the kind of thinking that we need to change. Instead, NIH should have considered selecting a subset of investigators and applying a cap to them, and then compared results a decade into the future with those that were left to accumulate more traditional funding.

If you are one of the researchers who is in the experimental group with lifetime capped funding (or at least a decade of capped funding, until the study is up), your career is going to be really impacted, isn't it?

I can't believe the OP suggested this with a straight face, it's obvious why it wouldn't fly, and why NIH was considering only doing it universally, to at least treat all researchers equally, right?

But yes, I understand the argument for how an experimental approach would help us determine the best funding models... although it's not really clear to me how this experiment works. Even assuming that we know how to measure which group did "better" after 10 years... wait, what are you expecting to see in the comparison between the research-capped group vs not in determining if it worked? I don't actually understand this experiment.

Anyway, regardless, it's not a mystery why it's not going to happen, right? Experimenting with researchers careers.


> If you are one of the researchers who is in the experimental group with lifetime capped funding ... your career is going to be really impacted

If you believe, as I do, that a functional "science" is the most important force in making the future better than the past, would not this be a worthy sacrifice? In terms of its impact on all of humanity, now and forevermore. I would also think that true scientists would at least be the most likely to be sympathetic to this rationale.


Do you know many employed academic scientists?

Why don't you ask them if they would be willing to accept a cap on their salary, to be excluded from the most prestigious and well-paying institutions, and to do less research than they otherwise would, for a period of ten years, because they will be participating in an experiment which, if it was designed and executed well, might result in findings that might help improve research funding.

Perhaps very few actually practicing scientists are "true scientists".


I think it is a totally impracticable solution. I’m asking in more of a hypothetical sense. As another reply points out, scientists would just find a place where funding was available.


Or they'd move overseas where they can get more funding.


If you offered a floor as well as a cap, I suspect a lot of researchers would be willing to take that deal.


Uncritical funding of projects submitted by superstars seems to be one significant problem in funding that overly reinforces the status quo (like other rich-get-richer phenomena). In some fields, only a small fraction of applications (8-15%) are funded and much of it goes to "superstars". I would propose that the bottom 50% of applications be removed by review and the rest be funded at random. In order to reduce spam applications, it would be necessary to adjust the probability of funding by the total amount of money applied for over the previous 24 months by a given PI.

To encourage cross-disciplinary research, I would award the top 1% of projects non-randomly but only if the PI has not been previously funded by the granting agency.
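
Here is a minimal sketch of what that selection rule could look like; the field names, the 1/(1 + x) discount on recent applicants, and the budget handling are my own assumptions for illustration, not part of any agency's actual process:

    # Minimal sketch of the proposed review-then-lottery scheme.
    # Field names, the weighting formula, and budget handling are assumptions.
    import random

    def run_lottery(proposals, budget):
        # 1. Review removes the bottom half by score.
        ranked = sorted(proposals, key=lambda p: p["review_score"], reverse=True)
        pool = ranked[: len(ranked) // 2]

        # 2. The top 1% is funded outright, but only for PIs new to this agency.
        n_top = max(1, len(ranked) // 100)
        funded = [p for p in ranked[:n_top] if not p["previously_funded_here"]]
        pool = [p for p in pool if p not in funded]

        # 3. The rest are drawn at random, with odds discounted by how much
        #    money the PI has applied for in the previous 24 months (anti-spam).
        while pool and sum(p["amount"] for p in funded) < budget:
            weights = [1.0 / (1.0 + p["applied_last_24_months"] / 1e6) for p in pool]
            pick = random.choices(pool, weights=weights, k=1)[0]
            funded.append(pick)
            pool.remove(pick)
        return funded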


We have the answer. Peer review should include a separate team replicating the results (following the methods, without knowing the results). Those that are replicated go into a higher acceptability tier. Those that are not get published in another tier. If replication is not possible (for instance in the case of the LHC), the paper should go through 3 independent and simultaneous peer reviews.

The problem is not enough “gatekeeping”. Accepting a publication should be more stringent.


Okay, sounds reasonable. Except that peer review is a volunteer activity. Who is going to pay for all of these replication studies? Where does that money come from? If we can’t increase funding budgets, which are already extremely paltry, you’ve effectively cut research output in half. Maybe that’s a win, maybe not.

This reminds me of the welfare debate. On one hand, we could erect a giant bureaucracy to take a fine tooth comb over every application to make sure only the “worthy” get aid, which creates holes through which needy people fall. Or we can accept that driving fraud to 0 is impossible and get rid of the bureaucracy, then put the savings toward more aid. Sure some unworthy people will get it, but others who are worthy (and otherwise wouldn't have) will, and that’s more important.


If it’s standard practice that a publication involves replication, it becomes a standard part of funding. Will we have fewer studies? Maybe. Will they be much higher quality? Yes.

This is not like welfare. We are trying to enact a stringent gatekeeper so studies are completely reliable. The analogy doesn’t work here.


I've often said that grants should include funding for at least two independent reproductions. Also, journals should prioritize publication of negative results on par with positive results.

Trial pre-registration and open science are good first steps though.


One problem is that finance and big business siphon off bright graduates in search of benjamins, and all you have left is academics looking for grants.


In my observation, the graduates who go to finance and big business are a different breed of people than those who would love to stay in science.

I don't think the typical person who deeply loves science becomes happy in finance or big business. On the other hand, the people who go to the latter don't have the deep love for research that is necessary to bear an academic career.

Also, in my experience (I know some examples), the typical hiring processes of finance and big business weed out the bright people who have too much of a love for science.

So, I don't think what you mentioned is a problem.


Although it may help us locate underlying weaknesses in the scientific method, this level of self-consciousness displayed in the public eye is hurting society, ignoring the value of rigor and letting nutjobs get away with their ridiculous claims for a quick buck.


I disagree 100%. Science journalism and the unreproducible results of the last few decades are what have hurt science's credibility. These public conversations about science's failings and how to fix it are the only thing that can restore its reputation.


From my perspective as a scientist, a lot of the discussions here about "how to fix science" and "what's wrong with science" suffer from the same problem as science journalism; namely, people who are not experts in a subject trying to act as experts for others who are even less informed. For instance, the author of this TFA is a blogger and a journalist on "various topics". Don't you imagine he suffers the same problem as science journalists, which you call out as hurting science's credibility?


Science journalists have hurt science's credibility not only because they reported poorly on preliminary results, but because they reported on results before they survived replication. As we know from the replication crisis, well over 50% of positive results averaged across all scientific disciplines fail replication. This has led to people seeing articles claiming one thing is true, then seeing an article claiming the contrary the very next week. This clearly erodes public trust in science over time, irrespective of the quality of reporting itself. You can only partly solve this by improving the scientific process.

While expertise in science is a valuable perspective to have when discussing the philosophy of science and how to improve the public's trust in science, it's not strictly necessary. We're reimagining how to fund science to achieve multiple objectives, not just scientific objectives. For instance, there's a misleading impression that scientists are only after grant money and that biases them. I say "misleading" and not "wrong", because I think you know the skewed incentives in this process all too well, and this has resulted in some kinds of bias.

Some of the proposals discussed here, like grant lotteries, would entirely eliminate that argument against scientists while also solving a few other problems with the grant process itself. I expect many scientists haven't even considered the public view of their incentives when evaluating lottery grants though, and instead focus on whether it's effective in funding good science. Public trust in science is a serious issue though, so if a slightly suboptimal funding process can make scientists' motives unassailable, that could be a justifiable trade-off.


Congratulations on being super strict, but this doesn't address half the problems. So you can only disagree 50%.


What is there to "fix?"

The system is working as intended in many ways. People just aren't aware who the meaningful stakeholders are. It's not the PhD, it's not the taxpayer, and it certainly isn't "science."


This article could use some peer review. Not the author's peers, who appear to be "science pundits," but from people with experience in managing research portfolios.


How to fix science? Outside of research directly tied to government activities like national defense, stop all government funding. Consequently, any funding of science will end up coming from people who have skin in the outcome, grounding the entire endeavor.


Corporate sponsors of research solicit favorably selective scientific literature all the time. Too often it's bad sample sizes, bad samples, imprecise methods, lack of rigor, overenthusiastic conclusions not supported by data, etc.


The same way we didn't know if cigarettes were harmful, right?

There is just so much we don't know, every time someone is profiting real nicely.

And so much we do know every time someone is profiting real nicely.


The answer to "how do we fix science?" could only be some committee or bureaucracy, and that will probably work about as well as getting a bunch of HR people together to decide "how do we fix Engineering?"

The author's bio says:

José Luis Ricón is a book reviewer and blogger on various topics including longevity and a roadmap for the future of science at Nintil. You can follow him on Twitter here.

This is certainly ad hominem but is there a reason to think Mr. Ricón has any insights, experience, or publications that make him worth listening to?


You're absolutely right.

I'd argue that to make science what it used to be, one would even have to cut funding.

Scientists used to be aristocrats interested in truth. These days they're just interested in grants.


Ha! Just yesterday, I was talking to a colleague in the humanities about the relative pleasures of working in a field where there were practically no grants to be sought.


> Scientists these days are just interested in grants.

Ladies and gentlemen, to add insult to injury, the greedy scientist myth strikes again.

There is not a better feeling in the world than to work for free for several years, six days a week (plus Sundays of course), with the weak promise of receiving a "turtle grant" someday. Maybe. (Tomorrow perhaps, the turtle is blocked somewhere. You will be paid in two years) and then being called "greedy".

Yes, there are people getting rich with the money for science. It just happens that they aren't the scientists. They are the politicians that keep the grants as their own personal loans. Science can't be fixed because it is the guarantor for politicians, who only need to get creative and put up impossible requirements year after year to be allowed to keep most of the money for themselves.

In 2010, under President Zapatero, the Spanish government first de-funded and then kept for itself 25% of the money committed for science. In 2015, under President Rajoy, the government kept 48% of the funds allocated for science, and in 2016, 62% of the promised money never reached any researcher. The money was used instead for, who knows... a new swimming pool for each minister maybe, or fixing accounting holes in other sectors... I wouldn't rule out cocaine parties either.

https://elpais.com/diario/2011/04/02/sociedad/1301695203_850...

https://elpais.com/elpais/2017/10/04/ciencia/1507133529_8680...


Oh, I absolutely think we see things the same way. It's not greedy, they just need to pay the bills and well... Truth might suffer in that environment


> Scientists used to be aristocrats interested in truth. These days they're just interested in grants.

I'm not sure if that's actually what you're advocating, but personally I'm very much against science only being the domain of rich people who can afford to do it as a hobby.


I don't strictly disagree with that, but I do think a significant amount of science (not all) should be led by rich people. Rich people aren't random; there is a natural selection in who becomes rich. Imperfect as that selection process is, I think it is at least better than the alternatives of popular vote or bureaucratic decision. I say this not being rich myself.

I also believe a significant amount of science should be democratically driven.


>Rich people aren't random, there is a natural selection in who becomes rich.

Generally it's the kids of rich parents.


That reminds me of the history of amateur and professional sports, including the Olympics, at the end of the 19th and beginning of the 20th century - aristocrats wanted to preserve and cultivate the "pure" sporting spirit and thus pushed for amateur sports and against payments to sportsmen, while regular people, to be able to play sports seriously, needed to make a living out of it.


People I know in academia complain about the same issues, publish or perish and all that. They even opine about formal economic models that predict that more funding/subsidies for science perversely leads to less output in total, and not just relative to the size of the additional investment. However, when you take all this as a reason to not take this industry as seriously as they would like, they get all offended.


> However, when you take all this as a reason to not take this industry as seriously as they would like, they get all offended.

Just because there exists valid criticism of a thing does not mean all criticism of it is valid: and in my experience, I mainly roll my eyes in exasperation when people criticize science for unfair reasons.

What the public thinks is broken in science is very often not what experts think is broken in science.


> Just because there exists valid criticism of a thing does not mean all criticism of it is valid

But in this case the valid criticism is just being fully acknowledged, rather than falsely extending the validity to all criticism.


> in this case

In which case exactly? The comment I replied to was referring to unspecified instances.


How does what I say validate all or any criticism of science? I just don't think that people who work in this industry get to take all progress through human history and pin it on their own lapel.

Do you know the podcast Decoding the Gurus? It is funny at times in breaking down internet personas like Jordan Peterson and Eric Weinstein, but what annoys me about it is these "researchers" talking about fringe internet phenomena. Sure, you can find your kooks on the internet, but maybe they should point their sharp criticism at their own sometimes and draw conclusions from it, as in stop funding it.

What do you call people that disagree to agree on something ... a research community. For all the good science I know, I can equally find you the most asinine navel-gazing triviality repackaged and sold as groundbreaking progress. Please don't get me started on what experts in innovation management (ISO 56002:2019) can do to help valorise all this knowledge as they piss on us from the top floor of their ivory tower.


> How does what I say validate all or any criticism of science

I never suggested that it did.

You sound very frustrated, my friend.


Unrelenting these people pretending to be your friend, life so futile it borders on evil.


Sorry, what? English is my second language, and I don't know what that means.


[flagged]


Science can't tell you how to structure a society because science does not assume any values beyond truth. Science can only tell you what your values will do, not what values you should have.

For instance, science can't tell you not to enslave people, it can only tell you that there are little to no factual differences between different types of people, but that would not prevent a society from allowing slavery to pay off a debt, as but one example.


Science is built on trust; no one reproduces every single finding. And in the first place we measure the likelihood of some conclusion holding up in reality, so we have probabilistic measures of trust in our conclusions. It turns out that we can definitely build a society on this kind of trust. See my other responses to people responding to me (or datalisp.is, which is a work in progress) to piece together what I am describing.


It's not based on trust, but verification by replication. The closest description is perhaps the security adage, "trust, but verify".


So you trust that a conspiracy is less likely than people just replicating and getting the same results (within tolerance).

If you are unsure you can convince yourself by replicating and then others can trust easier because you're probably not part of a conspiracy.

But it's all based on trust in the end. Again, no _one_ replicates everything.


I don't trust that a conspiracy is less likely, I infer that due to logical arguments of parsimony (see Solomonoff induction), and then I verify that such a conspiracy didn't take place to the best of my ability.


people have tried this with various ideologies (Christianity, Islam, communism, Nazism, etc).

I am not aware of a case where structuring a whole society around a single belief system or value system has worked out well.

Maybe diversity is actually a good thing.


Diversity is a wonderful thing.

Science is based on ethical persuasion, where you make it clear why you believe something. The things you listed are not of that nature, they try to persuade you by appealing to authority or employing coercion of some sort.


What would you say the ethical persuasion of science is based upon?


Reality


How does reality show us what is ethical persuasion, or that ethical persuasion is objectively the correct approach? You need to convince me this chain of questioning won't lead to either:

-Pointing to a historical line of moral tradition and figures that hold a feeling of authoritativeness to us

-"Reality is reality" and an implicit appeal to yourself and your aura of "rationalistic candor" as the authority

Either way, I predict you'll ad-hom me for daring to ask the question... Because the most thoughtful option would be conceding that there's basically not a substantive enough basis for the virtue of ethical persuasion that makes it inherently part of science. And then, science isn't especially distinct in this way from any moral tradition that values ethical persuasion, and then your tribe loses some of its unique sense of popular appeal, and then society won't be as easily coerced into the shape you want it to be. And you understandably wouldn't want that.

You said in a cousin comment:

"It [science] is no more (or less) a religion than what we have now."

And I think this applies here also and flattens out the appeal to ethical persuasion into less of a rigid, dogmatic declaration of "reality". It's rather that you want to be ethically persuasive in your reporting and discourse of what we try to agree upon as scientific evidence, which I think is good of you as a matter of faith and instinct, but I do not accept as self-evidently inherent to the epistemological basis of science itself nor necessarily to reality if we try to approach this rationalistically and divorce ourselves from appeals to authority.

The only other direction I can see for you is to try to bum-rush past the Peripatetic Axiom, and say that something about "reality" is somehow bypassing the fallibility of our sense-data and instinctual human appeals to authority - and therefore absolutely rock-solid, utterly unmistakable to anyone sincere... And that would be like what we call "Divine Revelation" in the metaphysics business. Anything like it is a strictly religious dogma - anyone who questions that epistemology as some branches of philosophy do, becomes a heretic, someone who needs to just embrace the model and "get with the program" already.


So if you take the p2p model seriously (for example), then you have no way to force a peer to accept an update; there is no coercion when it comes to beliefs.

All I can do is be patient and wait for others to reach the same conclusions I have. Also, if people are interested (like you are), I can attempt to communicate my beliefs, but I have no right to impose them. The communication may even backfire into me accepting different beliefs.

What I am saying with my original comment is just that I don't believe in coercion as a way to coordinate society. I believe people would rather be persuaded (i.e. presented with information in as coherent a way as we know how to, and then given free choice w.r.t. their behaviour) than coerced (threatened with censorship). I do acknowledge that there is a fine line between the two, and I am not always sure how to draw it.
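If it helps, here is a minimal sketch of what I mean by "no way to force a peer to accept an update" (a toy Python illustration of the general p2p idea, not datalisp's actual design; the Peer class and its acceptance rule are invented for this example):

    # Each peer holds its own acceptance rule; a claim is only ever a proposal.
    class Peer:
        def __init__(self, name, accepts):
            self.name = name
            self.accepts = accepts   # the peer's own validation rule
            self.beliefs = set()

        def receive(self, claim):
            # Adoption is the peer's free choice; rejection carries no penalty.
            if self.accepts(claim):
                self.beliefs.add(claim)
                return True
            return False

    alice = Peer("alice", accepts=lambda c: c.endswith("(replicated)"))
    alice.receive("cold fusion works at room temperature")          # ignored, not coerced
    alice.receive("water boils at 100C at sea level (replicated)")  # adopted by choice
    print(alice.beliefs)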


I think this makes a ton of sense and reflects a lot of how I try to live.

I think many people will never accept science as the guiding light by which they live their lives. Indeed, I don't - I think of it as a very useful methodology for figuring out ways to reliably predict certain classes of behavior (a.k.a. "developing a consensus reality").

But, as JetAlone suggested, I see no way to scientifically conclude almost anything about correct personal values.

I may personally value truth, beauty, justice, empathy, and love, but it's clear to me that's not because they're scientifically validated as Good Things, nor because they've been scientifically shown to be pragmatically-optimal things to value.

If there's not an ironclad, undeniable path between the scientific method and optimal human values, then the only way to arrive at a culture where science is the shared value and ideology is coercion.

Hence my concerns above.


> If there's not an ironclad, undeniable path between the scientific method and optimal human values, then the only way to arrive at a culture where science is the shared value and ideology is coercion.

Let's leave the scientific method aside for now and focus on persuasive arguments as the way to change behavior (the scientific method is persuasive, but it is not necessarily the only way to persuade someone ethically).

> I see no way to scientifically conclude almost anything about correct personal values.

Well, the way I would attempt to persuade someone is by teaching them about the tragedy of the commons, i.e. when the security of the many is threatened by the incentives afforded to the individual. Usually we call this pollution or corruption, but I'll call it the tragedy of the commons.

Now, the persuasive argument would be that you would rather be persuaded than coerced (or that is my assumption here). But if you were to undermine the security of the whole to enrich yourself, then it follows that, if the whole is properly coordinated, it would balance out your gains by refusing to work with you, thereby eliminating the advantage you achieved with your actions. And if you are made aware of this preemptively, then your calculations change and you realize that the only way for you to gain an advantage is to contribute to the commons.

There are some assumptions in there, and the argument is not completely persuasive as I delivered it here. But I am trying to make a scientific experiment of sorts by building a system on these principles and seeing if the resulting economics can become more legitimate than the current system without being founded on coercion (i.e. whether people will willingly move to this new economic system because they perceive their interests as better aligned with this method of assessment) ... you can check out datalisp.is. I was recently on the street and I am still flat broke (because I refuse to go against my morals - all part of the experiment ;), so this project has been crawling along. Now my situation is on an upswing, so I am hoping to make a lot of progress in the coming months.
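As a rough illustration of the incentive argument above (toy numbers of my own, not a model of datalisp or of any real economy): if a coordinated group responds to a defection by refusing to work with the defector, the one-time gain from exploiting the commons can be smaller than the cooperation payoff it forfeits.

    ROUNDS = 10
    COOPERATE_PAYOFF = 1.0   # per round, while others still work with you
    DEFECT_BONUS = 3.0       # one-time gain from exploiting the commons

    def total_payoff(defect_round=None):
        total, excluded = 0.0, False
        for r in range(ROUNDS):
            if excluded:
                continue                 # the group refuses to work with you
            if r == defect_round:
                total += DEFECT_BONUS
                excluded = True          # coordinated response to the defection
            else:
                total += COOPERATE_PAYOFF
        return total

    print("always cooperate:", total_payoff())    # 10.0
    print("defect in round 3:", total_payoff(3))  # 6.0 -- defection doesn't pay here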


I've been left out in the cold a small handful of times. Glad to hear you're off the streets now.


Terence McKenna's take on it:

https://youtu.be/Q-J09gk0mJk


Thank you for sharing this. It's an interesting thought and I found it helpful.


Science is not a belief system in the same way that religions and political ideologies are belief systems.

Science reigns supreme in epistemology.


The scientific method is a powerful tool for constructing what might be called "consensus reality".

Given its total inability to prove positive statements (no scientific "law" is ever proven correct, only "not observed to be wrong so far"), and thus its general irrelevance to mathematics, I would recommend some caution about declaring it the ultimate epistemological authority.
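To spell out that asymmetry in plain propositional logic (a standard gloss, added here for clarity, not anything specific to this thread): if a law H predicts an observation O, a failed observation refutes H by modus tollens, while a successful observation does not establish H, since that inference would be affirming the consequent.

    (H \Rightarrow O) \land \lnot O \;\vdash\; \lnot H    % modus tollens: valid, so a law can be falsified
    (H \Rightarrow O) \land O \;\nvdash\; H               % affirming the consequent: invalid, so it is never proven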


> Given its total inability to prove positive statements

This is recognized in the natural sciences, but not in the pure sciences. Considering the context, I thought it was clear that I was not talking about pure sciences, and that I was rather referring to the natural or social sciences.

There is no source of knowledge, method or tool that surpasses science in epistemological authority. That does not mean it gets us to a total and absolute truth, it just means it's the least-bad way possible of knowing things.


> There is no source of knowledge, method or tool that surpasses science in epistemological authority. That does not mean it gets us to a total and absolute truth, it just means it's the least-bad way possible of knowing things.

What do you base this statement on? How do you know science is the strongest possible way of knowing?

What do you mean by "science"? I may be misunderstanding you due to a divergence in our understandings of the word.


I’m not using any fancy or special definition; I use the word “science” much like it is used here: https://en.wikipedia.org/wiki/Science


So science is a method of constructing falsifiable hypotheses about how to predict phenomena reliably?

Any response to my other questions?


epgui doesn't know what he's talking about. Science in its purest form can only falsify things. Absolutely nothing can be proven correct. There is no "pure" science. That's garbage.

What's going on is that epgui likely mixed up logic, math, and science. In math and logic, things can be proven, because math and logic are games where you make up axioms and prove theorems... but in science and reality, nothing can be proven.

Basically, by posting that Wikipedia article he proved you right and demonstrated that he doesn't know what science truly is.


English is my second language, and I believe the correct term for what I called “pure science” is “formal science”. In French, it’s common to talk of “sciences pures” in contrast to “sciences appliquées”, so it’s possible this is where that came from. Thank you for pointing this out!

I do have 12 years of postsecondary education in the applied/natural sciences, so I think that first sentence of yours is perhaps a little exaggerated.


In English, "science" exclusively refers to science as Wikipedia defines it.

"Formal science" is a rarely used term. In fact, much of the (English) academic world doesn't consider logic, math, or computer science to be actual sciences. The term is basically unheard of. You may find some people who use it, but most people don't know about it.

The reason is simple: the nature of what science is, is not discussed by scientists. It is more discussed by philosophers or French people.

If you weren't suffering from a language issue, I would indeed be 100% correct that you don't know what you're talking about; but given the language barrier, and your claim that French academics in common parlance demarcate a difference between formal and applied sciences, it makes sense that you could make this mistake.


I think you're quite right, but I am curious: is there a reason communism is the only ideology you didn't capitalize in your list?


I'm honestly not sure why I didn't. I guess subconscious habit.


Fair enough. I know that some journalists on the left have openly made a conscious point of capitalizing the "black" in "black people" but not the "white" in "white people" because they just know that will do an amazing job at improving race relations. So I was wondering if this was something similar.

Edits: little spelling and punctuation here and there.


This sounds like a religion, not a tool. Science is the latter.


It is no more (or less) a religion than what we have now.


It certainly doesn't mix well with capitalism.



