The article reminded me of Robert Axelrod's excellent "The Evolution of Cooperation." Axelrod uses similar computer modeling to create a tournament that helps tease out how cooperative strategies emerge from primarily self-interested behavior. It is one of the best things I have ever read in my life, and I'd recommend that book to anyone.
The real issue with scientific publishing is that there is simply no penalty for publishing shoddy research. I know several academics who made quite a big name for themselves on research that was later partially or fully retracted. No one cared about that; there was no real reputational damage done. To tackle poor science, such "poor" scientific inquiry should be "punished" in some way. Similarly, it is terrible for the advancement of science that only novel or significant results get published -- there should be a way for researchers to benefit from publishing well-designed research which simply did not yield interesting results.
How to do that? I think Axelrod's tournament provides part of an answer. As in his examples, the individual incentives align to yield a pretty poor outcome for the members of his population (he runs an iterated prisoner's dilemma game). However, with the iteration parameters set up correctly, a cooperative strategy slowly becomes the evolutionarily stable strategy.
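The flavor of Axelrod's result is easy to reproduce. Below is a hypothetical minimal sketch (standard payoff values assumed, only two strategies, not his actual tournament code) in which tit-for-tat loses its individual match against always-defect yet wins the tournament overall:

```python
import itertools

# Standard prisoner's dilemma payoffs: mutual cooperation = 3 each,
# mutual defection = 1 each, lone defector = 5, lone cooperator = 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_hist, their_hist):
    return "D"

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's previous move.
    return their_hist[-1] if their_hist else "C"

def play(a, b, rounds=200):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb)
        sa += pa; sb += pb
    return sa, sb

strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}
totals = {name: 0 for name in strategies}
# Round-robin: every pair plays once...
for (na, a), (nb, b) in itertools.combinations(strategies.items(), 2):
    sa, sb = play(a, b)
    totals[na] += sa; totals[nb] += sb
# ...and each strategy also plays a copy of itself, as Axelrod did.
for name, s in strategies.items():
    sa, _ = play(s, s)
    totals[name] += sa
print(totals)  # tit_for_tat: 799, always_defect: 404
```

Always-defect edges out tit-for-tat head-to-head (204 to 199), but tit-for-tat's cooperation with itself makes it the overall winner, which is exactly the dynamic Axelrod observed.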
I can see how this could also be the case for academics. There is no law from up above that dictates that "number of papers published" is the ultimate metric of success. There is a culture, and processes, and institutions which have led that to be a leading indicator of academic success. If there were real motivation and impetus to change this, there is no reason other metrics (and processes) could not emerge that would place much higher value on scientific integrity and thoroughness.
> I know several academics who made quite a big name for themselves on research that was later partially or fully retracted. No one cared about that; there was no real reputational damage done.
I have heard a suggestion that PhD eligibility should require a replication. This would not only help validate (or invalidate) studies but would also, over time, reinforce the importance of replication culturally.
Yeah, because it is really all the overworked and underpaid graduate students' fault... I have never met a single graduate student who rushed to publish shoddy work on his/her own. However, I have watched time and again while young, impressionable graduate students are slowly but surely guided to publish shoddy work by overworked and desperate professors who feel they have no choice but to keep getting "academic currency" (e.g. publications).
I think GP is saying that you should have to replicate another's work, not that your dissertation must be replicated. The latter would tack on an open-ended wait entirely outside the student's control.
I think we should make it more precise: in order to reach <stage>, you should have to replicate another's work that has not been replicated more than three times as of 6 months ago and produce either a record of a successful replication or a bug report detailing a failed replication.
As a best practice, a lab where few of the incoming grad students can produce a definitive (either successful or failed) replication should be checked on by the university to see if they've just got a terrible working environment.
This would be great training, too. Not only would grad students learn how to conduct experiments in their field, they would also learn the value of a good Methods section in a paper.
I don't know if making replication a "grad student chore" that you need to get through is a good way to remove the stigma behind "wasting" time replicating other people's scientific results instead of building your own. Perhaps we should just admit that replicating scientific results is just as much science as creating original work is, and grant degrees to students who manage to replicate a lot of work and publish it in a rigorous analysis.
"universities and funding agencies [should] ... stop rewarding researchers who publish copiously"
You have to look at the media as well. When was the last time you read an article about a failed experiment, i.e. a hypothesis that was DISPROVED? There's rarely any coverage of this, yet it is an important aspect of science[1], and it ties into what the OP talks about.
If we're going to improve how science is done, this is an equally important area to focus on because it's the current bias for positive results that plays a part in driving labs to produce lots of papers.
Agreed. It could be as simple as improving how negative results are reported. Mythbusters was all about disproving theories and the general public loves it.
(Yes, most disproved hypotheses don't involve explosions, but you can still make the reports interesting, because everything that is not true has broad implications)
Well, they do when there are common myths that get disproved. Like that guy who cracked his knuckles on only one hand for nearly all of his life. But yeah, I get your point.
That's the only time such things have general interest. Negative results are very important to people directly involved in studying something, but what general interest (or value) is there in a story like "chemical xyz123 does not work as an effective catalyst for the reaction between abc123 and def678" for anyone not directly researching those things?
There is no interest to the general public. But that doesn't mean scientists can't carve out their own online community, like Wikipedia, and start to categorize all of this information. For them it would probably be what Stack Overflow is to programmers.
I would say it is less that poor methods are rewarded, and more that the investment in proper methods is expensive, and thus penalized.
That is, nobody is particularly looking for people doing poor work and handing out rewards. However, the "proper" methodology that we want takes time, which is expensive, and there are already plenty of things eating at research budgets out there.
I do think this can be made better. But I see no reason to think it is just us chiding people for not doing better.
> That is, nobody is particularly looking for people doing poor work and handing out rewards.
I'd challenge that. If you look for people with high impact factors you are asking for people doing poor science. If you only publish "significant" results and refuse to publish negative results you are asking for bad science.
There are structural problems that lead to bad science, and often it's not about costs.
I argue that you are leaving a real expense out of the model: in this case, the cost of not publishing significant findings.
You could try to penalize bad faith actors. You could also try to stop putting undue pressure on good faith actors.
It isn't just spending money, but also the opportunity cost of throwaway work.
As it is we have what can be modeled akin to baseball where most at bats are failures, but not even swinging is frowned upon and eventually leads to you not being sent to bat.
Given the plan to decimate the economy of America's 4th-largest and most racially diverse city (Houston, TX), I am quite keen to know whether these issues also affect climate science.
Don't get me wrong, I'm still pretty convinced that anthropogenic global warming is happening. But if there is a chance that it isn't, and we can avoid all of the murders and suicides that come with throwing large numbers of people in one area out of work, we should investigate the possibility.
These issues affect all science. That's not an argument for ignoring science when it's politically convenient. (At least not any more than you already were based on the results. Whatever your all-in number for how often science is right or wrong is, that number already incorporates all these issues).
If anything I would expect standards to be much higher in climate science than less controversial areas. There are a lot of people who would stand to gain (or, equivalently, avoid losing) huge amounts of money if the climate science was wrong, they will be going over papers with a fine-tooth comb in a way that doesn't happen to most papers.
> ignoring science when it's politically convenient
And I'm not making an argument for it. I'm making the argument that "getting this wrong is personally costly for a lot of people, therefore we should be examining these results with a fine-toothed comb before basing massive social change on them."
But you're right, there would already be monetary incentive to do that.
It makes no difference whether you portray it as "poor methods are rewarded" or "good methods are penalized", either way the incentives are for poor methods to win out over good ones.
It actually makes a huge difference. Raising kids, for instance, it isn't enough to just reward good behavior. You have to actively and fairly penalize bad behavior. And if you are setting up a losing game for your kids, expect bad behavior.
Which is what most of this comes down to. For most participants, they are in a losing game. The absolute best they can do is get through with their project and move to the next one. About the worst they can do is spend more time on what they are currently doing.
So, I don't want my comment to be a "screw this article, they got it wrong." No, on the contrary, this is a discussion that needs to be had. I personally think it is not enough to just look at the rewards people set themselves up for with the bad behavior. You have to look at the penalties they face for "good" behavior.
This is not new. A few years ago the lament was that we don't teach how to fail. That is all I am echoing. Failure is heavily punished. Maybe it shouldn't be.
The article found that even strong punishments for failure were not enough in their toy model. So it isn't as simple as just punishing discovered bad behavior.
I'm reminded of an economics article that I read on cheating in sports. It turns out that no amount of punishment of cheaters would be enough to stop cheating in bicycling. What would be required is that if ANY bicyclist is caught cheating, EVERYONE who had worked with him who hadn't reported it would have to be punished. Don't just punish the cheaters, punish those in a position to have conceivably known who didn't report it.
The scientific equivalent would be to punish not just bad research articles, but everyone who went through the lab who could have raised the flag and didn't.
I'm pretty sure that they didn't try modeling that. It would have been interesting to know what that would have shown in their model.
Don't just punish the cheaters, punish those in a position to have *conceivably known* who didn't report it.
There's a significant difference between punishing those who did know about it (and didn't report it) and those who might have known about it (and didn't report it).
The second sounds a lot like "collective punishment", which is prohibited by the Fourth Geneva Convention, and the Human Rights Act. [1]
My question is how do you model the ridiculous punishment that they already face for not getting done with their work?
In the sports world, there is already a huge punishment for not winning. To the point that most punishments of cheating are actually just on par with never having competed at that level in the first place.
That is, don't just try to dream up punishments for people that act in negative ways. Try to come up with scenarios that don't require superhuman levels of achievement just to play. Make sure failure is OK at all levels. As it is, many look at the punishment of getting caught acting negatively, and the reality of failing, and conclude somewhat rationally, that cheating might be worth it.
I don't think such a system would work out, or at least it would breed a huge amount of resentment towards it. Vaguely reminds me of Legalist philosophy: it attempted to "abolish penalties by the means of penalties", that is, through a liberal use of (capital) punishment not only for criminals but for their whole families, so that no one would dare commit a crime.
> You have to actively and fairly penalize bad behavior
> Failure is heavily punished. Maybe it shouldn't be.
Fair is the keyword, but I didn't understand the last quoted part until I read VikingCoder's comment. Edit: and now there are sibling comments regarding that point.
I disagree completely. Acknowledging that "good methods are penalised" would challenge the line generally taken by the Economist in its socio-political pieces: that if classical liberal economic models were applied to everything, everything would work better.
What's being described here is the consequence of the application of "business thinking" and market-driven funding of science, with all the unintended consequences of trying to apply quarterly metrics to pure research. Suggesting the problem is "not enough punishment" avoids having to ask philosophically hard questions.
I believe we are talking past each other. I specifically do not want more punishment. I want cheaper failures. I want it such that if you spend ten years chasing a dead end, you aren't a failure. You can still have a successful and happy family where your kids have a fair shot, as well.
"... his finding also suggested some of the papers were actually reporting false positives, in other words noise that looked like data. He urged researchers to boost the power of their studies by increasing the number of subjects in their experiments."
Conclusions based on small sample sizes should be seen as poor technique, an indicator of potential bias, and a signal that a lot more validation is required before acceptance.
The easiest way to address this is simply to move the p-value threshold required for significant results: instead of p < 0.05, use p < 0.001 or even p < 0.0000003 (3 x 10^-7), as is needed in particle physics, though the 5-sigma threshold is a bit extreme. Lowering the p-value threshold significantly increases the cost of hunting for positives due to random chance, at the expense of needing more sensitive experiments.
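As a rough back-of-the-envelope illustration (the test counts, effect size, and power below are assumed numbers, not from the article), here is what the threshold buys and what it costs for a simple two-group z-test:

```python
from statistics import NormalDist

# Assume 1000 tested hypotheses that are all truly null: the expected
# number of false positives is just alpha * N.
n_null_tests = 1000
expected_fp = {alpha: alpha * n_null_tests for alpha in (0.05, 0.001, 3e-7)}
for alpha, fp in expected_fp.items():
    print(f"alpha={alpha:g}: ~{fp:g} false positives expected")

# The cost side: approximate per-group sample size for a two-sided
# z-test at 80% power and a (made-up) standardized effect size of 0.3.
def n_per_group(alpha, effect=0.3, power=0.8):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # quantile for desired power
    return 2 * ((z_a + z_b) / effect) ** 2

for alpha in (0.05, 0.001, 3e-7):
    print(f"alpha={alpha:g}: ~{n_per_group(alpha):.0f} subjects per group")
```

Under these assumptions the expected false positives drop from ~50 to ~1 to essentially zero, while the required sample size per group roughly quadruples from ~174 to ~790: the "more sensitive experiments" cost in concrete terms.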
Nope. It just means that you have to come up with a shittier null hypothesis. Of course, it still has to get past peer review, but this can often be accomplished by complication and obfuscation, which have the side benefit of giving you plausible deniability in case you are discovered. Sufficiently advanced cluelessness is indistinguishable from malice, and science, in its present form, rewards them both.
In a lot of fields, you just can't do this. We have to remember that a single article should not be taken as gospel, but more as a step in a direction, like a single log entry in a captain's journal. "We saw X happen Y many times in a 1000+Z step process with ABC cells in FGH media at P temperature at N phase of the moon. Our p value is being required by the journal, but is meaningless because we are just reporting that you should look out for this thing, if it exists at all"
My phrasing "seems to have been" reflects both my opinion, and the fact that it is far from certain.
That said, regardless of what happens with this specific theory, it is well worth reading Richard Feynman's remarks from 42 years ago in http://calteches.library.caltech.edu/51/2/CargoCult.htm. Until a field of research can demonstrate the kind of integrity that Feynman talks about there, nobody should believe what they claim. They can talk all that they want about how much work they did, the publication history, the supporting evidence - it is all BS until you have that.
Increasingly people call themselves "scientists" and call what they are doing "science" while lacking that fundamental foundation. The result is that it is becoming harder and harder to tell which claims about reality deserve provisional belief versus healthy skepticism.
Which is a mild way of saying that our education is poor, since it rewards status over competence.
Would parents sending their kids to Ivy League schools accept a kid from downtown, attending a public school, being given better recognition than theirs?
We let parents influence education, and let schools bend the «natural» competition in order to satisfy influential parents.
This cannot happen without regulations tying diplomas to jobs, without academies and authorities. The first winners in the bending of knowledge competitions all over the world are the kids of teachers.
There is a worldwide corruption of teachers, achieved without bribes, through social pressure.
It is my personal experience that bad science persists not only because it is rewarded but also because a large portion of scientists just don't do good work.
This is the one part of the article that is counterintuitive:
> Worryingly, poor methods still won—albeit more slowly. This was true in even the most punitive version of the model, in which labs received a penalty 100 times the value of the original “pay-off” for a result that failed to replicate, and replication rates were high (half of all results were subject to replication efforts).
How can bad results still confer a net reward on their producers with a penalty like that?
I went back and read the original description of the model; here is what I think is going on:
The average performance of a bad lab is worse than that of a good lab. However, a bad lab might get lucky and have none of its false positives subjected to replication. As a consequence, the top-performing labs tend to be bad labs that got lucky. The selection method heavily favors being the top performer, and thus the poor-but-lucky labs tend to win out, which is why they take over the population.
This casts doubt for me on their model, since fitness proportionate selection would probably have quite different results.
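To make the intuition concrete, here is a toy caricature (assumed payoffs, publication counts, and false-positive rates; explicitly not the paper's actual model) contrasting copy-the-top-performer selection with fitness-proportionate selection:

```python
import random

random.seed(1)

# "Bad" labs publish more results per generation, but more of it is
# wrong; each false positive is independently checked with probability
# p_rep, and a failed replication costs 100x the payoff of one result.
def score(is_bad, p_rep=0.5, penalty=100):
    n_results = 10 if is_bad else 4        # bad labs publish more
    fp_rate = 0.4 if is_bad else 0.05      # ...and more of it is wrong
    s = 0.0
    for _ in range(n_results):
        s += 1.0                           # pay-off for publishing
        if random.random() < fp_rate and random.random() < p_rep:
            s -= penalty                   # caught by a replication
    return s

def winner_take_all(labs):
    # Everyone copies the single top scorer this generation.
    scores = [(score(is_bad), is_bad) for is_bad in labs]
    best = max(scores)[1]
    return [best] * len(labs)

def proportionate(labs):
    # Reproduction weighted by (shifted, non-negative) score instead.
    scored = [(score(is_bad), is_bad) for is_bad in labs]
    lo = min(s for s, _ in scored)
    weights = [s - lo + 1e-9 for s, _ in scored]
    return random.choices([b for _, b in scored], weights, k=len(labs))

results = {}
for select in (winner_take_all, proportionate):
    labs = [i % 2 == 0 for i in range(100)]  # True = bad; half and half
    for _ in range(50):
        labs = select(labs)
    results[select.__name__] = sum(labs) / len(labs)
    print(select.__name__, "bad-lab share:", results[select.__name__])
```

With these made-up numbers a bad lab's average score is far worse than a good lab's, yet under winner-take-all selection the population typically converges on bad labs, because some bad lab almost always gets lucky and posts the single highest score. Proportionate selection weighs the whole distribution, which is the difference the parent comment is pointing at.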
There is another dimension to this problem; universities in developing countries are pushed to match what their colleagues in developed nations are publishing. In doing this professors where I am currently studying (Brazil) foist off research on undergrads and masters students.
The system gets even more ludicrous when it comes to translation and publication, but that is a tangential issue.
>> Ultimately, therefore, the way to end the proliferation of bad science is not to nag people to behave better, or even to encourage replication, but for universities and funding agencies to stop rewarding researchers who publish copiously over those who publish fewer, but perhaps higher-quality papers.
I don't see why being prolific should be punished. No, a better solution is to reward research that is replicated more than research that is not yet so.
This would be much easier to do than punishing anyone. It's hard to find a good reason to punish someone because they published something; after all, even very thorough research has a chance of being wrong, and let's not get into what "correct" means anyway. And it's a bad idea to discourage people from going out and trying things that have a very slim chance of working.
Rewarding replication might even result in some labs specialising in replication studies and that would be a net benefit for everyone, especially the researchers who would love to do replication but really don't have the resources to do it.
Yeah, but it's hard to get funding to do a rigorous replication of something that doesn't have an immediate commercial justification (for example legal liability or FDA requirements in drug trials). It's a big investment where the best case scenario is that you can be a little more confident in prior results. And if you can't replicate the results, it doesn't necessarily mean the original finding was wrong, it could just mean the original scientists were better than the replication scientists.
Also, among scientists who have invested the time to earn a Ph.D., there is a culture of wanting to break new ground and push humanity forward. There would have to be a strong incentive to motivate them to invest the time in replicating research for which they won't get the glory.
You would have to change the way science is funded to make research replication a required step in the modern scientific method.
He's not saying anything about punishing anyone, just about no longer rewarding quantity over quality. That's nicely compatible with rewarding replicable research, as replicability is a sign of quality.
Lying by omission seems to be widely accepted now as necessary and expected in order to achieve tenure, secure funding, etc. I find it abhorrent. If you go back and read experimental papers from the 1800s, before science became an industry, they are full of detailed methods (including where to purchase items), pitfalls encountered along the way, and potential detractors from the main result. Not so today. Today, it seems to be all about selling a result that may or may not be significant.
It's a society-wide problem that is caused by competition and status seeking. Pretty much everything is optimizing for "winning", whatever that means, instead of doing things correctly, conscientiously, and carefully. The latter will often make you "lose", and there's no system rewarding you for it besides an internal one. Then there's nothing left for people to do but try to "win", because resources are scarce. The more you optimize for "winning", the more nasty your methods get, because there's always something you can sacrifice, until nothing is left.
Another unseen side effect of this is that a lot of people see this, combined with the low returns from science, and decide to go do something else, because if they are going to put up with the garbage they might as well earn more money doing it. And some among them then realize that they can have a greater effect by generating billions of dollars and then using them on whatever they want, instead of wasting time in a lab and asking for scraps from the government.
Untangling this requires a complete mentality shift.
> However this is wholly about how competition and status seeking are channeled.
You can't truly channel competition anywhere because it's always routed to itself. Competition implies a provider of gifts and those who chase those gifts. The provider will always use faulty proxies to determine who to provide to, thus becoming disproportionately important to the process. Acquiring the gifts is always disproportionately more important than anything else. Status is one such gift, with no redeeming value.
Competition is always about the agent that is winning, and the provider that arbitrarily bestows prizes. That is where the focus lies by definition - the objective of a competitor is to eliminate all others, not to produce anything of value. Science should be about the results, and should bow to no provider.
> They are instincts, they come naturally.
A lot of bad things are caused by instincts that come naturally. That's actually specifically why I pointed those two out: they're natural, and they're extremely dangerous and we should always be aware of them creeping up and keep them in check.
Instead, we created whole institutions to worship them.
> Clearly in the present order they are not channeled in ways that lead to superior or interesting outcomes since mediocrity is everywhere.
The "mediocrity is everywhere" thinking is part of the problem, as it drives everyone to worry about how "mediocre" they look and rush after the best thing ever all the time, instead of calming down and doing the right things. Publishing a failing study is mediocre. Helping people in need is mediocre. Doing a little bit of fitness to keep yourself healthy is mediocre. Yet, that is exactly what we need more of.
Mediocrity will always be everywhere because it's a mathematical fact - most people are in the middle. So, yes, it's everywhere. That's tautological.
> Competition implies a provider of gifts and those who chase those gifts. The provider will always use faulty proxies to determine who to provide to, thus becoming disproportionately important to the process. Acquiring the gifts is always disproportionately more important than anything else.
I think we are on the same page with respect to the dangers.
I am reminded of the experiments in which rats are getting hits of dopamine. What is it called? Behavioral conditioning? Operant (Skinnerian) conditioning? Very simple, but very powerful.
And this I think is the specter that haunts Science/Education. Sheer force of collective habit maintains the status quo. There is lots of evidence for 'work' (even workaholism) but paradoxically diminishing results. I am reminded of those students who write down everything the lecturer says in class but don't have the time to understand what the lecturer is trying to get across.
> You can't truly channel competition anywhere because it's always routed to itself.
Here we sort of disagree. I agree that it is potentially an uncontrollable positive feedback loop but I think competition (and also our sense of fairness, demotism, voting, equality) can be channeled.
To use an analogy, these instincts we have are like Water or Electricity. Dangerous but controllable to great effect. The goal of good governance is to 'traffic shape' these forces into productive causes.
Competition in the wild is simply Hobbesian violence, one man against the other. It requires the halter of a market with its price system and exchanges to be funneled productively. There are huge problems of course but we at least know a halter actually exists.
What I am calling 'fairness' in its natural state is a powerful raw instinct to enforce conformity and uniformity among the tribe members. There is an obvious biological connection to these instincts, to our genetics, our genes in their voting blocks trying to spread their influence. This is known in biology as r selection, where the system is optimizing for volume (the overall hypothesis is called r/K selection).
In my opinion there exists no 'halter' (as of yet, but I hold out hope) for fairness. I do not believe a proper mechanism has yet been invented by humankind. Yes humans build institutions to control it, like standardized education, democracy, republics, but I am unconvinced as to their strength against a sudden surge in the tide of popular feeling. Sooner or later, like the rise of the Communists, brute strength wins out over elegant attempts to moderate, basically mob rule followed shortly by blood rule with tyrants. Year Zero. A Reset. It has happened countless times in history.
Democracy leads to entropy. Competition has the potential to take us to Moloch.
Basically you either need to choose your poison, or we need to come up with a better system. I currently err on the side of Moloch, because it is a slower death than the alternative.
> A lot of bad things are caused by instincts that come naturally. That's actually specifically why I pointed those two out: they're natural, and they're extremely dangerous and we should always be aware of them creeping up and keep them in check.
> Instead, we created whole institutions to worship them.
That's about the size of it, yes.
> The "mediocrity is everywhere" thinking is part of the problem, as it drives everyone to worry about how "mediocre" they look and rush after the best thing ever all the time, instead of calming down and doing the right things. Publishing a failing study is mediocre. Helping people in need is mediocre. Doing a little bit of fitness to keep yourself healthy is mediocre. Yet, that is exactly what we need more of.
> I think you were trying to say something else.
This is interesting, thank you. You're quite right that over-optimizing is a threat (Moloch) but I shall reformulate my complaint here.
Mediocrity is a threat (Entropy), but, and I feel this is important, if we have enough fields or subfields of exploration, avenues of research, then Mediocrity basically loses its meaning. The division of labour allows everybody to potentially be a winner.
It does not eliminate average results but it disperses them over a large area, which should mean net progress across the board is higher. Does that make sense?
> Mediocrity will always be everywhere because it's a mathematical fact - most people are in the middle. So, yes, it's everywhere. That's tautological.
We all know smart people achieving mediocre results (via Molochian competitions).
We all know not-as-smart people achieving, if not success in the sense of being on the first page of Google search results, then success in their local maximum.
In my opinion the second is in some sense smarter than the first because they are better adapted to their niche, they have a better foundation on which to ultimately progress.
I suppose I could summarize my feeling on the subject by saying I care more about increasing the whole population's IQ (or other metric) by 1 point, than about collecting the 'cream of the crop' and curating talent, because although the second initially appears most impressive, the truth is that the whole network benefits much more from the first.
I started responding to your post but after a while I found that we're not on the same page on so many things that you just seem to be stating as fact that I'm not sure if this is bridgeable. All of these things are worthy of a major discussion of their own.
> Sooner or later, like the rise of the Communists, brute strength wins out over elegant attempts to moderate, basically mob rule followed shortly by blood rule with tyrants. Year Zero. A Reset. It has happened countless times in history.
> Democracy leads to entropy. Competition has the potential to take us to Moloch.
> Mediocrity is a threat (Entropy).
> We all know smart people achieving mediocre results (via Molochian competitions). We all know not-as-smart people achieving, if not success in the sense of being on the first page of a top ten Google search results, achieving success in their local maximum.
These are major statements that you can't really just assume to be true or that the other person finds them true. And I don't agree with any of them. Not to mention the lack of context on highly vague terms like entropy.
> After a while I found that we're not on the same page on so many things that you just seem to be stating as fact
I try to make clear what my priors and assumptions are, and which portions I think are personal opinion.
I don't mind my assumptions being attacked, but I do insist that people "show their work".
How many internet discussions have you had with people who did not clearly delineate what their premise was?
That leads nowhere, because the premise has a tendency to evolve (ever more detailed refinement) when there is disagreement.
If you know what I think, you're not shadowboxing with a set of stereotypes.
> These are major statements that you can't really just assume to be true or that the other person finds them true.
Of course not, but I'm not making an argument for them here. I'm writing an internet comment, not a book.
> And I don't agree with any of them.
That is fine, although I don't really yet know your interpretation of what we see.
> Not to mention the lack of context on highly vague terms like entropy.
Here I must push back. Entropy is widely used across topic areas, but it is not a vague concept. Entropy is another word for randomness and disorder.
In the context of a society, you could say a Dark Age was less ordered than the previous age when the Roman empire ran the show. Typical hallmarks of anti-entropy effort include roads, aqueducts and cities. Ruined cities, disintegrating roads and falling bridges are results of entropy. The social and physical meanings of entropy are intertwined.
If that is not evident to you then you are correct to say our discussion is not bridgeable. If there is one thing I find disagreeable it is a relativism that posits all states are equally good. At that rate people will be flinging their brains into bins.
> A lot of this smells like neoreactionarism.
And what if it is? You shall find everything interesting happens at the periphery. Conservatism and liberalism are, after all, if nothing else, definitely repeated routines of thought. Hard to make good observations there. Of course, if there is nothing to fix then there is no need to!
> How many internet discussions have you had with people who did not clearly delineate what their premise was?
Too many. And I often have the same problem since my premises are often far from what is expected. I try to bridge it as I can, but limitations of the human language and text space and all that.
> Of course not, but I'm not making an argument for them here. I'm writing an internet comment, not a book.
But it seems like you're making an argument /from/ them. Your definition of mediocrity is different from mine (mine is defined as: a person who will by others be considered mediocre in some context, which often resolves to a person who has low-to-medium relative ability in a skill). I don't have a definition of "entropy" to begin with, because it's not really something I care about. Randomness and disorder is the official definition, yes. I can't splice "randomness and disorder" onto reality. Reality seems to actually tend in the opposite direction, at least on a planetary level (i.e., dust -> planets -> creatures on planets). Mediocrity, given these definitions, seems utterly orthogonal to entropy. A football player called "mediocre" by others because he fumbled during a random game, or a person who is moderately OK at tech support being considered "mediocre" because they're not great, to me do not at all contribute to entropy. So I can only imagine your use of the word "mediocre" is something else.
> In the context of a society, you could say a Dark Age was less ordered than the previous age when the Roman empire ran the show. Typical hallmarks of anti-entropy effort include roads, aqueducts and cities. Ruined cities, disintegrating roads and falling bridges are results of entropy. The social and physical meanings of entropy are intertwined.
My problem is that the definitions are too vague. As well as culturally biased. Rome is more ordered than mud huts. But is Rome more or less ordered than modern USA? Mongol clans? Soviet Russia? Czarist Russia? Maybe there are different flavors of orderliness? I'm not really sure. Not to mention, if you consider it on a global level, you have to sum up the results in some way, and then determine whether the entire Earth is more or less ordered than some other timeframe.
I think there's a difference between "everything is relative and all states are equally good" and "everything is not relative but I'm not convinced on what's more ordered and I don't trust your methods, either".
Nonetheless, I still fail to see the relevance of the football player being called mediocre or the tech supporter not doing too great of a job to the fall of Western civilization.
> We all know smart people achieving mediocre results (via Molochian competitions). We all know not-as-smart people achieving, if not success in the sense of being on the first page of a top ten Google search results, achieving success in their local maximum. In my opinion the second is in some sense smarter than the first because they are better adapted to their niche, they have a better foundation on which to ultimately progress.
From the earlier post. This to me sounds like an amalgamation of the just-world hypothesis and "nature is good". I don't correlate any specific outcomes to one's "smartness" since I don't believe in the former. I don't respect "niches" too much since I don't believe in the latter. I disagree with both, and THAT disagreement is fairly fundamental; it's very hard to find a bridge between the "nature is bad and the world is not fair" and "nature is good and the world is fair" groups, or even the other combinations.
> And what if it is? You shall find everything interesting happens at the periphery. Conservatism and liberalism are, after all, if nothing else, repeated routines of thought. It is hard to make good observations there. Of course, if there is nothing to fix then there is no need to!
Sure, let's say all the interesting ideas are there, but there is still a problem in assuming a specific vocabulary or axioms when talking to someone who is not already familiar. I have my own framework that's not conservatism, liberalism, or neoreactionism, but I try to keep it more or less under wraps or contextualize parts of it, because otherwise discussion gets very confusing. The whole line about communists and democracy and entropy and year 0 didn't make any sense to me until I considered neoreactionist thought. For other people I think a large portion of the post is likely just gibberish.
> Your definition of mediocrity is different from mine (mine is defined as: a person who will by others be considered mediocre in some context, which often resolves to a person who has low-to-medium relative ability in a skill).
That's a definition I'd accept. Perhaps we're not alien species after all ;-)
I definitely see that mediocrity is context dependent. Nobody is a superman excelling at everything. People like Sam Altman and Peter Thiel have rotten days, as do we all.
However, it seems there are pools of mediocrity in society. It is not a stand-alone individual phenomenon. You don't find many Nobel Prize winners in council estates in Britain. Or any kind of winner, apart from the Lottery.
While nobody can excel at everything, we should be a bit suspicious when we see a place where there is mediocrity in everything apart from alcoholism and gambling addiction. That would suggest either extreme assortative mating (highly unreasonable on the time scale of centuries) or that a group of people is being subdued systemically on the memetic level. What does one say when we see an entire area code suffering from what seems like depression or some existential crisis of purpose? The contrast with former glories is particularly sharp in the North of England, but if you've ever been to a similar region you get it. It is like a giant question mark in the landscape, unexplained. We should see a roughly random distribution of talents: there would be elites, yes, but there would also be a long tail. I don't see much of a long tail in present society in America or Europe, whereas I am convinced there once was.
These thoughts lead me to the belief that mediocrity is primarily caused by societal structure. This suggests that talent is allowed to be exposed sometimes and not others, which depending on how you look at it, is either a very dystopian view of society and/or a potentially very hopeful one.
And of course we are seeing these same 'pools' arising in Science and Education. Whatever the 'rot' is, it is surely spreading.
> But it seems like you're making an argument /from/ them.
I have a proselytizing streak in me. When I believe something is so, then others must know. However like yourself I am not a purist, I cherrypick my way through various lines of reasoning and often am of two or more contradictory minds on one topic. Currently I believe NRx's positions on Western society and its politics (The Cathedral) are useful insights with predictive power. I think also that every model has limits. The utility of Conservatism and Liberalism has run its course in the West and now it is time for something new or old again. In the Asian countries such as China this may not be true.
> Randomness and disorder is the official definition, yes. I can't splice "randomness and disorder" onto reality.
> Reality seems to actually tend in the opposite direction, at least on a planetary level (i.e., dust -> planets -> creatures on planets).
Let us stick to centuries or thousands of years at most!
I think depending on the scale of resolution you look at physical reality, you'll see different things, but this is another topic.
> My problem is that the definitions are too vague. As well as culturally biased. Rome is more ordered than mud huts. But is Rome more or less ordered than modern USA? Mongol clans? Soviet Russia? Czarist Russia?
I can easily imagine comparing them! There are absolutes, like average life expectancy, and then confounding factors, such as efficiency gains leaving fewer things to measure. As a brute estimate, you could multiply the average number of years of life by the number of people the empire or system can support. That by itself should give us a fair guess at complexity. Or we could look at density, since average city size should correlate with complexity.
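The brute estimate above (years of life multiplied by population supported) can be sketched in a few lines. The figures below are rough, commonly cited ballpark numbers, included purely to illustrate the arithmetic, not to settle the argument:

```python
# Back-of-the-envelope "complexity" score: average years of life multiplied
# by the population the system can support. All figures are rough ballpark
# estimates used purely for illustration.
systems = {
    "Roman Empire (peak)": {"life_expectancy": 30, "population": 60_000_000},
    "Modern USA":          {"life_expectancy": 79, "population": 330_000_000},
}

for name, s in systems.items():
    score = s["life_expectancy"] * s["population"]
    print(f"{name}: {score:.2e} person-years")
```

On these (very crude) numbers the modern system scores more than an order of magnitude higher, which is the kind of directional comparison the metric is meant to support.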
> Not to mention, if you consider it on a global level, you have to sum up the results in some way, and then determine whether the entire Earth is more or less ordered than some other timeframe
We already do this, really: GDP. I realize all these metrics will have flaws, but if you are consistent about using them you can obtain a fair idea of trends and won't miss unusual events, e.g. the Black Death. If GDP growth is negative from 2017 to 2117, projektir will be far from shocked if, armed with that information, he/she steps from the Time Machine and observes humankind.
> I think there's a difference between "everything is relative and all states are equally good" and "everything is not relative but I'm not convinced on what's more ordered and I don't trust your methods, either".
That is fine; the methods I just came up with a minute ago notwithstanding, there may be good objective metrics for orderliness.
> I still fail to see the relevance of the football player being called mediocre or the tech supporter not doing too great of a job to the fall of Western civilization.
Many functions in society are options or choices.
Items like the ability to travel over distances, to not be killed, to feed and water oneself, to provide for a family, these are not really options. If enough people can't accomplish those kinds of tasks then society does actually collapse.
Fundamentally, are you saying that if the symptoms of degeneration appear in a society, the people of that society just don't or can't be trusted to recognize them?
I mean we have records, letters from Rome and Egypt, and they seem to indicate a very acute sense of catastrophic decline.
> From the earlier post. This to me sounds like an amalgamation of the just-world hypothesis and "nature is good". I don't correlate any specific outcomes to one's "smartness" since I don't believe in the former. I don't respect "niches" too much since I don't believe in the latter. I disagree with both, and THAT disagreement is fairly fundamental; it's very hard to find a bridge between the "nature is bad and the world is not fair" and "nature is good and the world is fair" groups, or even the other combinations.
You don't believe some people's native intelligence is higher or lower than others?
You don't believe societal niches exist?
Perhaps this is a straight disagreement but maybe I just don't understand what you're saying.
> but there is still a problem in assuming a specific vocabulary or axioms when talking to someone who is not already familiar.
Usually true, but not in these circles, I find; after all, you understood what I was saying.
> The whole line about communists and democracy and entropy and year 0 didn't make any sense to me
Entropy I explained, Year Zero is a reference to the Khmer Rouge.
The only thing I can think to add here is that in NRx thought Democracy is taken as being Communism Lite, like how there exists a Diet Coke soda.
> For other people I think a large portion of the post is likely just gibberish.
That is partly by design. Journalists have short attention spans and suffer from buffer overflows. They then fall back on 'Fascists!', which not even they wholeheartedly believe.
Yes but why are scientists in the position of selling?
Why is that their position in whatever this negotiation is?
First I should explain I believe Science to be a special type of Education, and that by addressing the specific here on HN we are not attacking the general problem.
There is nothing complex about this subject. It is just bad governance leading to institutional weakness. Let us look at long term trends, since that is what ultimately matters.
In 1900, to a first approximation, there existed 1000 physicists on Earth. A century later that number is larger by a factor of (at least) 1000.
Yet I wouldn't claim physics has produced 10x the results, let alone 1000x. This requires explanation.
I am not ganging up on physics, the problem is across all the sciences.
This means one of two possibilities.
1. There exist diminishing returns even in the discovery of new ideas themselves. Otherwise we should be seeing an explosion of new fields of science: for every one field in 1900 we should see 100 new ones today, but we do not.
2. Science as an institution has gradually succumbed to some kind of rot from within.
I submit to you that it looks like the first, but it is actually the second proposition that is the truth.
Now I will come clean. It is democracy that has accomplished this, because public money must be tethered to some kind of measurable outcome, an easily understood metric, to justify future flows of monetary assistance.
The causality goes like this.
Public finances must be justified in the development of Education and Science. Bureaucrats are, in essence, farmers of students and scientists, in much the same way as with cash crops or Prussian forestry management. More units and more volume equal more success. Thus, the limits on the network of bureaucracy become artificial limits on the potential for growth.
Going 'wild' is not tolerated even if it would lead to more interesting outcomes (which is what we want!). It is seen as expensive when it is nothing of the sort.
The resultant output is the mass production of mediocrity.
The larger the apparatus, the more all-encompassing it becomes, until its standards and objectives become universal. Its gravity even influences the trajectory of systems outside of it, e.g. privately funded research, due to its incredible size (because those privately funded researchers were inculcated into conformity by a public system, or by a private system with a curriculum based on the public model).
All that I have said should be testable.
Suppose you have the following arrangement.
1. Students or scientists with an independent source of income. We may have a filter for motivation, but that's about all.
2. Educational assistance which is funded through present and former students/scientists.
3. There does not exist a standardized curriculum or set of objectives. It is all blue sky. Any resultant consensus on common topic areas is by choice. Interdisciplinary activity would explode because the subjects are not focused on a 'path', which necessarily distorts and cuts down on the number of potential possibilities.
Then I am convinced you shall see a thousand flowers bloom.
Edit: The more 'elite' a university is, the more it resembles the arrangement I describe. It is undeniable that the best parts of the Educational system are the most aristocratic, while the worst are the most demotic.
While I'd like to slag science for this, we also have a bit of a counterexample in other fields.
In art, we have studies that show that those who produce more also produce better. And it compounds. There is no reason to believe that at least some of this isn't operative in science.
The real problem is the lack of positive incentives for negative results. And I don't know how you fix that.
we have studies that show that those who produce more also produce better.
This is true of people producing more of the same thing. An artist who makes a lot of pottery doesn't increase their ability in painting, or piano-playing.
In other words, do the same thing lots of times, and the results become better and better. This sounds a lot like replication in science.
The active, open, free discussion on findings and theories has been regarded as one of the big ideas of modern science. That's what journals were for.
However, for some reason I'm doubtful that the "online comments section" is the most conducive to the behaviour we want to encourage, after witnessing how that particular concept manifested in the newspapers when they went online (compared to the traditional opinion / letters-to-the-editor page).
How do you police comments and keep them from turning into an open sewer? Shills for those threatened by the work being disruptive and sock puppets for involved parties already have a long and storied history.
I would guess that having everything online might help in some areas, if executed well (open-source style), but I wouldn't be too optimistic, as we could end up with something more like the comments section of a major newspaper.
Right now there's a lot of different silos of post-publication comments sections, each with their own benefits and flaws. Leaving comments on PubMed is probably the best way to get your concerns noticed, but PubPeer is popular for its anonymity (though it comes with all anonymity's usual problems, as well.) Journals also publish comments on their own articles, which are vetted for quality but less frequent and rarely read.
>"Not only are dodgy methods that seem to produce results perpetuated because those who publish prodigiously prosper—something that might easily have been predicted. But worryingly, the process of replication, by which published results are tested anew, is incapable of correcting the situation no matter how rigorously it is pursued."
This isn't referring to science, and these studies aren't being done by scientists. It is research, and these are researchers. I wish people would stop dragging science through the mud over these problems.
Is it just me, or is there a lot less "bad science" being done in chemistry and (testable) physics?
Maybe one way to decide if a discipline is "scientific" in nature is how well the scientific process actually works within that discipline?
And if we discover evidence that the scientific process doesn't work well in a particular discipline (for whatever reason—doesn't matter why, actually), maybe we should stop calling that discipline a "science" and come up with some other descriptor?
I disagree. People working in the 'soft sciences' should still be aiming to work scientifically, so we should continue to describe them as such.
In practice it is very hard to measure up to the standards of good science in a lot of disciplines, where there's enough complexity to make constructing good experiments difficult.
I'm thinking in these rough categories:
1. Real science that demonstrably works now.
2. An attempt at science that may become (1) in the future, but doesn't quite work yet.
3. Pseudo-science. The trappings of science without the intellectual honesty.
4. Claims about truth that don't pretend to be scientific at all.
So, for category (2), we can have some respect for the scientific integrity of the practitioners, while at the same time being skeptical of the results and accepted theories for the time being.
There's also the side-matter of avoiding the engineer's sneer towards people who try to understand things but refuse to learn the maths. I don't think it's a helpful attitude, although I don't know what the right fix is.
I do partially agree with you though. It would be good to make sure that people making political decisions based on economics or sociology do understand that those subjects aren't quite ready yet.
One of the things I noticed in college is that the classes I took in this category never actually taught "the scientific method."
It was only ever in the so-called "soft" sciences that we got the scientific method lecture about creating a hypothesis, doing some tests to verify, checking results, etc.—usually in the first week.
So maybe a way to identify classes still in your category "2" is to see if they explicitly teach the scientific method or not. If they do? Category 2. If they're obviously scientific and don't teach it? Category 1.
(I've also noticed that an easy way to identify a crackpot on the Internet is to see if they trot out the scientific method discussion in the first 1000 words. If they do? Crackpot.)
There are lots of reasons for higher failure rates in one discipline than another that have nothing to do with how "scientific" they are. Cancer biology is indisputably a science, and has a higher rate of non-replicable results than inorganic chemistry; that's because it's much more complicated, not because it's some sort of unscientific philosophizing.
Psychology, which gets the most grief from hard physical STEM types whenever this issue comes up, is hard to reproduce because it's hard in general. It's the study of the most complex object we know about, and we can only access it through indirect and creative means. Small differences in study design can have large impacts on outcomes because the brain is sensitive, and complicated, and often counter-intuitive. It's not that there's more "bad science" being done in some disciplines, it's that those disciplines require much more scrutiny towards their methods and inferences, because they're harder to handle. But they're still science, and it would be a horrible disservice to abandon the scientific method for them just because it's hard.
What evidence/data have you used to form your opinion in regards to hard sciences? I don't necessarily disagree, but considering the subject matter, I feel compelled to press you for a data-supported justification.
I've seen copious bad papers in EE and MechE journals. I'm not qualified to evaluate chem or physics research, but you'd think if any field would be hard to fudge, it would be engineering. Turns out this is not the case. The bad engineering papers I've seen tend to draw overly-broad conclusions while failing to document the methods adequately.
Same question. I would assume this is partially based on replicability, i.e., doing the study over and getting the same results. Recently, there was an attempt to replicate major studies in psychology, and only about 36% were replicated. I think something similar happened in medicine.
> Changing the descriptor would be a classic case of bike-shedding.
Strongly disagree. Things that are "scientific" get way more acceptance by the public than things that are "hey, that's just, like, your opinion, man."
For instance, in what sense at all is "political science" an actual science in the sense of chemistry or physics? It seems to me that it's an entirely different thing, and so I'm not surprised at all when a political science "result" fails to replicate or is outright bullshit.
Science has come to mean, in many cases, "area you can get a PhD in" vs. what we tend to act like it means, which is "area where the scientific method applies." I guess I'd like to see the word science only used for the latter meaning if we're going to continue to give "scientific" disciplines extra respect.
If instead we want to continue to use "science" to denote "anything you can get a PhD in", then we should seriously degrade or eliminate altogether our instinct to trust "scientific" results.
Poor methods are a result of inadequate data/descriptions being provided by researchers to the publication (also an issue of publications not requiring them).
In many of the hard sciences there is no requirement to list the products used, such as chemical reagents or plasticware, in a given experiment.
The publications don't seem to want adequate data and descriptions. They want papers that have the traditional format and fit in 6-8 pages.
Every paragraph you spend on practical details is one less paragraph you can spend on making a good pitch for the relevance of your work, and citing lots of people in "Related Work" so that the reviewer's favorite people are in there.
They don't want your code, because that would interfere with the supposedly blind review process.
Occasionally, I've seen a submission process that includes a way to submit supplemental data, but it's always extremely half-assed. There's no guidance about what this supplemental data should be beyond "maybe an Excel file or a zip file or something", nobody will ever see it, and if you include too much data it probably gives you an error and maybe even crashes and makes you start over.
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it" -Max Planck
The real issue with scientific publishing is that there is simply no penalty for publishing shoddy research. I know several academics who made quite a big name for themselves on research that was later partially or fully retracted. No one cared about that; there was no real reputational damage done. To tackle poor science, such "poor" scientific inquiry should be "punished" in some way. Similarly, it is terrible for the advancement of science that only novel or significant results get published -- there should be a way for researchers to benefit from publishing well-designed research which simply did not yield interesting results.
How to do that? I think Axelrod's tournament provides part of an answer. As in his examples, the individual incentives align to yield a pretty poor outcome for the members of the population (he runs an iterated prisoner's dilemma game). However, with the iteration parameters correctly set up, a cooperative strategy slowly becomes the evolutionarily stable strategy.
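To make the mechanism concrete, here is a minimal sketch of an Axelrod-style round-robin tournament. The strategies and payoff matrix are standard textbook stand-ins, not Axelrod's actual entrants, and the iteration details are greatly simplified:

```python
# Minimal iterated prisoner's dilemma tournament in the spirit of Axelrod's.
# Strategies and payoff values are illustrative stand-ins, not his entrants.

PAYOFF = {  # (my move, opponent's move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opp_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opp_history[-1] if opp_history else "C"

def grudger(opp_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in opp_history else "C"

def always_cooperate(opp_history):
    return "C"

def always_defect(opp_history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Total scores from an iterated match between two strategies."""
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strat_a(moves_b)  # each side sees only the opponent's past moves
        b = strat_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

def tournament(strategies, rounds=200):
    """Round-robin (including self-play); return each strategy's total score."""
    totals = {s.__name__: 0 for s in strategies}
    for i, a in enumerate(strategies):
        for b in strategies[i:]:
            sa, sb = play(a, b, rounds)
            totals[a.__name__] += sa
            if a is not b:
                totals[b.__name__] += sb
    return totals

if __name__ == "__main__":
    pool = [tit_for_tat, grudger, always_cooperate, always_defect]
    for name, score in sorted(tournament(pool).items(), key=lambda kv: -kv[1]):
        print(f"{name:>16}: {score}")
```

In this particular pool the "nice but retaliatory" strategies come out on top and unconditional defection loses, which mirrors the heart of Axelrod's result: once enough conditional cooperators are present, cooperation pays.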
I can see how this could also be the case for academics. There is no law from on high that dictates that "number of papers published" is the ultimate metric of success. There is a culture, and processes, and institutions which have led that to be a leading indicator of academic success. If there were real motivation and impetus to change this, there is no reason other metrics (and processes) could not emerge that would much more highly value scientific integrity and thoroughness.