Gotta love footnote 1, about the ordering of author names :)
> 1. In keeping with our argument, and following in an extensive tradition of subverting traditional scarce markers of prestige, the authors have adopted a redistributive approach to the order of their names in the byline. As an international collaboration of uniformly nice people (cf. Moran, Hoover, and Bestiale 2016; Hoover, Posch, and Bestiale 1987; Hoover et al. 1988; see Tartamelia 2014 for an explanation), lacking access to a croquet field (cf. Hassell and May 1974), and not identifying any excellent pun to be made from ordering our names (cf. Alpher, Bethe, and Gamow 1948; Lord, de Vader, and Alliger 1986), we elected to assign index numbers based on alphabetical ordering by surname and to randomise these using an online tool. For the avoidance of doubt, while several of the authors have pets, none of them are included as authors (cf. Matzinger and Mirkwood 1978). None of us are approaching a tenure decision (cf. Roderick and Gillespie 1998). And none of us are fictional entities who generate their papers algorithmically using SciGen (see Labbé 2010 for the contrasting case of Ike Antkare, who nevertheless greatly outranked all of the authors on several formal measures of excellence before he [it?] was outed).
This reminds me of the time in my doctoral program when I turned in a draft journal article to my advisor. He returned it a week later with no marks or written commentary whatsoever, other than telling me "do it again." I asked what was wrong with it and what I needed to do differently on the next draft. He replied "this isn't excellent. Don't give it back to me until it's excellent." I asked what excellence entailed and he said "you just know."
People ask me if I learned anything in grad school.
This sounds similar to feedback I just got on my dissertation's conclusion section. They told me to rewrite it, noting that it should be different, but they weren't sure how. Asked for more detail, they said, "Make it punchier."
So did you? I'm not endorsing that minimal feedback technique, but it would certainly make me think pretty hard about how to improve the paper. Bang-bang control can work.
If you're the boss of any enterprise, and you want to make things good ('excellent') the first time, you only have your instincts and experience to go on.
But in the poster's context, "excellent" is a non-objective, undefined term. In an enterprise, the incentive is for you to define how your subordinates do their jobs.
So, it's a different subject altogether.
This actually gets at what the paper describes as the problem with defining what "excellence" means. (I read about half the paper.) Everyone defines it differently, so "excellence" basically just means that the work is judged well by whoever is doing the judging.
This is an old chestnut of a philosophical problem that many youngsters of my generation first experienced in the form of 'Zen and the Art of Motorcycle Maintenance'.
There's no solving it. And yet we all know that differences in quality are real. This is not an argument I want to engage in.
My question was, did the GP learn anything from being told 'not good enough, do it again'? They imply not, but I bet they did.
I've learned lots of things that way. Grant proposals, job applications, asking someone out on a date: lots of things you try out and don't get feedback beyond 'nope'.
I think part of the problem was that the person who had been writing all of his research papers for him over the last several years had left the lab unexpectedly, and the advisor had no feedback to give. I don't know; I became pretty demoralized when each attempt to write up my research got that kind of response. The advisor was super enthusiastic about me when I started and told me that my research was so good that my first paper was going in Science or Nature. He kind of built me up and let me crush myself with high expectations. It was an interesting education.
An unfortunate fact about academia is that you do tend to be punished for being overly honest. E.g. "Our lab is #1 in the world at X" usually translates roughly to, "Our lab is doing X, and it's so bloody obscure and masochistic that nobody else has bothered. Although what we're doing seems a bit pointless, we hope it will prove useful for something eventually." Saying the latter is honest, but won't get your grants renewed.
The really frustrating thing is when your lab is doing something that another lab is doing better because you're trying to get into what looks like a really interesting and useful area. Then you have to justify why you're the worst in the world at Y but still deserve funding vs a lab that's #1 in the world at X.
> The really frustrating thing is when your lab is doing something that another lab is doing better because you're trying to get into what looks like a really interesting and useful area. Then you have to justify why you're the worst in the world at Y but still deserve funding vs a lab that's #1 in the world at X.
...which more often than not leads to copious amounts of bullshit.
I think "excellence" is about owning mistakes, not being 100% perfect 100% of the time. Here is an interview with a former Blue Angel #8
> One of the tenets we are taught as a Blue Angel is the ability to admit when we've made a mistake, or have not "achieved perfection." This is done in the form of "Safeties." Each debrief is started by an around-the-room tally of "safeties", starting with the Boss and working down to the supply officer in order. For example when it was my turn I might say "I'll take a safety for late hits on the Low Break Cross and Fortus and an additional safety for an early hit on the Delta Roll. I'll also pay $5 for not shaving before I went downstairs for a coffee and I'll pay $5 for a zipper. I'll fix it tomorrow, Glad to Be Here." This is essentially telling the Team that I made mistakes on 3 maneuvers and also recognize I violated policy by being in public unshaven and for having a zipper on my flight suit unzipped. Since the latter two are policy violations, they cost me $5. The essence of my mistakes is not important, that will come out during the meat of the tape review during the debrief, but the fact that I recognized them and owned up to them IS important.
Did you read the paper? The first section deals with the fact that "excellence" essentially has no definition, because every group, or every evaluator, has a different opinion of what "excellence" means.
In this paper, "excellence" is specifically defined as a "Boundary Object."
From https://en.wikipedia.org/wiki/Boundary_object, "In sociology, a boundary object is information, such as specimens, field notes, and maps, used in different ways by different communities. Boundary objects are plastic, interpreted differently across communities but with enough immutable content to maintain integrity."
I'll admit I started skimming after pg. 15, but this paper is mainly an argument against using "excellence" in academia (and particularly in publishing articles).
While I don't disagree, the paper falls flat to me because:
1) Journals usually judge first by "interest to the broader academic community" and then by "technical accuracy", not by "excellence".
2) There wasn't any new data presented. There were citations to prior work, but even then it was an argument of assertions, not facts.
> “Excellence” is not excellent, it is a pernicious and dangerous rhetoric that undermines the very foundations of good research and scholarship.
That's one person's opinion: one that I happen to agree with. Yet the faux-objectivity in the style and language of this paper ironically apes the usage of the term, exactly in the manner they purport to condemn.
You did not come close to proving that "excellence" is pernicious, so don't write as if this were science. It is just (in my opinion) a sound and interesting opinion.
I like this comment, and I'd like to deplore the ongoing charade that science is uncovering truth through an objective process (the purpose of the aseptic, toneless writing style of scientific publication), rather than just creating more stupid, boring human idealizations of reality. For fuck's sake, why did we spend all of the twentieth century tearing down God, dogma, and the notion of a unitary truth only to conduct the same debased worship in the church of Science?
I think the authors are partly pulling our legs, and partly pointing out that academia's current usage of the word "excellence" commits a Fundamental Attribution Error: it attributes the desirable qualities of research to an innate individual trait of the researcher rather than to methodological rigor and a good knowledge of math.
In human words, much of academia now uses "excellence" to mean "extremely, horribly steep competition for jobs". That amounts to pretending that the Best of the Best of the Best individual academic laborers can and should produce the Best of the Best of the Best research results out of some shining individual genius, rather than because they had a good way to check whether theories were right or wrong before they published, their reviewers could also check, and other researchers could replicate the results.
In even more human words, "excellence" has become the academic equivalent of "success". It denotes that society's reward mechanisms have rewarded you a lot, without specifying anything at all that you actually needed to do to deserve all that rewarding.
Fair enough, although one could argue that there's also a lot of fetishization of that methodology (e.g. this is Science because it _looks_ like Science). I suppose we are, after all, monkeys, and look at both X and Y when evaluating 'X says Y'. As for individual track records, it is true that some researchers are good racehorses and worth betting on. That's the common meaning of excellence.
>As for individual track records, it is true that some researchers are good racehorses and worth betting on. That's the common meaning of excellence.
Sure, but the point is that almost all science consists of plowing fields, not racing. If you only hire racehorses, you're leaving the fields unplanted by the workhorses.
Aaaaand agricultural metaphors get inaccurate very quickly, but oh well.
Actually that's almost my favourite agricultural metaphor about 'normal' science; it's like subsistence farming and everyone has their little turf to tend.
I made it to about page 16. This was where the paper basically pointed out that no one gets published for confirming or refuting the results of a previous publication. Reproducibility is needed for science.
The authors' style seems a necessary component of serious academic research, given the culture of excellence. Only English departments give credit for blatantly non-objective literature. Even fucking philosophy reads this way.
This is along the lines of a train of thought I have been on for the past week or so, fulminating about the institutionalization of authority. Reification is death - so much of what we do is taking organic, living masses dripping with pus and nutrients and cleaning them down to bone. "This is the essence!" we say, holding up this dead thing. Then we get surprised that our dead system has all sorts of deficiencies - abuse of power, incompetent dunderheadedness, fraud, deceit.
Hi, I'm one of the authors of this piece. This is just a note to say thanks for the comments. For some context, this is what we call a "preprint" (terrible term) of an article which has been submitted for peer review at a research journal. That means we'll be able to make changes based on comments from the appointed reviewers as well as in response to the comments here. In particular, your comments are useful in suggesting some changes to how we signpost our argument to make it clearer.
The article is in "academic-ese" and that's deliberate, if understandably irritating to a more general audience. We're targeting a specific audience, and we'll have a shorter and snappier version as well alongside the final one. As people here know well, the pitch has to be shaped to fit the audience, and here we are trying to change (a specific kind of) researchers' minds.