I quite agree with most of what you're saying, and you're saying it well. My main thrust is that we shouldn't throw the baby out with the bathwater by saying, 'Some people aren't rigorous, so let's abandon rigor' (I don't think you're saying that; it's a strawman). Even in the paper critiqued in the original post there was some rigor (in the midst of clear blindness and hubris), sufficient for the team in the article to take it apart on its own merits.
There are real examples of cases where no rigor is applied at all, and where even some facsimile of rigor can improve things in a psychological context. For example, much ink has been spilled on Hacker News about traumatic interview practices adopted and popularized by the FANG companies, such as so-called 'brain teaser' questions. Bound up in these practices was the widespread notion that interviewees needed to be 'put on the spot' or made to 'think on their feet'. Interview practice has been studied by the field of organizational psychology for decades, and such practices ran counter to its findings from day one: there is no evidence that putting a candidate off-balance or in a state of discomfort during the interview, and having them successfully navigate that, predicts job performance. Eventually several large tech companies conducted internal studies and concluded that the practices had no bearing on job performance.
I too have seen titillating BS psychology results, but the antidote is almost always to review the literature. For example, you might often hear "conscientiousness on a Big 5 personality test correlates with job performance". Yes, it does, but review the literature: the correlation is quite weak and does a poor job of explaining individual variance.
Let's say I had heard about this magical ratio from the OP's article in a job context. My BS meter would have gone off, most certainly. Some actionable number that can explain a tremendously complex and poorly understood set of dynamics? Hmph! I would have reviewed the literature and found the paper in question. I would have seen the large number of citations. Ok, that gives it weight as an idea, but let's see if the idea has legs. Where are the meta-analyses showing multiple independent corroborations of the number, in different contexts, performed by different researchers? Nonexistent. As someone who takes science and applies it, that puts it firmly, for me, in the territory of 'very interesting finding, but not actionable'. Honestly, I think that's probably what was quietly happening with that ratio even before the retractions. Such findings make for great media coverage, corporate PowerPoints, motivational speeches, annoying HR meetings, etc., but hopefully (I do say hopefully :) ) in almost every real-world setting, if someone told a senior manager, "Your team is going to be measured on this positivity ratio, because science!", that manager would make a stink. Of course, maybe not. I do believe an increasingly important skill that needs to be taught is the ability to parse and understand the landscape of papers on a subject.