Context matters...in both the research and the way it is reported.
Correlation is not too vague, but implying causality from correlation without further tests is.
conversely...
assuming that a paper is faulty simply because it reports a correlation, whether it reports the correlation alone or pairs it with further evidence of causality (e.g., prior work on causal mechanisms, qualitative evidence, etc.), is equally problematic.
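To make the first point concrete, here is a toy simulation (my own sketch in Python/NumPy, not anything from the paper under discussion): X and Y have no direct causal link, yet a shared confounder Z produces a strong correlation between them.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

z = rng.normal(size=n)                  # hidden confounder
x = z + rng.normal(scale=0.5, size=n)   # X is driven by Z, not by Y
y = z + rng.normal(scale=0.5, size=n)   # Y is driven by Z, not by X

# Strong correlation (~0.8) even though X never causes Y.
print(f"corr(X, Y)     = {np.corrcoef(x, y)[0, 1]:.2f}")

# Removing the confounder's contribution (the "further test") and
# correlating the residuals makes the apparent relationship vanish.
print(f"corr(X-Z, Y-Z) = {np.corrcoef(x - z, y - z)[0, 1]:.2f}")  # ~0.0
```

Of course, in real research the confounder isn't handed to you, which is exactly why the additional evidence (mechanisms, controls, design) has to be shown rather than assumed.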
The major problem with the article is that it displays the same flaws as many of the underlying problems in science itself: it makes sweeping generalizations without nuance or appreciation of context. The problems are paradigmatic (i.e., Kuhnian), and the fact that they appear in the research AND in the critiques of the research is far more problematic than a p of .05.
Fair point of clarification...I was specifically referring to the ontological and epistemological components of the argument. The perception among many scientists (especially young ones) that their research is inherently independent of them as researchers is a big problem. These are issues that do not appear at the surface level or even the values level...they are base-level assumptions about knowledge that manifest very subtly.
I'm not sure what you mean by "results." If you mean the results of a thought process that does not go beyond predicting positive/negative correlations, then I see no point in publishing it.
If you have reliable measurements, then that falls under my number 1. There is no need for any vague theorizing, but go ahead and include it in the speculation sections (Intro/Discussion) if you like.
Edit:
If all that gets published is a correlation coefficient, and not the distributions, scatterplots, etc., then no, that paper is worthless.
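To illustrate why the bare coefficient isn't enough (again my own toy sketch in Python/NumPy, not from the article): two datasets can print a similarly "strong" r while the underlying data look nothing alike.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Dataset A: a genuine, well-behaved linear relationship with noise.
x_a = rng.normal(size=n)
y_a = 0.8 * x_a + rng.normal(scale=0.6, size=n)

# Dataset B: pure noise, plus one extreme point that manufactures
# the correlation all by itself.
x_b = rng.normal(size=n)
y_b = rng.normal(size=n)
x_b[0], y_b[0] = 12.0, 12.0

for name, x, y in [("linear cloud   ", x_a, y_a), ("noise + outlier", x_b, y_b)]:
    print(f"{name}: r = {np.corrcoef(x, y)[0, 1]:.2f}")
# Both r values look respectable; only the scatterplots (or the raw data)
# reveal that one of them rests on a single point.
```

Anscombe's quartet makes the same point with textbook data: four datasets with essentially identical summary statistics and wildly different scatterplots.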
Can nobody publish these results? Are they not a sensible building block on the way to precise predictions?