> I did not say it was conclusive evidence; I said it was evidence.
But it isn't. The null hypothesis requires us to assume that nothing but chance is at work, and to let the evidence force a different conclusion. The fact that A and B appear correlated is not, by itself, evidence of anything other than chance.
> I'm well aware that "A is correlated to B" does not prove "A causes B" or even "A causes B or B causes A", but it is a data point in favor.
No, this is false. Without a tested hypothesis, and without a careful examination of a mechanism, the correlation tells you nothing beyond what chance alone could produce.
Here's an example selected at random from a vast literature that tries to make this point:
http://boingboing.net/2010/12/20/creating-a-phony-hea.html
Title: "Creating a phony health scare with the power of statistical correlation"
Quote: "In the United Kingdom, the more mobile phone towers a county has, the more babies are born there every year. In fact, for every extra cell phone tower beyond the average number, a county will see 17.6 more babies. Is this evidence that cell phone signals have some nefarious baby-making effect on the human body? Nope. Instead, it's a simple example of why correlation and causation should never be mistaken for the same thing."
I could link to a thousand similar stories, many of them mistaken for actual scientific results.
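To see how easily such a correlation arises, here is a minimal sketch in Python (with invented numbers, not the article's data) of a lurking variable at work: county population drives both tower counts and birth counts, so the two come out strongly correlated even though neither causes the other.

# Hypothetical illustration (numbers invented, not the article's data):
# simulate counties where population size drives BOTH cell-tower counts
# and birth counts. The two outcomes come out strongly correlated even
# though neither causes the other.
import random

random.seed(42)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

towers, births = [], []
for _ in range(100):
    population = random.uniform(50_000, 1_500_000)      # the lurking variable
    towers.append(population / 10_000 + random.gauss(0, 5))
    births.append(population / 80 + random.gauss(0, 500))

print(f"towers vs. births: r = {pearson_r(towers, births):.3f}")  # close to 1

Both columns are just noisy proxies for population; neither has anything to do with the other.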
> But the realist shouldn't underrate it either.
A realist -- a scientist -- always begins by assuming the association is the result of chance (the null hypothesis), and then examines evidence that might argue for another explanation. This is why all self-respecting scientific papers include a p-value. The p-value describes how probable the observed result would be if chance alone were at work; it says nothing directly about the probability that the hypothesis under test is true.
http://en.wikipedia.org/wiki/P-value
Quote: "In statistical significance testing the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true."
Translated into layman's language, the p-value is the probability of seeing a result at least as strong as the observed "correlation" if chance alone were responsible.
A properly educated scientist always assumes the null hypothesis is true, i.e. that the observation arose from chance factors. She then tests this assumption with evidence.
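For concreteness, here is one simple way she might do that -- a permutation test, sketched in Python with invented data. Shuffling one variable destroys any real association, so the fraction of shuffles that match or beat the observed correlation is an empirical p-value: a direct estimate of how often chance alone produces such a result.

# Permutation test sketch with made-up paired data: shuffling y destroys
# any real x-y association, so the fraction of shuffles whose correlation
# is at least as extreme as the observed one is an empirical p-value.
import random

random.seed(0)

def pearson_r(xs, ys):
    """Pearson correlation coefficient (same helper as in the sketch above)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented observations, purely for illustration.
xs = [1.0, 2.1, 2.9, 4.2, 5.1, 5.8, 7.0, 8.2, 9.1, 10.3]
ys = [2.3, 1.8, 3.9, 3.1, 5.5, 6.8, 6.1, 8.9, 8.4, 10.9]

observed = pearson_r(xs, ys)

trials = 10_000
extreme = 0
for _ in range(trials):
    shuffled = ys[:]
    random.shuffle(shuffled)                 # chance-only world: no real link
    if abs(pearson_r(xs, shuffled)) >= abs(observed):
        extreme += 1                         # chance matched the observation

print(f"observed r = {observed:.3f}")
print(f"empirical p-value = {extreme / trials:.4f}")  # small p => chance alone fits poorly

Only after chance has been given every opportunity to explain the result, and has failed, does the correlation begin to count as evidence for anything else.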