No, you're right, but I suppose in the grand scheme of things, I was alluding to the fact that in this case (and in many other cases) we do have incomplete and imperfect prior knowledge, but prior knowledge nonetheless, which would allow us to make causal inferences, with some uncertainty of course.
I would argue that making these causal inferences without formal proof is actually somewhat essential and desirable, so long as assumptions and uncertainty are recognized, and that they lead to further probing (informed by such inference).
Many people seem to think science is about "finding the truth" (i.e., obtaining "first-order" information about the world), and that's certainly one way to see it... But I think it's more interesting to think of science as being all about finding, quantifying, and describing uncertainty.
This might seem pedantic, but you don't need to abandon all uncertainty in order to make causal inference. I think we're saying mostly the same things, in different ways and with different implicit assumptions about what it means to do science. It's obvious from your answer that you also understand all of this very well.
PS: I have an MSc in biochem: not claiming to be a great scientist, but I do have formal training, so I'm at least familiar with the matter :)
Oh, certainly, you can make causal inference under uncertainty. But then, rather than privileging a single hypothesis just because it's the first one you intuitively generated, you should probably instead:
1. generate a good few hypotheses;
2. rank them by their seeming plausibility;
3. discard the ones below some cut-off of "being worth the mental effort";
4. and then try to construct isolated contingent mental models of each of the worlds following from each of the hypotheses being true, making sure to store them in your mind as contingent models, rather than as "facts-in-the-world."
You can reason with inferences over contingent facts, but—especially in situations where several of the hypotheses you generate are equally plausible—it's very useful to sort of mentally "tag" those contingent-facts as contingent, so that you'll realize when you're using them in your reasoning; back up; and "drive down all the roads" (i.e. work out what the answer would be under all the contingent models you're holding onto) instead of just one.
However, if after some actual effort there's only one plausible hypothesis you can think of, then sure, just update on it directly. If there's only one contingent world, keeping it tagged as "contingent" in your mental model isn't doing any useful work for you. You can just learn it, and then unlearn it later if it's not true. (And, of course, that comes up all the time in regular life. Some things really are just "predictable.")
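The procedure above can be sketched as a toy program, purely to make the steps concrete. Everything here (the function name, the plausibility scores, the `predict` callback) is hypothetical scaffolding, not anything from the discussion itself:

```python
# Toy sketch of the multi-hypothesis procedure described above.
# All names and numbers are illustrative assumptions.

def evaluate_under_contingencies(hypotheses, cutoff, predict):
    """hypotheses: dict mapping hypothesis name -> plausibility in [0, 1].
    predict: function mapping a hypothesis name -> the answer that world implies.
    Returns (answers, is_contingent): the answer under each surviving
    contingent model ("driving down all the roads"), and whether more
    than one road survived the cutoff."""
    # Steps 1-3: rank hypotheses by plausibility, discard those below
    # the "worth the mental effort" cutoff.
    survivors = sorted(
        (h for h, p in hypotheses.items() if p >= cutoff),
        key=lambda h: -hypotheses[h],
    )
    # Step 4: work out the answer under each surviving hypothesis,
    # keeping each answer tagged by the world it is contingent on.
    answers = {h: predict(h) for h in survivors}
    # If only one contingent world remains, the "contingent" tag is
    # doing no useful work, and you can just update on it directly.
    return answers, len(answers) > 1

hypotheses = {"H1": 0.6, "H2": 0.5, "H3": 0.05}
answers, contingent = evaluate_under_contingencies(
    hypotheses, cutoff=0.1, predict=lambda h: f"outcome-under-{h}")
# H3 falls below the cutoff; H1 and H2 both survive, so every downstream
# answer stays tagged as contingent rather than stored as a fact.
```

The point of the tagging, as above: when two roads survive, the output is a *set* of answers keyed by hypothesis, not a single fact.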
-----
In domains that are about resolving uncertainty — like scientific research — I would say it's pretty unlikely that you'll ever run into an interesting hypothesis (i.e. the kind you get a grant to study) that is so plausible that its alternatives — or even its null hypothesis! — can be entirely mentally discarded in advance of doing the experiment.
But, on the other hand, this doesn't matter so much; science is nice because it actually is quite tolerant of its participants' mental models being all over the place! "Scientific rigor" is externalized to the scientific process (enforced by peer review) — sort of like rigor in programming can be externalized to the language, and enforced by the compiler. There doesn't need to be much of anything happening within the minds of the researchers. (Thus incrementalism, scientific positivism, etc.)
But this isn't true once you leave the realm of process rigor, and enter the realm of regular people deciding what they should do when they read about scientific studies: how they should—or shouldn't!—seek to apply the "potential facts" they hear about from these studies in their everyday lives.
This is especially relevant in areas where non-scientists are closely following — and attempting to operationalize — the cutting-edge of scientific research, where there is not enough aggregate evidence to prove much. In such domains, it's the consumer of the science that needs good epistemic hygiene, not the scientists themselves. (Good examples of such areas: nootropics; sports nutrition; macroeconomics; and, amusingly, software engineering.)