Oh, certainly, you can make causal inferences under uncertainty. But then, rather than privileging a single hypothesis just because it's the first one you intuitively generated, you should probably instead:
1. generate a good few hypotheses;
2. rank them by their seeming plausibility;
3. discard the ones below some cut-off of "being worth the mental effort";
4. and then try to construct an isolated, contingent mental model of the world that follows from each remaining hypothesis being true, making sure to store these in your mind as contingent models, rather than as "facts-in-the-world."
You can reason with inferences over contingent facts, but—especially in situations where several of the hypotheses you generate are equally plausible—it's very useful to sort of mentally "tag" those contingent facts as contingent, so that you'll notice when you're using them in your reasoning, back up, and "drive down all the roads" (i.e. work out what the answer would be under all the contingent models you're holding onto) instead of just one.
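To make that concrete, here's a minimal sketch (in Python; the hypotheses, plausibility numbers, and cutoff are all invented for illustration) of what the generate / rank / prune / "drive down all the roads" procedure might look like if you wrote it out:

```python
# A toy sketch of "driving down all the roads": keep every surviving
# hypothesis as an explicitly-contingent model, and answer the question
# under each of them, rather than only under the front-runner.
# (All hypotheses, plausibilities, and the cutoff are made up for illustration.)

from dataclasses import dataclass
from typing import Callable

@dataclass
class ContingentModel:
    hypothesis: str      # the assumption this model is contingent on
    plausibility: float  # a rough, informal ranking, not a calibrated probability

# 1. Generate a good few hypotheses; 2. rank them by seeming plausibility.
hypotheses = [
    ContingentModel("the service is slow because the cache is cold", 0.50),
    ContingentModel("the service is slow because of a network problem", 0.30),
    ContingentModel("the service is slow because of a bug in the new release", 0.15),
    ContingentModel("the service is slow because of cosmic-ray bit flips", 0.01),
]

# 3. Discard the ones below some cutoff of "being worth the mental effort".
CUTOFF = 0.10
worth_considering = [h for h in hypotheses if h.plausibility >= CUTOFF]

# 4. Work out the answer under *each* surviving model, keeping every answer
#    tagged with the hypothesis it depends on.
def what_should_we_do(hypothesis: str) -> str:
    # Placeholder reasoning; in real life this is the hard part.
    return f"verify that '{hypothesis}' actually holds before acting on it"

def answer_under(model: ContingentModel, reason: Callable[[str], str]) -> str:
    return f"[contingent on: {model.hypothesis}] {reason(model.hypothesis)}"

for model in sorted(worth_considering, key=lambda m: m.plausibility, reverse=True):
    print(answer_under(model, what_should_we_do))
```

The point isn't the code; it's the shape: every answer stays attached to the hypothesis it's contingent on, so nothing quietly gets promoted to a "fact-in-the-world."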
However, if after some actual effort there's only one plausible hypothesis you can think of, then sure, just update on it directly. If there's only one contingent world, keeping it tagged as "contingent" in your mental model isn't doing any useful work for you. You can just learn it, and then unlearn it later if it's not true. (And, of course, that comes up all the time in regular life. Some things really are just "predictable.")
-----
In domains that are about resolving uncertainty — like scientific research — I would say it's pretty unlikely that you'll ever run into an interesting hypothesis (i.e. the kind you get a grant to study) that is so plausible that its alternatives — or even its null hypothesis! — can be entirely mentally discarded in advance of doing the experiment.
But, on the other hand, this doesn't matter so much; science is nice because it actually is quite tolerant of its participants' mental models being all over the place! "Scientific rigor" is externalized to the scientific process (enforced by peer review) — sort of like rigor in programming can be externalized to the language, and enforced by the compiler. There doesn't need to be much of anything happening within the minds of the researchers. (Thus incrementalism, scientific positivism, etc.)
But this isn't true once you leave the realm of process rigor, and enter the realm of regular people deciding what they should do when they read about scientific studies: how they should—or shouldn't!—seek to apply the "potential facts" they hear about from these studies in their everyday lives.
This is especially relevant in areas where non-scientists are closely following — and attempting to operationalize — the cutting edge of scientific research, where there is not yet enough aggregate evidence to prove much. In such domains, it's the consumer of the science who needs good epistemic hygiene, not the scientists themselves. (Good examples of such areas: nootropics; sports nutrition; macroeconomics; and, amusingly, software engineering.)