If your argument is that the human mind functions primarily in terms of valid statistical inferences, it doesn't support your case if it processes its input in ways that are inherently flawed and biased.
If you mean "mathematically correct" then yes that is my premise.
If you mean "correct in reality" then no.
If you have a prior belief "god created humans" held with 99% confidence, it will take a lot of evidence to override it, especially since the evidence sits a long way down a causal chain (as any online argument about evolution will show). Worse, the confidence in that belief will be erroneously increased by lots of things you see in nature (e.g., humans cannot create other life forms, therefore it must have been done by something supernatural).
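A quick sketch of why a 99% prior is so sticky. The 10:1 likelihood ratio per observation is my number, purely illustrative, not anything measured:

```python
# Illustrative only: how many independent pieces of contrary evidence it takes
# to overturn a 99% prior, assuming each observation favors not-H by 10:1.

def posterior(prior, likelihood_ratio, n_observations):
    """Posterior probability of H after n independent observations,
    each favoring not-H by the given likelihood ratio."""
    odds = prior / (1 - prior)                   # prior odds for H (99:1 here)
    odds /= likelihood_ratio ** n_observations   # each observation divides the odds
    return odds / (1 + odds)

for n in range(5):
    print(n, round(posterior(0.99, 10, n), 4))
```

Even with strong 10:1 evidence, one observation barely dents the belief (still over 90%), and it takes three or four before the posterior collapses. With weaker or noisier evidence, the prior survives far longer, which is exactly what long comment-thread arguments look like.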
This thread could be summed up as "my priors tell me that if people were Actually Doing This (tm) it would look like X, and it doesn't look like X, thus they aren't!", and other people replying "uh, if you estimate the priors wrong, that doesn't mean you aren't doing Bayesian inference; it just means you estimated the priors wrong and got the wrong answer."
Ha. In my opinion, you and others are treating the brain doing statistics like religious people treat the existence of God. Absolutely any evidence can be interpreted as support for the hypothesis if you just tell the right story about it.
I'm not even saying the hypothesis is necessarily false. However I will never trust the results of people who become so attached to an idea that they focus entirely on confirming it rather than rigorously testing whether it actually reflects reality.
The whole point of the experiment is "how does this machinery operate?" not "does this machinery produce the same answers as I would expect?" There's a huge difference. Just because you reason through things one way and get one set of answers as a result doesn't mean that's how everyone should operate. Different priors can (and probably should) yield different results.
Different priors probably mean entirely different lines of reasoning, so you can't even say "well, they should have thought about it the way I do." The human brain isn't a single 10x10 neural net with no feedback; it's probably more like 1Mx1M, and it is recurrent: the output from one time step feeds into the input at the next.
You could get very, very different output for the same input from two neural nets that have been trained on different data. But that doesn't somehow make them not both neural nets; the meta-structure is the same even though all the weights from node to node are different.
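To make that concrete, here is a toy sketch (the weights and the single-unit architecture are mine, invented for illustration): two recurrent units with identical structure but different weights, fed the same input sequence:

```python
# Two "nets" with the same meta-structure (one recurrent ReLU unit) but
# different weights, e.g. from training on different data. Same input,
# very different output -- yet both are unambiguously recurrent nets.

def run_recurrent(weights, inputs):
    """One recurrent unit: the state feeds back into the next time step."""
    w_in, w_rec = weights
    state = 0.0
    for x in inputs:
        state = max(0.0, w_in * x + w_rec * state)  # ReLU with feedback
    return state

inputs = [1.0, 0.5, -0.3, 0.8]
net_a = (0.9, 0.1)  # hypothetical weights from one training history
net_b = (0.1, 0.9)  # hypothetical weights from another
print(run_recurrent(net_a, inputs))
print(run_recurrent(net_b, inputs))
```

Net A is dominated by the current input; net B by its own history. The outputs diverge, but nothing about the *structure* distinguishes them, which is the point: divergent conclusions don't rule out a shared mechanism.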
> The whole point of the experiment is "how does this machinery operate?"
Yes, exactly!
If you say "the machinery operates in way X" (where X="Bayesian statistics"), you have to actually prove that. You can't just assert it and say that people who disagree with you have bad priors.
I have no doubt that you can construct a hypothetical Bayesian decision-making process that could have yielded any possible decision that any human has ever made. Just like any religious person can take any story that has ever happened and construct a story about why it is compatible with the existence of God.
But unless you have a falsifiable experiment to prove that the brain actually operates this way, your explanation is an unverifiable just-so story. And in this case I don't find it at all convincing.
Don't get me wrong, I understand your theoretical argument. I just think you are taking a leap of faith, unjustified in my view, in presuming that your story reflects reality.
"Statistically valid" != Correct (if your input assumptions are incorrect).
See up thread for ways to build falsifiable experiments.