More fundamental than Bayes's theorem is the product rule, the probabilistic counterpart of modus ponens: P(A/\B) = P(B|A) P(A). It corresponds to the logical rule of inference A, A->B |- A/\B. Note that modus ponens is usually stated in the form A, A->B |- B, but that version throws away useful information (namely, that proposition A is true), so it is a weaker form.
Bayes's theorem is a direct consequence of this axiom and the commutativity of conjunction.
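Spelled out: commutativity gives P(A/\B) = P(B/\A); applying the product rule to both sides gives P(B|A) P(A) = P(A|B) P(B), and dividing by P(B) (assumed nonzero) gives Bayes's theorem P(A|B) = P(B|A) P(A) / P(B).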
In other words, modus ponens is:
P(A=true)=1, P(B=true|A=true)=1 |- P(B=true) = 1
Let P'(A) be the prior distribution on A, when nothing is known about B (in a world with only A and B), and let P be the distribution after the evidence on the left of |- has been learned. Plausible reasoning then works through Bayes's theorem (equivalently, the product rule on the joint and conditional probabilities) and yields:
P(B=true|A=true)=1, P(B=true)=1 |- P(A=true) > P'(A=true)
whereas logic alone cannot conclude anything here: A->B, B |- ?
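To see why, apply Bayes's theorem with the prior P': after learning B=true, P(A=true) = P(B=true|A=true) P'(A=true) / P'(B=true) = P'(A=true) / P'(B=true), which is >= P'(A=true), and strictly greater whenever 0 < P'(A=true) and P'(B=true) < 1.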
Also:
P(B=true|A=true)=1, P(A=false)=1 |- P(B=true) < P'(B=true)
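The same machinery covers this case: since P(B=true|A=true)=1, the product rule gives P'(A=true /\ B=true) = P'(A=true), so after learning A=false we get P(B=true) = P'(A=false /\ B=true) / P'(A=false) = (P'(B=true) - P'(A=true)) / (1 - P'(A=true)), which is <= P'(B=true), and strictly less whenever P'(A=true) > 0 and P'(B=true) < 1.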
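As a minimal numerical sanity check (a sketch only, with hypothetical numbers), both weak syllogisms can be verified in Python for an arbitrary joint distribution over A and B that satisfies P(B=true|A=true) = 1, i.e. P(A=true /\ B=false) = 0:

# hypothetical joint distribution over two binary propositions A, B
# chosen so that P(B=true|A=true) = 1, i.e. P(A=true /\ B=false) = 0
p_ab   = 0.3   # P'(A=true,  B=true)
p_nab  = 0.4   # P'(A=false, B=true)
p_nanb = 0.3   # P'(A=false, B=false)

p_a = p_ab              # P'(A=true)  = 0.3  (A=true forces B=true)
p_b = p_ab + p_nab      # P'(B=true)  = 0.7

p_a_given_b  = p_ab / p_b                # learning B=true:  0.3/0.7 ~ 0.43 > 0.3
p_b_given_na = p_nab / (p_nab + p_nanb)  # learning A=false: 0.4/0.7 ~ 0.57 < 0.7

print(p_a_given_b > p_a, p_b_given_na < p_b)   # prints: True True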