I'm trying to puzzle out why chipmakers' products are quoted as less reliable than the software that runs on them. Doesn't the weakest link in the chain break first?
They're different units. When chipmakers claim six-sigma quality they mean roughly 3.4 defects per million (about 0.999997 reliability): order a million chips from them and on average about three will be bad.
It's not an apples-to-apples comparison, but the analogous figure for software is the number of lines of code executed before hitting a noticeable bug, and that's in the trillions. One is a per-unit-shipped rate, the other a per-operation rate.
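To make the unit mismatch concrete, here's a quick back-of-the-envelope sketch (the order size, defect count, and instructions-per-bug figures are illustrative assumptions, not measurements):

```python
# Chip reliability is quoted per unit shipped; software reliability is
# effectively per operation executed. Both numbers below are assumed
# for illustration only.

chips_shipped = 1_000_000      # assumed order size
defective_chips = 3            # ~six-sigma: 3.4 defects per million
chip_reliability = 1 - defective_chips / chips_shipped

instructions_per_bug = 1e12    # assumed: trillions of lines executed
                               # between noticeable bugs
sw_reliability = 1 - 1 / instructions_per_bug

print(f"per-chip reliability:        {chip_reliability:.7f}")    # 0.9999970
print(f"per-instruction reliability: {sw_reliability:.13f}")     # 0.9999999999990
```

One figure counts bad units per million shipped, the other counts failures per trillion operations executed, so lining the nines up side by side doesn't tell you which link is actually weaker.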