Desktop computers run tens of millions of lines of active software, and for most users they exhibit a noticeable bug only every week or so. By any measure, that level of reliability far exceeds that of any other industry. Chipmakers achieve 0.999999 reliability, which is impressive, but if software had only that level of reliability it would crash every millisecond.
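To make the "every millisecond" claim concrete, here is a rough back-of-envelope sketch. The numbers are my own illustrative assumptions (a ~3 GHz processor retiring one instruction per cycle, and 0.999999 read as a per-instruction success rate), not figures from the original comment:

    # Back-of-envelope: expected failures per millisecond if each executed
    # instruction succeeded with probability only 0.999999.
    clock_hz = 3e9                  # assumed ~3 GHz CPU, one instruction per cycle
    failure_rate = 1e-6             # 1 - 0.999999 per instruction (assumed interpretation)
    instructions_per_ms = clock_hz / 1000
    expected_failures_per_ms = instructions_per_ms * failure_rate
    print(f"{expected_failures_per_ms:.1f} expected failures per millisecond")
    # -> about 3 per millisecond, consistent with "crash every millisecond"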
Setting aside all other factors (complexity, progress made, resources available, etc.), I would argue that the culture of bug acceptance has led to more bugs in software than would have existed had it been less tolerant.
I'm trying to puzzle out the logic behind chipmakers' products being less reliable than the software that runs on them. Does the weakest link in the chain not break first?
They're different units. When chipmakers claim six-sigma (about 0.999999 reliability), they mean that when you order a million chips from them, on average one will be defective.
It's not an apples-to-apples comparison, but if you look at the number of lines of code executed before a noticeable bug appears, it's in the trillions.
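As a rough sanity check on that figure, here is a sketch using assumed inputs: an average of about a billion executed instructions per second (used as a proxy for executed lines of code) and the one-noticeable-bug-per-week rate from the top comment. Both numbers are ballpark assumptions, not measurements:

    # Rough sketch of executed instructions per noticeable bug.
    instructions_per_second = 1e9        # assumed average execution rate (proxy for lines of code)
    seconds_per_week = 7 * 24 * 3600     # one noticeable bug per week, per the top comment
    instructions_per_bug = instructions_per_second * seconds_per_week
    print(f"{instructions_per_bug:.1e} instructions executed per noticeable bug")
    # -> about 6e14, i.e. hundreds of trillions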