The "inconvenient truth" with Undefined Behaviour is that it provides essential wiggle room for the compiler to implement some optimizations that wouldn't be possible with a stricter specification. Of course the downside is that sometimes the compiler goes too far and removes critical code (which AFAIK is still a hot discussion topic how much compilers should be allowed to exploit UB for optimizations).
Because that was the only way for C to catch up with stuff that was being done in languages like PL.8.
So compiler optimizers get to take advantage of UB for that last mile of optimization, which in a language like C assumes the developer is an ISO C expert; when they aren't, or are too tired trying to meet project deadlines, surprises happen.
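One classic example of such a surprise (a sketch with made-up names): dereferencing a pointer before checking it lets the compiler conclude the pointer is non-NULL, so the later defensive check is "provably" dead and may be deleted.

    #include <stdio.h>

    struct dev { int id; };

    void log_id(struct dev *p)
    {
        int id = p->id;               /* UB if p is NULL, so the compiler assumes p != NULL */
        if (p == NULL) {              /* ...which makes this check removable */
            fprintf(stderr, "null device\n");
            return;
        }
        printf("device %d\n", id);
    }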
"Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization...The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue.... Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels? Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve. By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are ... basically not taught much anymore in the colleges and universities."
-- Fran Allen, excerpted from Peter Seibel, Coders at Work: Reflections on the Craft of Programming
It's not "essential". On realistic code it's single-digit percentages at most. But it's essential for winning compiler benchmarks, so you'll never get C compilers to stop doing it.
You would have a huge number of false positives, because if you're doing a bunch of null checking defensively you're going to be doing it redundantly a lot.
Right, but if the compiler has proved that you don't need those null checks then you can simply remove them. If you think you do need them then it's a sign you've screwed up somewhere and should fix it!
The problem is that it's not that simple. A NULL check might only become redundant after various rounds of inlining and optimizing, and the same NULL check could be redundant in one usage and necessary in another. It's also very likely that changes to other parts of the code will make a previously 'redundant' NULL check necessary again, and obviously you're unlikely to get a warning in that direction.
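A hedged sketch of that point (hypothetical helpers): after inlining, the check inside get_len is dead in caller_a, because the pointer was already dereferenced there, but it still does real work in caller_b, which may legitimately receive NULL. Deleting it at the source level to silence a "redundant check" warning would break the second caller.

    #include <stddef.h>
    #include <string.h>

    static size_t get_len(const char *s)
    {
        if (s == NULL)                /* redundant in one inlined context, vital in another */
            return 0;
        return strlen(s);
    }

    size_t caller_a(const char *s)
    {
        char c = s[0];                /* already dereferenced: the inlined check is dead here */
        (void)c;
        return get_len(s);
    }

    size_t caller_b(const char *maybe_null)
    {
        return get_len(maybe_null);   /* here the check is genuinely needed */
    }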