There are numerous reasons for this, as Regehr points out. None of these products is "sound"; in other words, none is guaranteed to find everything. Most of the time this is a deliberate tradeoff: reducing the number of false positives and speeding up the analysis comes at the risk of false negatives, as is the case here.
Indeed, trading off sensitivity for specificity is always a win when trying to sell static analysis to people.
Considering that any non-trivial piece of software has a fairly large number of bugs, you can dial the sensitivity way down and still find some bugs, and the high specificity looks really impressive: "Most of what it labelled as bugs really were!"
Consider the hypothetical situation where your piece of software has 40 bugs in it (but you don't know that).
If you see a report of 10 bugs and 6 of them are real, that looks more impressive than a report of 100 bugs where 20 of them are real, despite the fact that the latter actually found more bugs. In fact, there is a non-trivial chance that the first 5 or 10 bugs you look at in the latter will be false positives.
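For concreteness, here is a quick back-of-the-envelope sketch in Python of that hypothetical. The 40-bug total and the 10/6 and 100/20 reports are just the made-up numbers from above, and the "first few reports are all false positives" probability assumes you inspect the reports in random order.

```python
from math import comb

TOTAL_BUGS = 40  # the (unknown) true bug count in the hypothetical above

def p_first_k_all_false(reported, real, k):
    # Probability that the first k reports you inspect are all false
    # positives, assuming you look at them in random order
    # (hypergeometric: all k draws land on false positives).
    false = reported - real
    return comb(false, k) / comb(reported, k)

for reported, real in [(10, 6), (100, 20)]:
    precision = real / reported   # share of reports that are real bugs
    recall = real / TOTAL_BUGS    # share of the 40 actual bugs that were found
    print(f"{reported:3d} reports, {real:2d} real: "
          f"precision {precision:.0%}, recall {recall:.0%}, "
          f"P(first 5 all false) {p_first_k_all_false(reported, real, 5):.0%}, "
          f"P(first 10 all false) {p_first_k_all_false(reported, real, 10):.0%}")
```

Running it, the 100-report tool finds half the real bugs (versus 15% for the 10-report tool) but has roughly a one-in-three chance that the first five things you look at are all noise, which is exactly the impression problem described above.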
Obviously, if your specificity goes too low, the payoff of inspecting each report starts to become questionable, but I think that to successfully market an analysis tool, you need to sacrifice more sensitivity for specificity than is merited by a rational cost/benefit analysis of detecting the bugs early.