Because web usage statistics are hard to get right, and hard to interpret even if you do a good job on data collection.
To get a reliable sample of web traffic as a whole, you'd need to recruit most of the larger websites (Yahoo!, Google, Wikipedia, etc.) plus a sampling of second- and third-tier sites, and then process the data while accounting for the fact that browsers lie and that every site in your sample is biased in one direction or another. Google, for instance, probably sees a higher share of Chrome than similarly sized sites.

On top of that, there is no standard way of distinguishing browsers other than inspecting the User-Agent string, which is error-prone and not guaranteed to be an accurate representation of what the browser actually is.
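To make that last point concrete, here's a minimal sketch of naive User-Agent sniffing. The UA strings are illustrative examples rather than captured traffic, but they reflect the real pattern: Chrome's UA also says "Safari", and Edge's UA also says "Chrome", so a substring check that tests tokens in the wrong order silently misattributes browsers.

```python
# Minimal sketch of naive User-Agent sniffing, showing why substring
# checks misclassify browsers unless tested in a very particular order.
# The UA strings below are illustrative examples, not real traffic.

SAMPLE_UAS = {
    "Chrome": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
              "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Edge":   "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
              "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36 Edg/120.0.0.0",
    "Safari": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
              "(KHTML, like Gecko) Version/17.0 Safari/605.1.15",
}

def careful_sniff(ua: str) -> str:
    # Order matters: every Chrome UA also contains "Safari", and every
    # Edge UA also contains "Chrome", so the most specific token wins first.
    if "Edg/" in ua:
        return "Edge"
    if "Chrome/" in ua:
        return "Chrome"
    if "Safari/" in ua:
        return "Safari"
    return "Unknown"

def careless_sniff(ua: str) -> str:
    # The same checks in a different order silently misattribute traffic.
    if "Safari/" in ua:
        return "Safari"
    if "Chrome/" in ua:
        return "Chrome"
    return "Unknown"

if __name__ == "__main__":
    for actual, ua in SAMPLE_UAS.items():
        print(f"{actual:7s} careful={careful_sniff(ua):7s} careless={careless_sniff(ua)}")
```

Run it and the "careless" column labels Chrome and Edge traffic as Safari. And that's before browsers that deliberately spoof their UA, bots pretending to be browsers, or newer mechanisms like Client Hints that aren't sent by everyone. Any stats you see are only as good as the parsing heuristics behind them.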