What's the point in a summary that ignores literally hundreds of tests? Tests that, as you point out, already exist and so would require absolutely no effort to run.
Why have a row in the table claiming IE scores 100% on SVG when the truth is very different:
The Google plugin that transforms SVG to Flash passes twice as many tests as the IE9 preview, and Opera passes three (nearly four) times as many. Yet Microsoft felt it was useful to list a hand-picked subset of tests that shows them doing well.
What's the point in creating new tests to cover things that existing tests already cover?