What engineering did they do to reduce the security risk? As much as I like WebGL as a dev, Microsoft's arguments against feeding arbitrary machine code to buggy graphics cards that have kernel-level memory access privileges... seemed a bit convincing.
Just want to reply to say I also would like to hear an answer to this question. Something I've wanted to do for a while is write a fuzzer [1] that puts together arbitrary garbage shader code and runs it with weird WebGL operations, looking for exploitable crashes. I would expect a ton of bugs to be found, but then again the monetary barrier to entry might be high, considering the differences between hardware.
It also looks like the good folks at Mozilla have already been doing this to some degree [2], presumably shrinking the untested threat surface considerably (man I love those guys).
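For the curious, here's a rough sketch of what the simplest version of that fuzzer could look like, written in TypeScript against the standard WebGL API. This is only a toy under my own assumptions: a real fuzzer would mutate a GLSL grammar and drive actual draw calls, not just splice expression strings through the compiler.

    // Toy shader fuzzer sketch: generate semi-random GLSL expressions,
    // push them through the browser's WebGL validator/compiler, and log
    // whatever gets accepted (or crashes the tab).
    const canvas = document.createElement("canvas");
    const gl = canvas.getContext("webgl");
    if (!gl) throw new Error("WebGL unavailable");

    const ops = ["+", "-", "*", "/"];
    const terms = ["gl_FragCoord.x", "1.0", "sin(gl_FragCoord.y)", "1.0/0.0"];

    function randomExpr(depth: number): string {
      if (depth === 0) return terms[Math.floor(Math.random() * terms.length)];
      const op = ops[Math.floor(Math.random() * ops.length)];
      return `(${randomExpr(depth - 1)} ${op} ${randomExpr(depth - 1)})`;
    }

    for (let i = 0; i < 1000; i++) {
      const src = `precision mediump float;
        void main() { gl_FragColor = vec4(vec3(${randomExpr(4)}), 1.0); }`;
      const shader = gl.createShader(gl.FRAGMENT_SHADER)!;
      gl.shaderSource(shader, src);
      gl.compileShader(shader);
      if (gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        // Accepted: the interesting next step would be linking, drawing,
        // and watching for driver resets / context-loss events.
        console.log("accepted:", src);
      }
      gl.deleteShader(shader);
    }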
> Just want to reply to say I also would like to hear an answer to this question.
The answer is that they do what a business is required to do: let the market decide. Shockingly, the market does not want actual security; it wants lip service to make people feel safe, and it wants shiny features.
It would be an interesting project. You should go ahead and test the current implementations! Actually, would you even need WebGL to hunt for GLSL exploits?
I don't know anything about what Microsoft has done in particular. But I can tell you what other WebGL implementations do, for example: rewrite shaders to ensure their memory accesses are safe, reject shaders that are dangerous (even if they would be valid GLSL in general), validate input to the graphics card (e.g., that buffers are bound, avoiding any dependence on the GL driver to check that), do fuzz testing, maintain blacklists of known buggy drivers, etc.
I would guess Microsoft is doing much the same, but it does have the extra advantage of only caring about one OS and also owning that OS.
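To make the "validate input to the graphics card" point concrete, here is roughly the shape of the index check a browser can perform before handing a draw call to the driver. The names and the naive full scan are my own; real implementations cache the maximum index per buffer rather than rescanning.

    // Sketch: before forwarding drawElements to the driver, verify that
    // every index in the bound element buffer fits inside the vertex
    // buffers actually bound, instead of trusting the driver to check.
    function validateDrawElements(
      indices: Uint16Array,  // contents of the bound ELEMENT_ARRAY_BUFFER
      vertexCount: number    // vertices available in the bound attributes
    ): boolean {
      for (const i of indices) {
        if (i >= vertexCount) return false; // out-of-range read: reject
      }
      return true;
    }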
You know, the drivers could have just gotten better over time. Remember when Vista came out: it moved to a new driver model, and there were lots of bugs... people were worried about untrusted shader code bringing down machines (not quite a security vulnerability, but definitely a DoS!).
These days, the driver model is more mature, the drivers consumers have are a bit more robust, and they probably re-evaluated their worries, which I think is great! Dynamism and flexibility are good in a big corp.
Disclaimer: Microsoft employee, but speaking for myself.
I agree that the threat is present, but rather than restrict freedom it is sometimes better to place decisions in the user's hands. There are security threats everywhere, even beyond the software or hardware level (e.g., phishing for passwords). I think that rather than not implement WebGL, it would be better to ask whether the user trusts the domain (just as Chrome does with any plugin).
As of right now, with WebGL enabled, there is no confirmation of execution... it just happens. A one-time confirmation of execution per domain per code load might be a good option.
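To make that concrete, here's a hypothetical page-level sketch of the idea; a real implementation would have to live in the browser itself, not in script the page controls, since a page could trivially skip this check.

    // Hypothetical "ask once per origin" gate around WebGL context creation.
    function getTrustedWebGLContext(canvas: HTMLCanvasElement) {
      const key = `webgl-allowed:${location.origin}`;
      if (localStorage.getItem(key) === null) {
        const ok = window.confirm(`Allow ${location.origin} to run WebGL?`);
        localStorage.setItem(key, ok ? "yes" : "no");
      }
      return localStorage.getItem(key) === "yes"
        ? canvas.getContext("webgl")
        : null;
    }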
> Microsoft's arguments against feeding arbitrary machine code to buggy graphics cards
That would indeed be a bad idea. This is not how WebGL works. There's a translation layer that gets the WebGL calls and relays them to the graphics drivers after determining the calls are safe.
This layer can have bugs, of course, like other sandboxes (JavaScript, Flash, etc.).
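One visible consequence of that layering: when something underneath does fail, the page gets a context-loss event rather than any direct driver state. A minimal sketch using the standard WebGL context events:

    // The page never talks to the driver directly; if the translation
    // layer or the driver resets, the page just receives an event.
    const canvas = document.querySelector("canvas")!;
    canvas.addEventListener("webglcontextlost", (e) => {
      e.preventDefault(); // signal that we intend to handle restoration
      console.warn("GL context lost: the layer reset, the page survived");
    });
    canvas.addEventListener("webglcontextrestored", () => {
      // Recompile shaders and re-upload buffers here.
    });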
Shader validation in ANGLE and blacklisted drivers are how this is protected against in Chrome and Firefox. Microsoft was never really clear about what the security issues were; it really felt like a bunch of FUD.
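Where the browser exposes the WEBGL_debug_shaders extension (it's often gated behind flags), you can inspect ANGLE's output yourself. A small sketch; the shader is my own contrived example:

    const canvas = document.createElement("canvas");
    const gl = canvas.getContext("webgl")!;
    const dbg = gl.getExtension("WEBGL_debug_shaders");
    const sh = gl.createShader(gl.FRAGMENT_SHADER)!;
    gl.shaderSource(sh, `precision mediump float;
      uniform float u[4];
      void main() {
        float s = 0.0;
        for (int i = 0; i < 4; i++) s += u[i];
        gl_FragColor = vec4(s);
      }`);
    gl.compileShader(sh);
    if (dbg) {
      // The translated source is what ANGLE actually hands to the driver:
      // renamed symbols, validated constructs, any safety rewrites.
      console.log(dbg.getTranslatedShaderSource(sh));
    }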
tl;dr: While it was talking up the security risk of WebGL, Microsoft was allowing Silverlight to permit untrusted code to access graphics APIs in exactly the same way. Chrome validates everything before calling the actual driver APIs, so the opportunities for fuzzing are limited.
If it's a user-accessible whitelist, that doesn't actually add much security, of course, because it's pretty simple to get users to add to the whitelist ("To play our awesome game online, open up the preferences dialogue and ...").
Ultimately, though, that is the difference between a drive-by infection and requiring user interaction. In the same way that people on the whole are now too savvy to download the super-awesome-screensaver or whatever, plenty are smart enough not to say yes to some prompt.
The security model of Silverlight, dare I say, is superior to that of WebGL. The guy's blog post doesn't actually address the question "Is WebGL a worrying attack vector?"; instead it raises a separate concern about Silverlight.
If we learned anything from ActiveX, it's that "click to continue" is an ineffective defense against malware on the web-- at least for most people.
I don't think it's 100% impossible that there will be a WebGL exploit against some driver or other at some point, but I think the odds have been greatly exaggerated by Microsoft and others. The reality of modern graphics cards is that most of the action happens on the card, not on the host CPU. Combine that with Intel's recent IOMMU technology and you find that exploits usually aren't that interesting. Even if you can get control of the card, you can't do much with it.
Of course, there could be a flaw in the host driver, but it would have to be a really unusual flaw. WebGL itself stops almost all invalid input (and some unsupported valid input) from being sent to the driver, so you'd have to find a perfectly reasonable set of polygons that still triggered an exploit. It would be similar to finding an mp3 that, when played, hacked your sound card driver. It's not impossible, but it's getting into tinfoil hat territory.
Well, there is the case where you're redirected to a page you're not interested in (like popups). If confirmation were required, users would mostly close such pages without being exposed.
It would still be useful to ask users before enabling WebGL on a site for the first time. I would certainly think twice before enabling WebGL if a site doesn't seem to need it. It could work like Flash content with the Flashblock extension.