What's amazing about this bug is that at every step you learn something that makes Pinkie Pie more terrifying while simultaneously making the Chrome security model sound more and more forbidding.
In an upcoming post, we’ll explain the details of Sergey Glazunov’s exploit, which relied on roughly 10 distinct bugs. While these issues are already fixed in Chrome, some of them impact a much broader array of products from a range of companies. So, we won’t be posting that part until we’re comfortable that all affected products have had an adequate time to push fixes to their users.
I was wondering what was taking them so long; as I recall, the blog posts at the time promised that disclosure and a postmortem would come “soon”. Now that we have some detail, I'm lapping it up.
The post is great in itself (clear and easy to follow), but the constant marketing speak about how great Chrome is, bugs notwithstanding, gets on my nerves, to be honest. Yes, Chrome is a very good browser, but I don't need to read that in every paragraph in various forms... especially in a technical article.
It also looks to me like devs commit code more lazily because Chrome has a strong sandbox model around various components. As a result, it seems easier to find many bugs that, when combined, bypass the sandbox, as shown here.
There is no way to explain how awesome Pinkie Pie's exploit is without simultaneously explaining how intricate Chrome's security model is.
A great way to market a browser is to have a security model so interesting/effective/intricate that any description of a working exploit will also serve as marketing.
I disagree. Marketing does not have to be in everydamnblogpost.
It's just annoying ;-)
While they attempt (and apparently succeed in the attempt) to make you believe that exploiting Chrome is exceptional and that it's such a super-high-security program:
The bottom line is, 2 guys showed up with a complete remote exploit of Chrome. And there are more exploits that are obviously unreleased, and some that will get released each year.
That is the true bottom line.
So again: the article is nice and clear, and the exploit is a good pony job as well, but the marketing behind it makes the read annoying. It's a trend, and it's not just Google. You even justify it as if marketing were a required thing to have, and if you don't try to do it, you're just missing out. Well, I digress.
The only place I see some rhetoric is the second sentence of the first paragraph, the second paragraph, and the first sentence of the second to last. It's tame: it emphasises the exploit being very involved, which is well supported by the rest of the report. Everything else is necessary detail that describes the progression of the exploit from Pinkie Pie's point of view.
Your contributions, on the other hand, are much more content-free, being mostly value judgements against Chrome's PR or the supposed overconfidence of their programmers. And while you do brush on more technical matters, you do so by name-dropping products rather than being informative and describing the relevant security property.
But I'm curious about this equation, interesting == effective == intricate. Intricate == complex, right? So, the exploit certainly reveals that Chrome's security model is complex. And this is supposed to be a good thing? Seems like a good thing, if you're Pinkie Pie...
I too am bullish on sandboxing, but I suspect, like all security boundaries that have come before it, that it will be secure in inverse proportion to the amount of functionality that is allowed to pass through it. App developers will poke more and more holes through the sandbox to enable new ways to cater to users. E.g. the WebGL^H^H^H^H^HGPU command buffers channel leveraged by PinkiePie.
Well yeah WebGL is a freaking good target. And NaCl is too.
In fact, when I look at Chrome I look at NaCl and WebGL first. Because they're typical targets.
The Chrome team did make a good attempt at securing their browser, and it works well. Unfortunately, it seems that devs write slightly sloppier code (I mean, some of the bugs used are kind of basic, as if they just didn't care all that much because there's a sandbox).
That's my take, though, and it's very arguable.
I like memory-safe OSes with secure message passing for exactly these reasons. Singularity by Microsoft is a pretty neat implementation of the concept. While it's not bulletproof, it's simple yet (way) more powerful than the hacks we have to go through to sandbox apps on various OSes today.
Except for the memory corruption part. When reading that, I'm more aghast that we're still dealing with these basic bugs like "out of bounds array write leads to ROP chain that executes arbitrary code in the process". That just makes me feel even more that huge million-line codebases of C++, even well-engineered code like Chrome, cannot be trusted, because of the fundamental flaws in the C++ memory model for code that must be secure.
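To spell out why that one class of bug is so catastrophic, here's a minimal, self-contained toy of my own (nothing from Pinkie Pie's actual chain; the Victim struct and names are made up): a single out-of-bounds write silently clobbers an adjacent function pointer, and the next indirect call goes wherever the attacker wants. Real exploits turn that foothold into a ROP chain, but the root cause is the same unchecked write:

    #include <cstdio>
    #include <cstring>

    // Toy layout: an 8-byte buffer sitting right next to a function pointer.
    struct Victim {
        char buf[8];
        void (*on_done)();
    };

    static void expected() { std::puts("expected path"); }
    static void hijacked() { std::puts("attacker-chosen path"); }

    int main() {
        Victim v{};
        v.on_done = expected;

        // "Untrusted input": 8 bytes of filler followed by an attacker-chosen
        // address. The copy below trusts the attacker's length instead of
        // sizeof v.buf, which is the whole bug.
        unsigned char payload[8 + sizeof(void (*)())];
        std::memset(payload, 'A', 8);
        void (*target)() = hijacked;
        std::memcpy(payload + 8, &target, sizeof target);

        std::memcpy(v.buf, payload, sizeof payload); // out-of-bounds write (UB)
        v.on_done();                                 // control flow is now the attacker's
    }

A bounds-checked language rejects or traps on that copy; C++ just keeps going.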
Here's a paper describing how to escape from a VM using memory errors. They're causing memory errors by putting a lit light bulb close to the memory chips:
You don't write them in C. You write them in a not-yet-existing language that allows low-level, but safe, access. (Prototypes of this language certainly already exist, I'm not convinced any are ready for this level of prime time.) You probably also have some additional hardware support not yet existing. And while, yes, deep at the heart of the system there will be something or some set of somethings that, if screwed up, could do something like memory corruption, it will be made as small as possible, verified as rigorously as possible, and get the tar beaten out of it until it's as safe as humanly possible.
This will not be a security paradise, because there's plenty of other ways to screw up. Even if we magick a perfect capabilities-based system into existence in 2040, with every desirable property that is promised fully manifested, programmers will still fail to correctly use it, because security is profoundly a Hard Problem. But the same freaking buffer exploit for the ten millionth time should be a thing of the past. (Library support should also be well on its way to making cross-site scripting a thing of the past, too.)
I definitely don't think that eliminating memory corruption vulnerabilities will produce security shangri-la. Most of the vulnerabilities we find every day aren't memory corruption.
I certainly didn't mean to imply that you had that belief by any means. My world (much smaller than yours, of course) is utterly dominated by the cross-X/injection complex of security vulnerabilities (cross-site scripting, SQL injection, shell command injection, all the same thing in the end really). I've also lost track of the times I've encountered the moral equivalents of "limited admin permitted to make new user accounts is capable of creating a full admin account and controlling its password" or some equally brain-dead simple privilege escalation that doesn't even involve anything "clever". I was just contextualizing.
There's a whole range of passes being made at this. I think the last ten years have been about the interpreted "scripting" languages and it's become obvious the next major PL niche is one of these safer-yet-systems-level languages, so in addition to the ongoing research it seems to me like there's been a burst of work on these, with more to come. D is among the most mature, and the least revolutionary, with all that entails. Mozilla is doing Rust. Go arguably fits into this area, though I'm not sure it's quite meant for kernels per se. (It is a systems level language, though.)
But given that 2040 was the year tossed in, I was also thinking of the next generation after that, where some of the next-next generation of verification would be folded in. There you're looking at Haskell as the gateway into that world (even though it is not really that verifiable in the strongest sense itself, it gets your foot in the door), and at Coq, Agda, and the other slowly-but-surely increasingly usable proof assistants, which would be useful for a provably-not-corruptible (via normal software means) software kernel.
Though... if one looks at the rate of advance in kernels over the past 30 years and then projects out to the next 30, we get a distressingly high probability of it still being in C. Still, I cautiously optimistically (or pessimistically, depending) think that the hardware revolution we are still only at the beginning of, as we run out of Moore's Law, is going to produce non-C languages that will eventually be irresistible for writing kernels in. There are going to be ever more constraints we want to maintain, and it's going to get harder and harder to maintain them without some sort of language support beyond what C can supply.
He quit because he found type classes to be both insufficient and a lot of trouble for this purpose. My impression is that he would love to work on a new language targeting the same problem but making some different engineering tradeoffs. I hope he manages to secure funding so he can do that, BitC was a very interesting project.
In theory, it's possible to use a formally verified approach to ensure this can't happen, and there is a lot of research into that.
There is a version of the L4 microkernel that has been formally verified which should prevent memory corruption in kernel space, but I don't know the exact details.
This of course won't prevent corruption due to physical sources, such as radiation, but with physical access to a machine you will always be able to gain access.
The first thing about L4.verified is that formal verification, while admirable, doesn't really matter, at least not in a world where exploits are commodities, exploitation a continuous process: if a program is only a few thousand lines long and written with attention to security, the number of vulnerabilities can at worst be counted on one hand, and attackers will find them all for you in short order. If you're Iran and have an attacker capable of pouring millions of dollars into a single hack, it might make a difference; if you're Microsoft, not so much.
The second thing is that while a microkernel (if used with an IOMMU to prevent a malicious driver from DMAing all over your code) appealingly prevents an attacker from exploiting a malloc overflow in some random network driver and immediately gaining full access to anything (the current state of kernel security!-- but performance is key), it doesn't prevent an attacker from using that network driver pwn to hijack the user's Facebook session; full access is appealing, but in a complex system there are many, many "lesser targets" that are just as bad from a user's point of view.
Microkernels and their little cousin sandboxing can help, but the resulting trusted computing base is still much, much larger than we can formally verify in the foreseeable future.
(A sibling of this comment mentioned Singularity, but that's a fairly different beast: instead of proving that fast code is safe, you try to make obviously (memory) safe code fast. The only reason Singularity is able to make interesting performance claims is that it uses verification to completely avoid system calls: pretty cool, but a NaCl-like kernel could do something fairly similar for C; it doesn't really change the correctness/performance tradeoff.)
There's still role based access control and similar models that you can apply. As well as trusted path execution for binaries.
While none of this is bulletproof, it does add a very customizable kernel-level "sandbox". If you can't run your exploit, for example, even though the kernel is vulnerable, well, too bad.
Now, dreaming out loud: let's go code an OS in Rust that mimics most of Singularity and adds some RBAC on top for good measure (even though it's a much lighter version than on traditional OSes, thanks to the system-call avoidance and inherent sandboxing of all apps, as well as the contracted messages).
You have many strong and valid points, but I think it's possible to engineer a world where memory-corruption-related exploits cease to exist. There are many high-level programming techniques that can be used to prevent those, and I was making the point that we can fix the low-level ones too; formal verification is most likely not the way there, but it does exist as a possibility.
That is a very far cry from saying we can live in a world with perfect security, which I think is the point you are trying to make.
The L4 microkernel was verified to faithfully implement the system specification. The system specification is written in (executable) Haskell. http://en.wikipedia.org/wiki/L4_microkernel_family#Current_r... So, how do they know the Haskell specification is correct???
Well, they don't know whether the specification is correct, but the verification, while proving correctness, also proved (because you need these to prove correctness) no buffer overflows, no null pointer dereferences, no unintentional integer overflow, etc.
I agree that it is not useless; however, its usefulness is limited to the implementation level: verifying that the translation from the Haskell specification to the assembly/C/blub implementation is correct.
The higher level question is whether the Haskell specification fully and correctly specifies the desired behavior. In my 30-odd years of experience in Mil/Aerospace, I have never seen a fully and correctly specified set of requirements that could be transliterated into correct executable code. If nothing else, they all have had implicit assumptions. That is the Achilles heel of the IBM "Master Programmer" method, reborn as "outsourcing".
Since the specification is executable Haskell, that implies they wrote Haskell test programs to show that the specification implements the desired behavior.
Writing a program to verify the specification that another program implements... and then claiming a formal proof of correctness of the system is now recursive. Turtles all the way down.
For others: I learned a lot from reading the book "Mechanizing Proof". No knowledge of formal methods is needed (but you will learn something about formal methods while reading). I can't recommend the book highly enough.
L4 was verified to the tune of about $4.6 million, or $500 per line of code. And that's for a microkernel under 10 KLOC. And that's assuming no changes are made. Ever.
The idea of having a small kernel that is properly verified and properly runs user-level code in a memory-protected manner isn't absurd. And when put into the context of operating system budgets, $4.6 million is completely reasonable. Again, we're also talking about a hypothetical 2040 OS, not something we're going to have working tomorrow.
I didn't mean to imply it's not possible, or won't be possible in the future, just that I don't think it's on the immediate horizon for anything but small kernels. 4.6 million dollars may be reasonable, but if you have to add on another million dollars and another month of verification time for every patch release... it's just not practical yet.
You can use a language with dependent types (types that depend on values, so you can have arrays of type "array of 10 ints", etc.) - it adds some type-level work, but it makes a lot of mistakes fail to even compile.
But complicated type systems are, unfortunately, rarely used in languages suitable for system programming. I only know about ATS in this group actually :-)
Dependent typing is really neat, but it introduces some new problems. Namely, the type checker becomes Turing complete. We have other examples of this level of complexity turning out to be manageable (C++ templates, for example, and the Hindley-Milner algorithm which turns out to be super-exponential in the worst case) but the power of dependent typing is begging to be used and abused to a greater degree.
I have some hope for Rust because their type invariants system (I forget what they're calling it) lets you glue some of this information to variables in a compelling way without necessarily having to solve all the theoretical and practical problems that come with full dependent typing.
Well, here the compiler won't help you with bounded access - you can easily read array[42], and it would compile (and maybe even work... but just a little bit funny ;-)).
With dependent types, a function to get an element from an array might have this type (pseudocode):
get (array : T[n], index : m) : T {n : nat, m : nat, m < n}
which would mean "a function get that takes an n-element array of elements of type T and an index m, where m is a natural number smaller than n, and returns a T". Type-level naturals and bounded array access are the basic examples of dependent typing; more interesting ones are red-black trees with guarantees about their shape encoded in the type, or some magic for creating DSLs.
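To make that a bit more concrete without a full dependently-typed language, here's a rough C++ analogue I sketched (bounded_get is my own name, and this is nowhere near real dependent types, since the index must be a compile-time constant): the array length lives in the type, and an out-of-range index is rejected by the compiler rather than discovered at runtime:

    #include <array>
    #include <cstddef>

    // The length N is part of std::array's type; the static_assert plays the
    // role of the "m < n" constraint from the pseudocode above.
    template <std::size_t I, typename T, std::size_t N>
    constexpr const T& bounded_get(const std::array<T, N>& a) {
        static_assert(I < N, "index must be smaller than the array length");
        return a[I];
    }

    int main() {
        constexpr std::array<int, 10> xs{};
        constexpr int ok = bounded_get<3>(xs);      // fine: 3 < 10
        // constexpr int bad = bounded_get<42>(xs); // does not compile
        return ok;
    }

Real dependent types go much further, because n and m can be values known only at runtime, with the proof that m < n carried alongside them.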
There's an automatic conversion to int* when necessary, but the array is nominally a distinct type. This is particularly apparent in C++ where you can do things like instantiate a template from the array type and create a compile-time function to return the number of elements.
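For example, a sketch of the C++ trick being described (countof is just a name I picked for the compile-time element-count function):

    #include <cstddef>

    // The element count N is carried in the array's type, so a template can
    // recover it at compile time with no runtime cost.
    template <typename T, std::size_t N>
    constexpr std::size_t countof(const T (&)[N]) noexcept { return N; }

    int values[7];
    static_assert(countof(values) == 7, "size is known from the type alone");

    int main() {
        int* p = values;   // the usual implicit decay to int* still happens
        return p[0] + static_cast<int>(countof(values)) - 7;
    }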
In the end it all boiled down to old-style plugins. All the exploits were used to finally install and run an old-style NPAPI plugin.
Just like ActiveX, these are binary code that usually runs outside of any sandboxing, for compatibility reasons.
With NaCl, or just the advances in HTML and related technologies, this kind of plugin really should have outlived its usefulness by now, and maybe it's time to drop support - at least support for all plugins but a few whitelisted ones from the old days.
Like Flash and maybe QuickTime (though both have a terrible security track record).
Though considering the persistent piling-up of bugs that happened here, for all we know there would have been a different exploit somewhere else that worked even without NPAPI. Dropping it would just close one more attack surface.
Wait until you see the other one. There are a surprising and depressing number of ways to get a browser to run native code on legacy operating systems.
Yes, plugins should go away. No, that won't stop this kind of thing :/.
Let's be fair here: this particular exploit used an NPAPI installation to finally get full access, but there were many other significant breaches of security before that. It seems the final step was likely the one with the most potential vulnerabilities, with NPAPI just being the easiest way to install the payload.
Pinkie's not the only brony hacker either. Consider this Rainbow Dash fan: http://www.reddit.com/r/IAmA/comments/sq7cy/iama_a_malware_c... ... and in general, there are a plethora of bronies scattered across the startup world and CS academia (full disclosure: myself included).
This really takes you into the mind of a hacker (the malicious kind). Judging from what I saw, it seems they combine a ton of small exploits to produce a major security breach. The amount of understanding of the underlying system you need in order to put these exploits together is mind-boggling.
I don't like how you vilify him and call him malicious. Nothing about this was malicious. He even gave it to Google for far less than it was worth. This was a legitimate audit and demonstration, and it is wrong to associate anything negative with it.
What do we do against people like this?
You're asking the wrong question. Remember, he didn't put those bugs there. He didn't break anything. It was already broken. He just found the hole by reading exactly what you gave him.
What you should be asking is, how do we stop making software with vulnerabilities. The goal is to make it so that there is no hole to find, not to get rid of the hole-finders.
Just to play angel's advocate, your parent could have been referring to an abstract malicious hacker, who would probably think the same way and use the same techniques as Pinky. The "mind of a hacker (the malicious kind)" is probably very much like the mind of Pinky, except for the parts governing ethics. And he asks what to do "against people like this", not what to do against Pinky himself.
But you are right, and it is valuable to point out that neither Pinky nor Homakov nor any other talented whitehat is in any way malicious.
IMHO, as Pinkie demonstrated, there isn't anything foolproof you can do against a malicious hacker. Having sponsorships and/or incentives for white-hat hackers to find and report exploits seems to be the best answer.
If this stance were adopted into the wider software development community, would it turn more black hat hackers into white hats?
Honestly, the overall process reminds me of what I do some days trying to automate processes which were not designed to be automated. I know what I want to do, I just have to figure out how, step by small step, using things not in the way they were intended to be used.
No, crackers defeat software protections. No one in the security industry (legitimate or otherwise), uses the term cracker for anything other than that, despite what Eric S Raymond et al would like you to believe.
Well, if you start from the last step (I want to load an NPAPI extension because I can gain control from one) and work backwards it seems a little less like stumbling in the dark.
I liked the confirmation prompt bug though, that was icing on the cake.
It is scary that once you have a foothold it just becomes a matter of time until someone figures out how to use it to piggyback on to more unrestricted space.
It's available to extensions, I think (there's also the higher-level WebGL, which you must be aware of), and it requires whitelisted graphics drivers. As you can see, graphics acceleration offers a huge attack surface (memory unsafety in C++ code, plus logic bugs at highly privileged levels like graphics drivers, firmware, and the hardware itself). Some of these layers realistically won't be protected until they have proper IOMMU support.