
And is the net result of those teams that they find, on average, fewer than one security vulnerability per year? That KVM has, on average, fewer than one security fix per year?

To quote the KVM escape Google Project Zero published in 2021 [1]:

"While we have not seen any in-the-wild exploits targeting hypervisors outside of competitions like Pwn2Own, these capabilities are clearly achievable for a well-financed adversary. I’ve spent around two months on this research, working as an individual with only remote access to an AMD system. Looking at the potential ROI on an exploit like this, it seems safe to assume that more people are working on similar issues right now and that vulnerabilities in KVM, Hyper-V, Xen or VMware will be exploited in-the-wild sooner or later."

A single, albeit highly capable, individual found a critical vulnerability in 2 months of work. KVM was already mature and already the foundation of AWS at that time, and people were already insisting it must be highly secure: it is such a high-value target, and only an incompetent would poorly secure a high-value target, so by that reverse logic it must be secure. Despite that, it took 2 person-months to find an escape. What can we conclude? That they actually are incompetent at security, because they did poorly secure a high-value target, and that the entire train of logic is just wishful thinking.

CrowdStrike must have good deployment practices because it would be catastrophic if they, like, I dunno, mass-pushed a broken patch and bricked millions of machines, and only an incompetent would use poor deployment practices on such a critical system; therefore they must have good deployment practices. Turns out, no, people are incompetent all the time. How critical a system is turns out to be almost entirely divorced from whether it is actually treated as critical; that takes good processes, which are emphatically and empirically not the norm in commercial IT software as a whole, let alone in commercial IT software security.

That quote further illustrates that, despite how easy such an attack was to develop, no in-the-wild exploits were observed. So the presence or absence of known vulnerabilities and "implicit 7-figure bounty"s is no indication that exploits are hard to develop. The entire notion of some sort of bizarre ambient, osmotic proof of security is just wrong-headed. You need actual, direct audits that turn up nothing to establish concrete evidence for a level of security. If you put a team with a $10M budget on it and they find 10 vulnerabilities, you can be fairly confident that the development process cannot weed out vulnerabilities that take $10M, or possibly even $1M, of effort to identify. Only repeated failures by competent teams at a given level of effort establish any sense of a lower bound.

Actually, now that I am looking at that post, it says:

"Even though KVM’s kernel attack surface is significantly smaller than the one exposed by a default QEMU configuration or similar user space VMMs, a KVM vulnerability has advantages that make it very valuable for an attacker:

...

Due to the somewhat poor security history of QEMU, new user space VMMs like crosvm or Firecracker are written in Rust, a memory safe language. Of course, there can still be non-memory safety vulnerabilities or problems due to incorrect or buggy usage of the KVM APIs, but using Rust effectively prevents the large majority of bugs that were discovered in C-based user space VMMs in the past.

Finally, a pure KVM exploit can work against targets that use proprietary or heavily modified user space VMMs. While the big cloud providers do not go into much detail about their virtualization stacks publicly, it is safe to assume that they do not depend on an unmodified QEMU version for their production workloads. In contrast, KVM’s smaller code base makes heavy modifications unlikely (and KVM’s contributor list points at a strong tendency to upstream such modifications when they exist)."

So, this post already post-dates the key technologies tptacek mentioned that supposedly made modern hypervisors so "secure", such as: "everything uses the same small KVM interface" and "Maximalist C/C++ hypervisors have been replaced with lightweight virtualization, which codebases are generally written in memory-safe Rust".
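
(For context on what "the same small KVM interface" refers to: user-space VMMs such as QEMU, Firecracker, and crosvm drive KVM through a handful of ioctls on /dev/kvm and the file descriptors derived from it. Below is a rough, minimal sketch of that surface in C; it is illustrative only, with error handling, register setup, and guest code all omitted, and it is not code from any of those projects.)

    /* Minimal sketch of the /dev/kvm ioctl surface that user-space VMMs
     * build on. Illustrative only: error handling, register setup, and
     * guest code are all omitted. */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        int vm = ioctl(kvm, KVM_CREATE_VM, 0);        /* one fd per guest */

        /* Back 64 KiB of guest-physical memory with anonymous host memory. */
        void *mem = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        struct kvm_userspace_memory_region region = {
            .slot = 0,
            .guest_phys_addr = 0,
            .memory_size = 0x10000,
            .userspace_addr = (uintptr_t)mem,
        };
        ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);     /* one fd per vCPU */

        /* kvm_run is the shared structure through which exits (port I/O,
         * MMIO, halt, ...) are reported back to the VMM for emulation. */
        int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
        struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpu, 0);

        /* With no guest code loaded the vCPU exits almost immediately;
         * a real VMM loops on KVM_RUN and handles each exit_reason. */
        ioctl(vcpu, KVM_RUN, 0);
        printf("exit_reason: %u\n", run->exit_reason);

        close(vcpu);
        close(vm);
        close(kvm);
        return 0;
    }

Everything behind those ioctls executes in the host kernel, which is the attack surface the quoted post is concerned with; the VMM on the user-space side of that boundary is the part that crosvm and Firecracker moved to Rust.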

KVM plus Rust for the VMM, and despite that, one person on the Google Project Zero team broke its security in 2 months. Goes to show how effective and secure it actually was after those vaunted improvements, and how my prediction that it would be easily broken despite such changes was correct, whereas tptacek got it wrong.

[1] https://googleprojectzero.blogspot.com/2021/06/an-epyc-escap...




I don't think you understand the argument, which is not that the systems component stack for virtualized workloads is entirely memory safe, but rather that the world Theo was describing in 2007 no longer exists. For instance, based on this comment, I'm not sure you understand what Theo was referring to when he described the second operating system implicated in that stack.


[flagged]


I've written roughly 250 words in this whole thread, following your original response to me. You've written 450 words in this single comment, which purports to summarize my argument. You're starting to sound like Vizzini. Unfortunately for you: I have built up a resistance to iocane powder, largely by actually reading the code you're talking about.

My experience, when simple, falsifiable, or technically specific arguments are met with walls of abstraction, is that there's not much chance a productive, well-informed debate is about to ensue.

I made my point about the currency of Ceiling Theo's take on virtualization. I'm happy with where it stands. I don't at this point even understand what we're arguing about, so here's where I'll bow out.


I literally presented a direct counterexample falsifying your claim that KVM + Rust results in highly secure virtualization. Even though you have rejected even attempting to define what "highly secure" would mean, by no reasonable metric is an escape that takes 2 person-months highly secure.

All you have done is present things that you imagine improve things, and then demand I prove your imagination wrong without presenting any actual empirical evidence for your position.

They rewrote a component in Rust, so my imagination says it just has to be very secure. 2 person-months. Fat lot of good that did. New boss, same as the old boss. Just like I said.



