Ted's post was a reaction to my overconfident assertion that even with a custom allocator, Rust would prohibit code from compiling that could lead to Heartbleed-like conditions. Furthermore, he was pointing out that random dorks like me talking up Rust don't seem to actually know what was going on. I was under the impression that Heartbleed was purely out-of-bounds access, and it's not quite just that. (Still, I find it highly unlikely any memory safe language would end up with a real world program reusing the same buffers for SSL keys and user buffers, but it's possible.)
Had I been a bit more careful in my claim, there'd have been no question. Ted was technically accurate (the best kind), but a lot of the resulting discussion was of the form "bad coders can always write security holes, therefore a language isn't the fix, so C's OK". This even though major products, like everything MS ships, have critical CVEs almost exclusively due to memory safety violations, not arbitrary bad logic. Further, tptacek likes to point out that memory safety is just one class of sec bugs, and that there are frillions of other bugs in other classes. My suspicion is that this reflects a bias towards app-level stuff (i.e. command injection in a billion web apps), not systems. I suppose if you get pwned it doesn't matter how, but a world free of all those MS memory-safety RCEs seems like a qualitatively different one.
But hey, I'm just some arbitrary weakling, and I find it hard to challenge people that are both smarter and drastically more experienced than I.
Writing widely-deployed infrastructure (web browsers, networking stacks, device drivers, openssl libraries) in C will be less and less justifiable over time. Rust is far from optimized right now. Rust is probably not going to be a language that you will script in. But it is a language that will possibly give Mozilla the world's most secure web browser by a significant margin if they succeed in writing the most bug prone components in Rust.
It takes more thought to write Rust programs. If you want to write safer widely-deployed infrastructure, you should learn about it. It may not be your best option right now, but its ideals are those that we absolutely need to strive for. Acting macho about C and the ability to write it safely does not lead to fewer RCEs in the world. It leads to more, because newcomers may see such people as examples to be emulated, long before they are capable of doing the same.
Note that there are no familiar critics who say that Rust will not prevent some bugs. Note that there are no familiar proponents who claim that Rust will prevent all bugs. It is an implementation that strives to make progress on a goal that we can all agree on: the safety of our programs with minimal performance degradation.
> But it is a language that will possibly give Mozilla the world's most secure web browser by a significant margin if they succeed in writing the most bug prone components in Rust.
A memory safe language alone isn't enough. Look at Java. Isn't that supposed to be a memory safe language? And yet it's constantly got security issues.
I would suggest that the security issues in Java are due to bugs in the various JVM implementations, which are likely written in C, right?
Not to say that Java is going to eliminate all bugs. However, it should eliminate all bugs to do with, e.g., reading/writing out of the bounds of an array and resulting in RCE due to bugs introduced in the application code. Reading/writing out of the bounds of an array will still be a bug, but not one that allows arbitrary code execution (at least not in the same way as it may for a language like C).
And many of them, from a quick glance, are actually due to complicated sandboxing policies. That is, RCE was an explicit feature, but they found it hard to get right.
In fact, one of the first real security holes I ever found was in the 1.0 version of the CLR. You could load a function pointer to an un-JIT'd function, then later on use that to safely jump to other code. This is technically an RCE, but the user has to be executing your code first anyways. That is, if the CLR and Java didn't set out to run "partially trusted" code with no help from the OS, these wouldn't have been problems at all.
Other languages like C, Ruby, etc. simply don't have this "partially trusted" concept so there's nothing to attack. (Well I guess things like Native Client or VMware qualify, but they're at a better abstraction layer than JVM or CLR permissions.)
When I was an undergrad at UW, I engineered a vacuum bug that took advantage of a JDK 1.1 bug in constant pool verification to suck out the user's in-memory environment configuration (so like PATH variables and such).
But most security holes these days are not flaws in the JVM/CLR, but logical errors made in the application, e.g. allocating a buffer and reusing it; this says nothing about memory safety at the VM level at all!
And at that, no one really trusts JVM security anymore, preferring to sandbox the entire operating environment in a heavyweight or lightweight VM.
> But most security holes these days are not flaws in the JVM/CLR, but logical errors made in the application, e.g. allocating a buffer and reusing it; this says nothing about memory safety at the VM level at all!
But most applications are written in memory-safe languages, so it's not surprising that most vulnerabilities are found in non-memory-safety related areas. The more interesting statistic is the number of critical security bugs that are memory-safety related in non-memory-safe languages.
That doesn't address what I said in the sentence you quoted at all. We were discussing Java and vulnerabilities that arise given flaws in the JVM. You are talking about something else that is completely different.
Neat! In 2003 I was writing an obfuscator for .NET, leading me to explore quite a bit. It was fun.
But app level bugs aren't what gives Java a bad name, are they? When someone says, like the GP, that "what about Java, that's got tons of security issues", that's almost certainly from its use as a browser plugin. Otherwise everyone would be saying the same about every language out there.
We found lots of bugs in Microsoft's Java implementation at the time; they offered us lots of money for our test suite but Brian wanted to do a startup :) Anyways, if you want to break something, you can usually get there with fuzz testing (but these days, most sane organizations will fuzz themselves).
Ya, when people say Java is insecure, they usually mean the Java plugin has insecure interfaces. As the browser + Java combination becomes increasingly rare, it fades from our memory. There is nothing insecure or secure about the language, memory safety actually works...but it's only one small part in having a secure environment.
I worry about Rust, it pushes the safety card much more aggressively, but in reality without full on aggressive dependent typing, they'll only be able to guarantee a few basic properties. The language won't magically make your code "secure", just a bit easier to secure.
> I worry about Rust, it pushes the safety card much more aggressively, but in reality without full on aggressive dependent typing, they'll only be able to guarantee a few basic properties.
Our analysis of the security benefit of Rust comes from two very simple, empirical facts:
1. Apps written in memory-safe languages do not have nearly the same numbers of memory safety bugs (use after free, heap overflow, etc.) as apps written in C and C++ do.
2. Memory safety issues make up the largest number of critical RCE bugs in browser engines.
> The language won't magically make your code "secure", just a bit easier to secure.
Of course it won't magically make your code secure. Applications written in Rust will have security vulnerabilities, some of them critical. But I'm at a loss as to how you can claim that getting rid of all the use-after-free bugs (just to name one class) inside a multi-million-line browser engine in a language with manual memory management is easy. Nobody has ever succeeded at it, despite over a decade of sustained engineering effort on multiple browser engines.
> But I'm at a loss as to how you can claim that getting rid of all the use-after-free bugs (just to name one class) inside a multi-million-line browser engine in a language with manual memory management is easy
I didn't claim it was "easy", just that bugs would still exist and the browser's "securedness" would only increase marginally. The problem is not that this isn't an achievement; it's one of managing expectations (e.g. saying "Rust is secure", which doesn't make sense).
> Nobody has ever succeeded at it, despite over a decade of sustained engineering effort on multiple browser engines.
It is a nice goal, but the question is where is the world afterwards? Will Firefox all of a sudden become substantially more secure and robust vs. its competition, to the extent that it can out compete them and increase its market share significantly?
My brief experience at Coverity makes me guess that you could be getting rid of one class of bugs without necessarily improving the product in any noticeable way...that ya, those bugs were common but not particularly easy to exploit or hard to fix once found.
Not that the effort isn't worthy at all. I'm just a bit cynical when it comes to seeing the tangible benefits.
> I didn't claim it was "easy", just that bugs would still exist and the browser's "securedness" would only increase marginally.
I disagree with the latter. Based on our analysis, Rust provides a defense against the majority of critical security bugs in Gecko.
> It is a nice goal, but the question is where is the world afterwards? Will Firefox all of a sudden become substantially more secure and robust vs. its competition, to the extent that it can out compete them and increase its market share significantly?
You've changed the question from "will this increase security" to "is improved security going to result in users choosing Firefox en masse". The latter question is a business question, not a technical question, and not one relevant to Rust or this thread. At the limit, it's asking "why should engineering resources be spent improving the product".
Rust is a tool to defend against memory safety vulnerabilities. It's also a tool to make systems programming more accessible to programmers who aren't long-time C++ experts and to make concurrent and parallel programming in large-scale systems more robust. The combination of those things makes it a significant advance over what we had to work with before, in my mind.
> My brief experience at Coverity makes me guess that you could be getting rid of one class of bugs without necessarily improving the product in any noticeable way...that ya, those bugs were common but not particularly easy to exploit or hard to fix once found.
It is true that exploitation of UAF (for example) is not within the skill level of most programmers and that individual UAFs are easy to fix. But "hard for most programmers to exploit and easy to fix" doesn't seem to be much of a mitigation. For example, the Rails YAML vulnerability was also hard to exploit (requiring knowledge of Ruby serialization internals and vulnerable standard library constructors) and easy to fix (just disable YAML), but it was rightly considered a fire-drill operation across Web sites the world over. The "smart cow" phenomenon ensures that vulnerabilities that start out difficult to exploit become easy to exploit when packaged up into scripts, if the incentives are there to do so. Exploitable use-after-free vulnerabilities in network-facing apps are like the Rails YAML vulnerabilities: "game-over" RCEs (possibly when combined with sandbox escapes).
>the browser's "securedness" would only increase marginally
The developers' claim is that more than half of all security bugs are bugs due to memory safety issues and that Rust will solve these. More than halving the number of bugs doesn't sound marginal to me.
I'm not sure why you say this. Go look over Microsoft's CVEs for the past two years. I did, and, apart from the CLR-in-a-browser scenario, nearly every single critical CVE was a direct result of a memory safety violation.
In other words, if we magically went back in time and wrote all MS products in Rust instead of C++, their CVE count for RCEs, their famous worms, etc. would all disappear (except in the cases where they explicitly opted into unsafe features).
Those worms would disappear, but you can't say for sure that the crackers wouldn't just find other vulnerabilities to focus their efforts on exploiting. That is to say, having gone through the cracking process myself (for research purposes, of course!), you find the lowest-hanging fruit you can, and once that fruit is gone you move on to the next-lowest fruit.
Back in the 90s and early 00s, a lot of the low fruit was buffer overflows or forged pointers. Then we got serious about fuzz testing and static analysis, and now they are picking at other fruit (which is why Heartbleed was so weird).
OK then look at the CVEs for the last couple years. The reward for finding a 0day RCE in a MS product is so high, I don't think it's accurate to say it's just the low hanging fruit.
Many of the Java security bugs are in Java code. Relevant to this discussion, "Jetbleed". The many other SSL breaks in Java. A variety of issues involving deserialization of untrusted data, ala Rails yaml bug. Bugs in the JVM itself are more the exception than the rule.
I don't think Java proves your point. The major impacting bugs in Java itself tend to surround the idea of running arbitrary code with a complicated sandbox. The embedded CLR-in-the-browser suffered many such bugs too (in fact, out of all the severe MS CVEs that aren't memory related, most were sandbox escapes). So that's probably more of an indication not to build complicated sandboxes that rely on fine-grained classloading permissions systems.
The other Java bugs are ones that'd plague any language: SQL injection, rules-engines-gone-wild, etc.
> So that's probably more of an indication not to build complicated sandboxes that rely on fine-grained classloading permissions systems
Web sandboxes are full of holes too. That's why modern browsers have sandboxes within sandboxes. I don't think an HTML5 sandbox is less complicated than the JVM sandbox.
> A memory safe language alone isn't enough. Look at Java. Isn't that supposed to be a memory safe language? And yet it's constantly got security issues.
Sure. Nobody is claiming (or should claim) that memory safety is a solution to all security problems. That's independent of the claim that memory safety is an effective defense against common vulnerabilities in C and C++ programs.
A large number of the Java CVEs you're thinking of are bugs in the Oracle JVM, meaning they are bugs in C++ code, which is not in a memory-safe language.
Actually, I think a lot of the CVEs are in the shipped java classes. But it's sometimes hard to tell, with such fine descriptions as
Unspecified vulnerability in Oracle Java SE 6u85, 7u72,
and 8u25 allows local users to affect confidentiality,
integrity, and availability via unknown vectors related
to Deployment.
Yeah, I started to look through OpenJDK commit history to get a better sense of the proportions here, but I haven't gotten far enough to have any useful data.
> A memory safe language alone isn't enough. Look at Java. Isn't that supposed to be a memory safe language? And yet it's constantly got security issues.
Has it? I can't remember the last time I saw a security advisory for Tomcat or Jetty (there are some but they're rare), in stark contrast to Apache or Nginx.
> Rust is probably not going to be a language that you will script in.
No; you have D for that.
EDIT: Seriously, for the grumpy down voter: I've seen quite a few programmers write that they use D even for things that are normally considered as scripting tasks.
As to scripting in D: I don't write a lot of D code anymore, but thanks to snappy compile speeds and the 'rdmd' compiler driver (which automatically tracks dependencies), it really does have a sweet spot for writing little programs where you'd like the benefits of 'scripting languages' (no makefile hassle, short edit-run cycles) but need just a bit more runtime performance.
Unfortunately, this point of view has little merit.
Security is all or nothing. You can't have a little bit of this and a little bit of that. Unless the parts of the web browser that can be "influenced" by an external attacker (directly or indirectly) are written 100% in a memory safe language, you simply have no real security but the illusion of it.
And this is how that hypothetical browser fails, and why it will never amount to anything re: security, since it's gonna end up using a gazillion C libraries, all of them full of bugs and possibly vulnerable to security exploits.
One could say that Rust also fails by allowing "unsafe" code in its core design but it's still too early to see how that will play out.
Security is not all or nothing. You can definitely say that X is more secure than Y even if both have bugs, so long as X's bugs are less critical and less frequent.
As an example, I would happily claim that nginx is more secure than wordpress or the average php website written with mysql_query in the 90s. Does nginx have bugs? Probably somewhere in there. Are they as likely to be found, exploited, or (when exploited) lead to as serious issues? I doubt it.
Security is often about many many levels. A good example of this is Chrome, its sandboxing, operating system memory randomization, and user privileges. When someone finds a bug in v8, to turn it into root on the box requires bugs in all those layers (see writeups for pwn2own).
Generally, an improvement in security at any layer will reduce the impact of bugs at other layers. I'd absolutely rather have a browser written 20% in rust than 0% in rust.
I think it depends... at a certain level, you need software to be as fast as possible for the systems above... I think more user-space applications should be written in safer languages (C#/.Net, Java, Go, Rust, Node.js, Python, ...) and low-level drivers, kernels, rendering engines and the like should be in the lowest-level possible.
Unless we want everything to run as poorly (relatively speaking) as a web-application in a browser on a less-than top of the line computer... User-level applications are written on top of lower-level systems... the lower they are, the more important raw speed, and limited use of memory is.
I like to say that of course Rust prevented Heartbleed, because we don't currently support wrapping malloc ;). That's of course a bit glib.
While it's true that Rust gives you a lot of assistance with writing safe code, given that unsafe exists, and that we, as a systems language, need to give you underlying access to the machine, Rust will allow you to do all kinds of stupid things if you vouch to the compiler that you're going to handle the safety aspect. This means that ultimately, you can do any kind of stupid thing in Rust that you can do in any other language.
An example: a common problem in C code is the buffer overflow, which happens when some code thinks that an array has more elements in it than it does. Because C's arrays don't have a length as part of the type, it can be error prone to keep track of the length of the array. Rust's arrays, vectors, and slices all carry their length with them (arrays in the type, vectors and slices alongside the data), so it's much harder for this information to get out of sync. But Rust will let you be stupid if you want to:
fn bad_stuff(v: Vec<i32>) -> i32 {
    unsafe {
        // Skips the bounds check and reads past the end of the vector.
        *(v.get_unchecked(v.len() + 1))
    }
}

fn main() {
    let vec = vec![1, 2, 3];
    let val = bad_stuff(vec);
    println!("{}", val);
}
The `bad_stuff` function here uses `get_unchecked()`, which skips the array bounds check, to grab some memory past the end of the vector. Rust will happily compile this code; your `unsafe` block tells the compiler to trust that you're using `get_unchecked()` in a safe way, even though it can't verify it.
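For contrast, the safe version of the same lookup is bounds checked: the bad index gets caught at runtime and the program panics instead of reading whatever happens to sit past the allocation. (A small sketch, not from either original post.)

fn safer_stuff(v: Vec<i32>) -> i32 {
    // Safe indexing performs the bounds check; this panics with
    // "index out of bounds" rather than reading adjacent memory.
    v[v.len() + 1]
}

fn main() {
    let vec = vec![1, 2, 3];
    println!("{}", safer_stuff(vec));
}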
Now, there's a decent discussion that can be had on this topic: 'usual' Rust code will of course be significantly safer than 'usual' C code, since you have the compiler's assistance. Unfortunately, security is often about the edge cases. While it's true that not wrapping malloc would have prevented Heartbleed from happening, and most C projects don't wrap malloc... it did happen. Once Rust starts getting used in the wild, we'll see Rust with security problems, I'm sure of it. That doesn't mean that Rust has no value, just that we need to be realistic about the tradeoffs that any of our technologies make. Rust is not a panacea, even if it does cure some of your aches and pains.
Almost all languages have an escape-hatch to do dangerous stuff.
The key insight about Rust is the set of programming abstractions (linear types to manage resources) that enable one to get systems-like programming tasks done, without needing to fall back on very low-level coding techniques.
What about adding safer checked buffer types as part of the standard library? A fixed buffer for things of "up to this size" where it remembers how big the current thing is.
Seems like it would be a pretty easy problem to fix if you're not playing fast and loose.
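Something along those lines is easy to sketch on top of Vec today; CheckedBuf and its methods below are made-up names for illustration, not an actual or proposed std type.

/// A fixed-capacity buffer that remembers how many bytes are currently
/// valid, so reads can never spill into stale contents from a previous use.
struct CheckedBuf {
    storage: Vec<u8>, // capacity fixed up front
    len: usize,       // how many bytes are currently valid
}

impl CheckedBuf {
    fn with_capacity(cap: usize) -> CheckedBuf {
        CheckedBuf { storage: vec![0; cap], len: 0 }
    }

    /// Overwrite the buffer with new contents, recording the new length.
    fn fill(&mut self, data: &[u8]) {
        assert!(data.len() <= self.storage.len(), "payload too large");
        self.storage[..data.len()].copy_from_slice(data);
        self.len = data.len();
    }

    /// Only the bytes written by the most recent `fill` are visible.
    fn contents(&self) -> &[u8] {
        &self.storage[..self.len]
    }
}

fn main() {
    let mut buf = CheckedBuf::with_capacity(64 * 1024);
    buf.fill(b"ping");
    buf.fill(b"pi"); // a shorter reuse does not expose the earlier bytes
    assert_eq!(buf.contents(), &b"pi"[..]);
}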
Oh but it DOES mean that Rust has no value, at least regarding security. Once you give them the rope, they WILL use it. Let's see if time will prove me right.
Every language has a C FFI that users can use as rope to hang themselves with. `unsafe` code in Rust is just a reified FFI that actually manages to make Rust code safer because it means fewer things need to be written in C, and even unsafe Rust code is safer than C. Furthermore, the strict demarcation of `unsafe` blocks greatly alleviates the burden upon code auditors, allowing them to focus their efforts. Even if an extraordinarily high percentage of your code is contained within unsafe blocks, say 10%, that's ten times less code to audit than an equivalently-sized C codebase.
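As a tiny illustration of that demarcation (a generic sketch, not tied to any particular library): the unsafe block lives inside a safe function, so it is the only part an auditor has to stare at.

fn first_or_zero(v: &[i32]) -> i32 {
    if v.is_empty() {
        return 0;
    }
    // SAFETY: we just checked that the slice is non-empty, so index 0
    // is in bounds and `get_unchecked` cannot read out of bounds here.
    unsafe { *v.get_unchecked(0) }
}

fn main() {
    assert_eq!(first_or_zero(&[7, 8, 9]), 7);
    assert_eq!(first_or_zero(&[]), 0);
}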
What an incredibly useless and demeaning comment. Everyone who has used Rust has needed to be corrected at times because of the changing nature of the language over the past few years, including people on the core team. When Steve has needed to be corrected he takes it graciously, and I say this as someone who has corrected Steve before. As for "extrinsic ethos", not only does HN make it trivial to gloss over usernames but most people do not expect members of a language's core team to come forward with frank explanations of its limitations.
If you have problems with Steve, this is not the place to air them.
A key phrase in Ted's article is "Hopefully the simulacrum retains the essence of the problem."
What Tedbleed, as your article calls it, demonstrates is a bug similar to Heartbleed: reuse of a buffer in a memory-safe language is susceptible to leakage of secrets.
I might answer the question "are Rust’s memory safety features being oversold by uninformed zealots?" with "no, I don't think so", but I would suggest that thinking that memory safety ends all security issues is not a good plan.
The key is "Unless you venture into the (explicitly demarcated) unsafe portion of Rust, you will not see memory exposure vulnerabilities like Heartbleed": the language does not prevent this. Perhaps your post will stimulate Ted to go further with his example and write an erroneous complete TLS stack in Rust, showing how it too can do Heartbleed. Perhaps you agree that this can be done.
Among the scariest mass vulnerabilities relating to total information disclosure (the Padding Oracle Attack, the BEAST attack, the compression flaw CRIME), none have anything to do with the memory safety of languages. Rust would not have helped one whit there.
Memory safety is just one issue in writing secure code.
To my knowledge, no one in the Rust or greater memory-safe programming language community would claim:
Using <$LANG> prevents leakage of secrets!
When put like that, it sounds like a straw-man argument. In my own words, the narrower and stronger claim being made is that the specific and important avenue by which Heartbleed leaked secrets is not possible without unsafe code in Rust.
So if you're going to claim that Ted "isn't wrong", but on terms that no one was originally arguing... I don't even know what to say.
So if you're going to claim that Ted "isn't wrong", but on terms that no one was originally arguing...
Perhaps I am taking a more general implication of this than "Rust would not have exactly reproduced the Heartbleed problem". I infer the overall meaning of Ted's argument, and of the general discussion, to be that "memory safe languages do not prevent information leakage". To me that is what the argument is about.
Nothing 'prevents' information leakage other than writing perfect code 100% of the time.
If the argument is that you can write a program in Rust that tells someone something they shouldn't know… is that… what's even the argument here? Is that even a question?
But in that case, comparing against Heartbleed is pointless. Why not ask about SQL injection attacks? Or directory traversal attacks? Or OS privilege escalation attacks? Or listening on a port that isn't correctly firewalled? Rust won't prevent any of those either, because none of them have anything to do with the language being used.
Arguing that it's possible to write code that does the wrong thing is so pointless that I can't understand why this article was written other than as some kind of half-assed hatchet job.
By allowing an attacker to read arbitrary memory of the process that linked against the library
In rust, if I write code where I store my secrets in two variables which I pass into my ssl constructor, but retain single ownership of them (such that the ssl library may not retain references to them after the constructor exits), there is no way for those exact memory addresses to be read and returned via that library, excepting unsafe.
If you say "ah, but the constructor could take a copy and leak that", then that's a different class of vulnerability. It could, but then it would be leaking its memory, not mine.
However, that completely does not relate to your question. I'm not saying that they both aren't security vulns, they both are.
You asked how heartbleed leaked secrets (implicitly how it was different from the rust code), and I told you how it differs. Where the secrets are stored is different. One is user-supplied secrets only, the other is arbitrary secrets within the process, user-supplied or otherwise. Yes this is different.
Your response here is simply moving the goalposts.
To move the goalposts back to position, I'll answer my own question. The specific and important avenue by which Heartbleed leaked secrets was reusing secret-containing buffers and miscalculating the initialized length.
No matter how horribly you miscalculate the length of a buffer in rust, you will not access arbitrary process memory (without unsafe). Yes, you can overflow a bound in rust. No, you can't overflow a bound into arbitrary program memory.
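A tiny illustration of that point, with a made-up attacker-controlled length:

fn main() {
    let payload = vec![0u8; 16];  // what the peer actually sent
    let claimed_len = 64 * 1024;  // what the peer claims it sent

    // Safe slicing is bounds checked, so this panics ("range end index
    // 65536 out of range for slice of length 16") instead of echoing back
    // whatever else happens to live near the buffer.
    let echoed = &payload[..claimed_len];
    println!("{}", echoed.len());
}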
Heartbleed was not just re-using a buffer, it was also blindly copying program memory into that buffer because C allows such memory access.
You're simplifying the problem and wording it in a way where the difference is masked, but as I already explained clearly, it's different. Feel free to tell me how the rust code could access the linking program's process memory without using unsafe and without explicitly taking ownership of data from it, and I'll eat my words.
Does rust zero the allocated memory? IIRC, that was the crux of Heartbleed (I wrote my own exploit based off the description). It created a too-large, non-zeroed buffer based on the size supplied and not the actual size sent. So you could send 16 bytes and specify 16k (or whatever it was) and you'd get back a bunch of extra crap from the non-zeroed memory. Previously used and now freed memory, not just any old memory.
In ANY case, if you build it (unsafe), they will come. People WILL be writing code using unsafe.
Yes, and that's not in any way controversial. If it wasn't useful to be able to drop down and do anything that C can do, it wouldn't be in the language. Our job as a community is to foster a culture where `unsafe` blocks are viewed with scrutiny and distrust unless proven otherwise. Our job as library implementors is to determine which use cases cause people to reach for `unsafe` and encapsulate those patterns in well-vetted safe interfaces. Servo already has a bot that inspects each PR in the queue and raises an alarm if any code within an `unsafe` block has been modified, flagging it for very close review. This is not a feature that we take lightly, it is a necessary tool to be used with care.
I would claim that using Scala (and presumably Haskell) gives you the tools to make it impossible to leak secrets, if you want to. You could use a path-dependent container type to ensure that content was associated with a particular connection and never passed off to a different connection; if you tried to do something like reusing a buffer, it would be a compile failure.
There would be costs to this - both in runtime performance (presumably you wanted to reuse buffers for a reason) and in program complexity. And of course it's possible to make an error in your type definitions (though you have to make two errors to be exploitable - one in the types and one in the values - and the types are shorter and much easier to audit than the full program code). But it absolutely is possible, and I think that pretty soon we'll reach the point where it becomes worthwhile for security-critical code.
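The comment above is about Scala's path-dependent types, but a rough Rust approximation of the same idea uses a phantom type parameter to brand buffers with their connection; every name here is invented for illustration.

use std::marker::PhantomData;

// Zero-sized markers naming "which connection" a buffer belongs to.
struct ConnA;
struct ConnB;

struct Connection<Tag>(PhantomData<Tag>);

struct Buffer<Tag> {
    bytes: Vec<u8>,
    _tag: PhantomData<Tag>,
}

impl<Tag> Connection<Tag> {
    fn new() -> Connection<Tag> {
        Connection(PhantomData)
    }
    // Buffers can only be minted by, and sent on, a matching connection.
    fn buffer(&self, bytes: Vec<u8>) -> Buffer<Tag> {
        Buffer { bytes, _tag: PhantomData }
    }
    fn send(&self, buf: &Buffer<Tag>) {
        let _ = &buf.bytes; // pretend this writes to the wire
    }
}

fn main() {
    let a: Connection<ConnA> = Connection::new();
    let b: Connection<ConnB> = Connection::new();

    let reply = a.buffer(vec![1, 2, 3]);
    a.send(&reply);    // fine: same connection
    // b.send(&reply); // compile error: expected `Buffer<ConnB>`, found `Buffer<ConnA>`
    let _ = b;
}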
Up to a point. Fork+exec is a very blunt instrument; sshd can piggyback on the unix security model because every ssh connection belongs to a user with a local user account and (usually) has the authority to do everything that that local user account may do. That wouldn't be true for a webserver.
Of course ultimately you can Greenspun any language to do what any other language can. But in practice a language where the vocabulary already exists - where you have higher-kinded types and existing generic libraries for handling when a value has a particular "context" - makes it much more practical.
As far as I understand, unsafePerformIO is not unsafe in relation to memory safety issues. It is just unsafe in that it circumvents the IO monad, which means order of execution is not guaranteed.
My original comment said that even with a custom allocator, Rust would refuse to compile code that could lead to Heartbleed. This was technically not accurate. (I thought HB was just out of bounds+uninitialized behavior.)
I think the real key is that in C, this leak can happen with just malloc+forgetting a memset, no explicit buffer reuse required. Whereas in Rust, you'd need to explicitly reuse the buffer.
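A small illustration of that difference, using nothing beyond the standard library: safe Rust has no way to hand you a sized-but-uninitialized buffer, so the C failure mode of malloc without a memset has no direct safe equivalent.

fn main() {
    let n = 16 * 1024;
    let buf = vec![0u8; n]; // always initialized; never stale heap contents
    // Vec::with_capacity(n) reserves space but has length 0, so reading
    // from it is an out-of-bounds panic, not a peek at old memory.
    assert!(buf.iter().all(|&b| b == 0));
}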
I know next to nothing about rust, but can't you do pretty much whatever you want when you implement a custom allocator? Including allocate a giant buffer ahead of time and reuse stuff with no safety whatsoever?
I'm not sure custom allocators exist. But assuming they did, sure you can. But in that case you're explicitly opting into unsafe code to use uninitialized memory.
And, even if you did have such an allocator, you still can't read past the end of the allocation. So if you ask for a 16 byte buffer and attempt to read 17, it'll fail.
It's extremely important to remember this in context: Heartbleed occurred on OpenBSD because OpenSSL had its own wrapper over malloc(). Instead of actually free()-ing those areas when calling openssl_free(), OpenSSL would leave them untouched and reuse them. If that had not been the case, and free() had in fact been called, reading past the actual payload boundaries would have yielded no useful results, because the malloc() in OpenBSD's libc would have cleared it (as opposed to openssl_malloc(), which was reusing previously un-free()-d zones).
In other words: idiomatic C and a sane malloc() implementation could have actually prevented Heartbleed.
It makes sense to assume that un-idiomatic Rust and reusing memory without clearing it would trigger the same bug.
By "sane malloc" do you mean one that gives you "cleared"/zeroed memory? I think it's a rarity and I think programs kinda assume that malloc takes, I dunno, 300-1000 cycles at worst when allocating many megabytes - whereas zeroing such buffers takes much more.
Or did I misunderstand your point about "malloc sanity"?
The section responsible for Heartbleed was never allocating more than 64 kilobytes, which can probably be cleared in 1000 cycles on most modern architectures.
As someone else pointed out, OpenBSD's malloc() implementation could have supplied a cleared memory area with no discernible performance impact (in fact, I think LibreSSL already does).
Technically yes (although, by default, no), but it's more efficient than that would imply. By default, I think only small chunks are overwritten, so OpenSSL's meagre 64 KB of Heartbleed payload would have been filled with useless junk, whereas multi-megabyte mallocs() in e.g. a RDBMS would have been unaffected.
There are some other protection mechanisms included, too; there's a more in-depth presentation here:
If you follow the same faulty line of reasoning about performance and portability that led the OpenSSL team to introduce their incorrect wrapper over native malloc(), you will find that you'll reproduce Heartbleed using entirely safe code.
I don't get your point, what are you trying to say?
> What Tedbleed, as your article calls it, demonstrates is a bug similar to Heartbleed
Similar but very different in the severity.
> but I would suggest that thinking that memory safety ends all security issues is not a good plan.
Nobody said that.
> The key is "Unless you venture into the (explicitly demarcated) unsafe portion of Rust, you will not see memory exposure vulnerabilities like Heartbleed": the language does not prevent this.
Sure, if you want to break it, you probably can, but you'll have all kinds of warnings beforehand, and the few lines where you actually do that can be proofread the most.
There's nothing preventing you from writing the ordinary logic bugs that generate vulnerabilities either, but preventing those is simply not possible in any language.
> Perhaps your post will stimulate Ted to go further with his example and write an erroneous complete TLS stack in Rust, showing how it too can do Heartbleed. Perhaps you agree that this can be done.
Well, Rust may have a memory leak at some point in the future; no software is perfect. But that is one language, reviewed constantly, that is mostly memory safe; the other is a language that is not memory safe, where thousands of developers are constantly writing code, inserting lots of Heartbleed-style memory bugs.
> Among the scariest mass vulnerabilities relating to total information disclosure (the Padding Oracle Attack, the BEAST attack, the compression flaw CRIME), none have anything to do with the memory safety of languages. Rust would not have helped one whit there.
That's not the point of the article. Ted tries to say that Heartbleed would be possible; the author contradicts this by showing how the vulnerability would be of a different severity.
The fixation on private keys puzzles me. While they were extractable in some cases, it was difficult. Stealing passwords and cookies was trivial however. And how does one use a stolen key? You have to intercept the traffic first. I can abuse a stolen password from anywhere in the world by using it to log in. Then I can read all your (email, etc.) and don't even need the private key.
To differentiate these as "very different in severity" is, I think, quite misleading.
Really? The private key in asymmetric cryptography should never be (is assumed to never be) known to anyone other than the key-holder. Session keys are understood to be shared, and so are by definition not secure (equivalently, I can log my part of the "conversation" with a server, even if over TLS with PFS). So session keys, session cookies, transmitted data -- these are always known by more than one party; a secret key should never be known by more than one party.
It is true that most TLS server deployments don't enforce this -- they don't do the asymmetric operations in a trusted computing module, smart card etc -- but they should. They shouldn't store the key in ram, but they currently do.
If you get the secret key, you can do many more things than merely intercept traffic. You can impersonate and fake traffic (fake evidence).
In addition to all this, being able to read arbitrary memory controlled by the same process is also worse than being able to read stuff that is already being transmitted. This isn't strictly a Rust/other language issue (I imagine, but do not know, that one eg could sanely (if foolishly) share buffers across thread pools, and so potentially leak information across clients -- this would probably be a design error wrt. trust boundaries etc ... but no-one is arguing that any language can prevent design errors).
As mentioned in other comments, privilege separation (a la openssh etc) is a great way to help leverage the os/kernel in order to enforce assumptions about the security primitives used. But that's a little beside the point.
In a general sense, yes. It would suck to lose your private PGP key and have someone send fake messages. Probably more so than losing any one (or few) encrypted messages.
HTTPS is a little different. You need to verify you're talking to the real paypal.com, but that's so you know you're not sending your password to a stranger. paypal doesn't send signed emails, for instance. (maybe they should, but not with the website key, for sure). Stealing passwords and cookies is pretty much the end game for https private key theft.
Is anyone willing to say that a bug which only leaks passwords and cookies is a minor issue?
wrt heartbleed, there seems to be a gap between the "could happen" and "did happen" consequences. Besides the one test server pummeled with millions of requests, were any private keys actually compromised? On the other hand, we know yahoo passwords were compromised because people were doing it within hours of the heartbleed announcement. I'm more concerned with the latter.
Yes, if I'm writing a sort function for vulnerabilities, "read arbitrary memory" is worse than "read some memory", but there should be some accounting for degree of difficulty as well.
I don't see any value in arguing over whether a hypothetical program written in Rust would have had one specific bug.
What is more important is whether using a language like Rust will prevent some severe security holes, and the answer is quite obviously that it will. Sure, it's possible to create bugs in any language, but it's much harder to prevent bugs in C than in most languages.
Ted's original point about Heartbleed not being a violation of memory safety is bogus, because there is a configuration of OpenSSL (achievable with flags passed to ./config) where extended-sized Payload values cause the OpenSSL server to segfault with an out of bounds read. It takes a special kind of bravery to look at that kind of crash and say that it is not the result of a violation of memory safety.
Yes, but you specifically said that Heartbleed wasn't a memory safety violation, when in fact OpenSSL-vulnerable-to-Heartbleed can segfault on over-read. So what is it?
But Heartbleed still existed even without out-of-bounds reads, right? That's the critical distinction I got wrong and why Ted made his rather fine point. In Ted's post, IIRC, he's clear that such messed-up code is unlikely to be written in Rust. But if people like me boast about how Rust would have flat-out refused to compile such code, then we don't know the details and probably aren't in a position to be spouting off on security. A one- or two-word edit of my original comment would have avoided this whole discussion while retaining the essence: that Rust would, in practice, almost certainly eliminate these bugs.
As soon as we have a full port of OpenSSL in Rust, this will be interesting. Talk is cheap; who's going to step up and do the hard work?
I do believe that the right developers, using the right practices, could make a more-secure OpenSSL. But it would be expensive in terms of developer time. Maybe a group with the right experience could crowdfund it?
I have a question: How do Rust and Go compare in terms of security? Or are they both very similar in terms of mitigating certain classic C vulnerability classes.
Background: I have been developing in Go for ~5 years, and Rust for probably less than 2 months.
- Both Go and Rust use bounds checking on arrays/slices (which means out-of-bounds accesses are not allowed).
- Go is garbage collected, so memory leaks cannot occur.
- Rust encourages an ownership/lifetime model, so memory leaks cannot occur.
- Go can still have data races between multiple concurrent threads. To catch them, you must use the race detector _at run time_. Note however that Go encourages communicating information over channels, and if idiomatic code is being written the chance of data races is next to none.
- Rust cannot have data races in most cases, as its ownership/lifetime model allows for catching them _at compile-time_.
- Both of these assumptions are based on no external C or unsafe code being in the picture.
TL;DR: Very similar.
EDIT: I used the wrong terminology -- sorry. Instead of _memory leak_ I should have said _dangling pointers_ (memory leaks can occur in _any_ language, Go/Rust/Java/JS/etc).
> Go is garbage collected, so memory leaks cannot occur.
Well it depends on how you define memory leaks. Memory leaks can of course occur logically. Like say you create objects in a cache and never remove them. It is still a memory leak.
But at the level that you define the memory leak, it seems Rust can be just as good as Go because memory ownership is tracked by the compiler. One can say it is even better because of compile time checks and no need to have a GC @ runtime present.
If you're worried about memory leaks, an obvious downside of Rust is that it encourages reference counting over GC (more accurately: has no GC, but that might change in the future), and reference counting can leak via cycles.
However, in practice most values can be neither Rc nor Gc but single-owner, like unique_ptr in C++, which is easier to reason about than either.
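For the curious, the classic Rc cycle leak looks something like this (a minimal sketch):

use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(None) });

    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a)); // cycle: a -> b -> a

    println!("strong count for a: {}", Rc::strong_count(&a)); // prints 2
    // When a and b go out of scope, each count drops to 1, never 0, so
    // neither Node is dropped. (Weak references are the usual fix.)
}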
Yes, it's important to note that Rust isn't 'refcounting but not GC,' but single ownership + borrowing over all else, and then using refcounting if you find yourself in a situation where you need multiple ownership.
GC doesn't prevent all memory leaks. An easy example is a stack class backed by an array. If you need more members, you allocate a bigger array and copy the members over. (The GC takes care of the smaller old array.)
So far so good. But you forget to allocate a smaller array (and copy the members) when the stack shrinks back down. So 1000000 * push(something) followed by 1000000 * pop() keeps plenty of allocated memory not in use.
(It's not as brutal as calling malloc() and forgetting the result, but still a pretty awful situation in a large project.)
According to that definition of a memory leak, I guess you can call anything that doesn't eagerly free memory a memory leak. And the term gets quite weak and vague in that case.
The issue with this code is not that it isn't eager enough. It doesn't free the memory ever (for values of "ever" up until the whole stack is freed, which can be the whole runtime of the program).
Zero experience with Rust beyond watching tutorials and talks, so please please correct me if I'm wrong.
Rust has one killer feature that Go does not have: the typesystem has semantics for memory sharing. You can declare how a pointer can be used: is it accessed concurrently, is it local to me, &c. This helps the typesystem detect race conditions where two threads access a block of memory in an unsafe way.
Go does not have this. In Go, you "share memory by communicating", instead of "communicating by sharing memory." But if you do want to share memory, you are back to using locks and your own conventions and mechanics to prevent concurrent access. The language doesn't help you.
So it's a different programming paradigm. And if you use Go like you would use Rust (memory sharing), then Go is less safe.
(...right?)
EDIT: This sounds one-sided, but I'm only talking about sharing memory, here. If you don't do that, and stick to communicating, then Go can blow you away. I don't know how Rust deals with coroutines, but Go has super light threads (goroutines) that make concurrent behavior pure joy. All your threads can so easily talk to each other, and wait for each other, and it's all so light and native and good!
E.g. you have a handler for a POST, but you only want to close the connection when another thread has finished doing something. And you don't want to tie up an entire OS thread, because it could take a while and that's just too expensive. In go: super easy, just do a blocking channel read. In another language? Wow, where would I start...
Using a plain-old mutex in Go isn't encouraged. As you said, the memory model is _share by communicating_.
Sometimes, yes, using a mutex makes more sense -- and that does introduce the possibility of a data race.
If you follow effective Go code -- I imagine the code is on average about just as safe as Rust code is. It is partially comparable to using `unsafe` blocks in Rust -- don't do it, unless you know what you're doing. Stick with channels.
While channels are awesome, and are a good pattern, what's important about Rust is that it lets you do channels, but if you need to share mutable memory for some reason, Rust still has your back in that use-case. We can guarantee no data races, even with shared mutable memory, at compile time. It's pretty huge.
That said, channels in Rust are also nice to use, and you shouldn't not choose channels. Rust just enables other use-cases too.
Example from personal experience: I'm working on an audio app right now, and I have high bandwidth, low latency data. I'm CPU bound, allocations and the GC are my enemy. It's just not feasible to constantly copy those buffers around. :(
Don't take this the wrong way, I'm not complaining about Go. Just know that sometimes, there are real reasons to share memory. And when you do: Rust has got your back, but in Go you're on your own.
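For concreteness, a minimal sketch of the shared-mutable case: the buffer is reachable from several threads, but only through Arc plus Mutex; without the Mutex there is no way to get mutable access through a shared Arc at all, so the unsynchronized version simply does not compile.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let buffer = Arc::new(Mutex::new(vec![0.0f32; 1024]));

    let mut handles = Vec::new();
    for i in 0..4 {
        let buffer = Arc::clone(&buffer);
        handles.push(thread::spawn(move || {
            // Exclusive access for as long as the lock guard is alive.
            let mut samples = buffer.lock().unwrap();
            samples[i] = i as f32;
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
    println!("{:?}", &buffer.lock().unwrap()[..4]);
}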
Not sure what you mean; channels don't force you to copy buffers, you can simply pass the pointer/slice through the channel, but the point is that the paradigm encourages you to pass the ownership of that buffer through the channel.
"the paradigm encourages you to pass the ownership of that buffer through the channel."
Rust transfers ownership when you send a single-owner reference over a channel. The sender can no longer access the referenced object; the compiler prevents that. That's the safe way to do it. This is a significant advance in multi-thread programming. The rules that make this work are strikingly simple. Somebody should have thought of this 20 or 30 years ago, but as far as I know, nobody did. Rust pushes the single-owner model hard, and it seems to work. The C++ crowd beat their head against the wall on this for decades, through three generations of auto_ptr, then unique_ptr. Really getting it right requires something like the Rust borrow checker, which C++ does not have.
Go does not do anything to prevent sharing data between threads. It's very easy in Go to unwittingly create shared data, because slices are references. After you've sent a reference, the sender still has access to the shared data, as does the receiver, so there's a potential race condition. Go does nothing to prevent this. There's a lot of dancing around this issue in "(In)effective Go", and long discussions of "is this a race condition" in Go forums. Go is a useful language, but it's in spite of Go's concurrency system, not because of it.
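A minimal sketch of that ownership transfer with the standard library's channels:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    let buffer = vec![1u8, 2, 3];
    tx.send(buffer).unwrap();
    // println!("{:?}", buffer); // error[E0382]: use of moved value: `buffer`

    let worker = thread::spawn(move || {
        let buffer = rx.recv().unwrap(); // the receiver is now the sole owner
        buffer.len()
    });
    println!("{}", worker.join().unwrap());
}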
Why can't you transfer buffers as []byte or []float32 etc to another goroutine over a channel? Hint: that doesn't copy the buffer, it just transfers "ownership" of it.
What if the buffer is aliased? In rust, when you transfer ownership, you can statically guarantee that there's no aliasing. But AFAIK that's not possible in go.
"ownership" was in quotes because what he really means is "logical, but not language guaranteed, ownership". Both sides of the channel still can touch the underlying data and cause races.
C++ really isn't memory safe. It's trivial to think of an example:
#include <iostream>

auto f() {
    int x = 1;
    return [&] () { return x; }; // captures the local x by reference
}

int main(int, char **) {
    // f's x is gone by the time the lambda runs: a dangling reference and
    // undefined behavior, and it compiles without a peep by default.
    std::cout << f()() << std::endl;
}
Of course "using <x> correctly" prevents bugs as long as - as C++ does - you define "correctly" as generously as possible. I agree Go isn't totally memory safe, but the level of memory safety is insanely less than in C++.
If you want something particularly involving unique_ptr, consider that best practices are to give non-owning sharing as raw pointers: you could say they're safe as long as you use them correctly but again that misses the point. I can also give an example (stolen from a talk) directly involving shared_ptr even when used with traditional "best practices" although I'm sure the best practices will update themselves to include this.
In some sense, Rust has been described as merely formalizing and automatically enforcing the best practices that people were already using in C++; it's the formality and automaticity that turns it from a dangerous language to a safe one.
My first thought would be to attach the connection object to the data the other thread is working on. When it is done it processing your message it drops the whole thing, closing the connection. If you want to hold it open for multiple workers you could make it an Arc<Connection> so it doesn't close until they all drop it.
I can't speak to security as a whole, since security encompasses more than memory safety. But regarding memory safety, they're similar. Rust has an edge in that it can use hardware parallelism while remaining memory safe, while Go is not memory safe if GOMAXPROCS > 1. However, the memory-unsafe part of Go is a corner (racing on a map or slice) that hasn't caused any real-world security problems I'm aware of.
I generally agree with what you're saying here. The difference appears to be that Rust forces you on a compiler level to be safe of any potential race-condition.
Go doesn't enforce this on a compiler level -- but instead it encourages the use of sharing by communicating (over channels, where data races cannot occur), instead of sharing memory e.g. via a mutex (where you can run into a data race).
Important distinction: Rust forces you to be free of data races, which are a subset of race conditions. A very important subset, mind you, but they're different. It's not really possible to prevent general race conditions, like any form of logic error.
Rust is not "runtime free". Rust needs a "minimal runtime", as stated on the project home page. You need at least a memory allocator. Even C programs are usually linked to a runtime library (like glibc).
Rust is usable with an extremely minimal runtime that approximates having no runtime at all (with #[no_std]). There are a few intrinsics you have to define, but none of them involve memory allocation. It also has a core crate that does not expose anything that requires allocation.
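For what it's worth, the freestanding end of that spectrum looks roughly like this in current Rust; exact attributes and build flags vary by target and toolchain (and this assumes panic = "abort"), so treat it as a sketch rather than a recipe.

#![no_std]
#![no_main]

use core::panic::PanicInfo;

// With no std, we must say what happens on panic ourselves.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// No OS-provided startup code either, so we supply the entry point.
#[no_mangle]
pub extern "C" fn _start() -> ! {
    // No allocator, no threads, no I/O: just `core`.
    loop {}
}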
I've played around with both languages and to my limited knowledge Go just seems like a more modern C language with a GC and easy multithreading.
Rust on the other hand is a completely different animal from anything I've used before. It has one of the most annoying compilers, but that is for a reason: if your code compiles then it's memory safe (with a few exceptions, like 'unsafe' blocks).
Golang has possibly the best TLS library implemented in any language. Maybe you instead mean that nobody would drop a Golang TLS library into a C program, but that they might do that with a Rust TLS library? That's truey, but not particularly interesting. Most languages can't be dropped easily into C programs.
> That's truey, but not particularly interesting. Most languages can't be dropped easily into C programs.
I don't see how the fact that most languages can't be dropped into C programs makes the fact that Rust can not interesting. It seems to me to imply the opposite.
Fair enough! I just get itchy at this refrain about Rust rewriting OpenSSL because again: Golang crypto/tls is an achievement. It is a seriously nice piece of code. It didn't need a uniquely great language (golang isn't that), just a uniquely talented TLS implementor.
Making a library that can be linked into a C program is interesting because it can also be linked into a program written in pretty much any language. OpenSSL is awful, but it has bindings to pretty much any language that you want, so it's very commonly used. Go's TLS will never have that, but one implemented in Rust could.
Rust does not yet have an SSL library implemented as far as I am aware of. Go, however, does indeed have its own pure-Go SSL library[1] and does not use OpenSSL at all.
I should have been more clear. By "re-implementing OpenSSL in Rust" I meant making an implementation of SSL that would be used the same way as OpenSSL is, dynamically linked from C or various other programming languages. If you want a static library to compile into your Go application then of course doing it in Go is the most straightforward way to do it.
For the security topic, most of the power of Rust comes from strong typing, safety checks (bounds), ownership checks... Well, a lot of checks at compile time.
As far as I know of Rust by glancing at the doc frequently, C++11/14 also contains these "safe tools": ownership can be dealt with via smart pointers (unique, shared), value semantics, move semantics (rvalues) and the rest. Bounds checks with the correct containers (std::vector, std::array...). Mutability with "const". Atomicity with std::atomic... Type safety by using explicit conversions, templates instead of macros, variadic args...
Heck, all the drama around C++ seems to come from misuse or from legacy parts from another age (going back to C). To keep people from shooting themselves in the feet, why couldn't we just introduce a strict compilation mode a la JavaScript into, let's say, Clang:
# pragma strict
# pragma unsafe
# pragma safe
Nowadays, good C++11 compilers can even deal with things like constexpr. This seems a reachable goal.
What are the true drawbacks of fully modern C++ code compared to Rust? Is there any true feature that can't be done in C++ due to a deep design problem? Don't get me wrong, I do not seek to diminish the amazing work of Mozilla. I am also not talking about the other cool features from functional languages (pattern matching and the like) that Rust has and C++ doesn't.
You're mistaken if you think that anything in C++11/14 makes it memory-safe like Rust guarantees.
C++11/14 lets you assign a std::unique_ptr to another variable then re-use the original variable, causing a null dereference. You can also have a reference to the same std::unique_ptr from two different threads causing a race-condition on the underlying pointer.
There is no bounds checking on std::vector by default. You can bounds check a std::vector (using "at" or various configuration/compiler settings which change the semantics of the standard library) but by default, it is not checked.
Fundamentally, the issue is that C++ lets you have null pointers and references (yes, references can be null). It lets you use before initializing. It lets you run out of bounds. It lets you access the same variable from multiple threads without any safety. Yes, idiomatically, none of these should be an issue but it's not possible to be a perfect programmer.
Rust forces idiomatic memory management and eliminates all these issues. If you make a memory safety mistake, the compiler will force you to fix it.
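For comparison, the moved-from case is a compile-time error in Rust rather than a runtime null dereference (a small sketch):

fn main() {
    let original = Box::new(42);
    let moved = original;
    // println!("{}", original); // error[E0382]: borrow of moved value: `original`
    println!("{}", moved);

    // The threading half of the comparison is similar: handing a &mut to a
    // second thread while the first thread can still use it is rejected by
    // the compiler, so the shared-unique_ptr race has no safe equivalent.
}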
Your points are clearly valid. But wouldn't some of them be corrected by a strict mode?
- For instance, the reuse of the original variable could be deduced by the compiler.
- The bounds checking for std::vector can effectively be enabled. One could imagine a std::strict_vector that does so.
What I am wondering is: would applying the same idiomatic memory management to C++ require some huge tweaks to the language, some bad tricks, or new keywords, or can it be done without changing its design, simply by enforcing some rules?
I, for instance, have no clue how to deal with your "one unique_ptr, two threads" problem. Could it be done in an elegant way in C++?
Linear types (as in Rust) can prevent (at compile time) some of the more trivial use-after-free issues, e.g. for unique_ptr, but I think the main reasons you won't see undefined behavior eliminated from C++ wholesale are that it a) often requires extensive support at runtime (see ASAN, UBSAN, etc.) and b) presents a huge barrier to optimization in certain cases and c) (thus) is going to be waaay too slow for production use. (I.e. if C++ were to go in this direction someone would basically either "fork" C++ or a new (similar) language would supplant it.)
Unfortunately, no "sufficiently smart compiler" currently exists that can optimize high-level code well enough to beat what a good micro-optimizing C++ compiler (which can assume that no undefined behavior occurs at runtime) can achieve.
Undefined behavior does permit optimizations, yes, but I think you're overselling its benefit. Rust doesn't have undefined behavior and there are many instances where its strict semantics mean that it is far more optimizable and runtime-efficient than C++ (though some of those optimizations are yet to be implemented).
Perhaps these days you're right -- assuming you only want to support mainstream architectures. These days you can mostly rely on all mainstream architectures doing something sensible with e.g. signed integer overflow[1] or excessive shifting, but that wasn't necessarily the case when most of C++ was standardized. As an example of a similar nature -- as I'm sure you know -- Rust has chosen not to abort/panic on signed overflow, even though almost all instances of it are most probably plain logic errors and could lead to security problems[2]. As far as I could understand, this was for performance reasons. Granted, this is not quite as disastrous for general program correctness as UB, but it can lead to security bugs.
Point being: Underspecification can give you a lot of leeway in how you do something -- and that can be hugely valuable in practice.
Just as an aside: Personally I tend to prefer safety over performance, but I was persuaded that UB is valuable by comments that Chandler Carruth of Clang (and now ISO C++ committee) fame made about UB actually being essential to optimization in C++. Sorry, can't remember where, exactly, those comments were made.
[1] Everybody's probably using two's-complement (for good reasons).
[2] Not nearly as easily as plain buffer overflows, but there have been a fair few of these that have been exploitable.
Even mainstream architectures don't handle excessive shifting consistently: when shifting an n-bit integer, I believe some reduce the shift count modulo n, some modulo 2n, and some don't mask at all (i.e. 1 << 100000 will be zero). Of course, being UB (rather than just an "unspecified result" or some such) probably isn't the best way to handle the inconsistency.
In any case, I believe Rust retains many of the optimisations made possible via UB in C++ by enforcing things at compile time. In fact, Rust actually has quite a lot of UB... that is, there are many restrictions/requirements Rust 'assumes' are true. For example, the reference types & and &mut have significantly more restrictions around aliasing and mutability than pointers or references in C++. The key difference between Rust and C++ is that it is only possible to trigger the UB with `unsafe`, as the compiler usually enforces rules that guarantee it can't occur. People saying "Rust has no UB" usually implicitly mean "Rust cannot trigger any UB outside `unsafe`".
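As a rough sketch of that boundary (nothing beyond the stock compiler assumed): safe code cannot hold two live &mut to the same data, but an `unsafe` block can manufacture them through a raw pointer, and doing so is UB precisely because it violates the aliasing rules the optimizer relies on.

    fn main() {
        let mut x = 0i32;

        // Safe code: the borrow checker rejects aliased mutable borrows.
        // let a = &mut x;
        // let b = &mut x;  // error[E0499]: cannot borrow `x` as mutable
        //                  // more than once at a time

        // Unsafe code: the compiler trusts us. Two live &mut to the same
        // data breaks Rust's aliasing assumptions, so this is UB even
        // though it happens to compile and appear to work.
        unsafe {
            let p: *mut i32 = &mut x;
            let a = &mut *p;
            let b = &mut *p; // UB: `a` and `b` alias mutably
            *a += 1;
            *b += 1;
        }
        println!("{}", x);
    }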
Rust will now actually panic on integer overflow when not in release mode, as opposed to how things used to be, where it would just accept the overflow silently with wrapping semantics. Here is the discussion from when this change was announced:
http://internals.rust-lang.org/t/a-tale-of-twos-complement/1...
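A quick sketch of the behaviour described above, assuming a default (debug) build; wrapping arithmetic is still available, but you now have to ask for it by name.

    fn main() {
        // a value the compiler can't constant-fold away
        let x: u8 = std::env::args().count() as u8;
        let big = x.wrapping_add(254); // wrapping is explicit and opt-in
        println!("{}", big);           // 255 when run with no arguments
        println!("{}", big + 1);       // debug build: panics with
                                       // "attempt to add with overflow";
                                       // a release build wraps to 0 unless
                                       // overflow-checks are enabled
    }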
A "safe mode" in C++ is completely impossible in practice, because C++ does not have a module system but an automated text copy-and-paste mechanism (the preprocessor). Hence your "strict" mode would refuse to compile any unsafe constructs in your standard C++ library headers, boost headers, Qt headers, libc headers, or any other headers of popular and mature libraries that made you choose the C++ language in the first place. If you can't re-use any code anyway, why not pick a sane language?
Remember: Heartbleed was a consequence of the OpenSSL implementors avoiding using glibc for allocations. A "fair" comparison is with unsafe rust code, which of course can have the problem.
Was it? I thought that only exacerbated the issue. malloc is free to return previously used memory, is it not? Various systems might provide some last-chance efforts against this kinda thing, but the bug would still exist and be exploitable in some configurations, right?
Free by the standard, yes. In the average stdlib, not any more. In particular, Linux web servers would be safe, which would have rendered Heartbleed more of an "edge-case platforms" kind of bug.
If you keep unsafe code contained enough that it is only used in cases where there is no other choice, it's still practical.
A simple pointer-arithmetic loop over a string in C could lead to very nasty bugs if said string isn't null-terminated (for whatever reason); in Rust you wouldn't think of using unsafe code for that.
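For comparison, the idiomatic safe version of that loop in Rust never touches a raw pointer at all (count_spaces is just a made-up example): the slice carries its own length, so there is no reliance on a trailing NUL and no way to walk past the end.

    fn count_spaces(s: &str) -> usize {
        // iterate over the bytes of the slice; the bound comes from the
        // slice's stored length, not from a sentinel byte
        s.bytes().filter(|&b| b == b' ').count()
    }

    fn main() {
        println!("{}", count_spaces("no NUL terminator needed here"));
    }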
How much unsafe code does Servo use? It seems most of the unsafe blocks I see have a "TODO" with some explanation of why they're unsafe, and the express intent of removing the unsafe block eventually.
You have to distinguish between the failures that are the result of unhandled logical errors ("panics", in Rust terminology), which are memory safe, and segmentation faults. Which are you seeing? If it's the latter, then I'm sure the Servo team would love a bug report at https://github.com/servo/servo/issues .
As for the usage of C++ libraries, this will diminish over time. The history of Servo thus far has included a gradual process of rewriting C++-based components in Rust, and sometime this year Mozilla intends to begin rewriting small components of Firefox in Rust. Reducing the attack surface is always valuable.
Failures don't mean unsafe. If you could exploit those failures, that would mean unsafe. Try it on Servo! Me & other Servo contributors would welcome it.
Don't be so aggressive. If I criticize code, it doesn't mean I criticize people.
I was trying to say that language injections (such as C++ libs) make Rust apps more fragile. If my assumption is wrong and all these failures are because of Rust code, I apologize.
I don't know with regard to the failures you saw. Of the issues I see & fix, most are in Rust code.
I didn't mean to sound aggressive; we really would welcome reports of safety issues. It's hard to know if Servo /really/ is safer without the hundreds of people trying to exploit it that Chrome & Firefox have.
Servo should be safer & I believe it's safer but I don't /know/ it's safer.
There is a whole class of memory leaks that is possible in C but not in Java. You can leak memory by forgetting to deallocate it. This class is not possible in typical Java code, because the garbage collector collects it.
Then you have memory leaks that happen because the application has data structures that grow unbounded. The latter is not really preventable by automatic means, because the memory is still referenced from elsewhere in the application, and a garbage collector can't know that this memory won't be accessed again.
It does help that the most common source of memory leaks is eliminated however. That many Java applications are memory hogs is more a result of typical Java programming style than anything else.
> It does help that the most common source of memory leaks is eliminated however. That many Java applications are memory hogs is more a result of typical Java programming style than anything else.
This may be part of it, but typically the JVM will happily claim large chunks of memory from the OS without necessarily allocating anything internally.
I mean, memory leaks are possible in Rust assuming the person explicitly uses unsafe code. However, most code does not use these unsafe blocks...
and it should not be possible otherwise. Rust has a sound type system that uses affine types. While the implementation is not "proven correct", in practice, if the Rust compiler is working properly, leaks will not happen.
What's keeping you from having a cache and forgetting to expunge it? It's fair to say that you have to work a lot harder in Rust to get a memory leak, while it requires no effort in C/C++, though.
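To make that concrete, a minimal sketch (the names are invented) of exactly that failure mode, written entirely in safe Rust with no `unsafe` block anywhere:

    use std::collections::HashMap;

    struct Cache {
        entries: HashMap<u64, Vec<u8>>, // grows without bound: nothing evicts
    }

    fn main() {
        let mut cache = Cache { entries: HashMap::new() };
        for key in 0..1_000u64 {
            // every insert stays reachable, so it is never freed
            cache.entries.insert(key, vec![0u8; 1024]);
        }
        println!("cached {} entries", cache.entries.len());
    }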
The code that led to the Heartbleed vulnerability was checked in in December 2011. The first pre-alpha release of Rust was in January 2012 (source: Wikipedia for both). Would technology from the future have prevented it, is the actual question, and the answer should be obvious...
Rust isn't technology from the future, for the most part. It's engineering from the future -- everything is in one language with a native ABI -- but with very few exceptions, each technology in Rust has been around for a while.
Memory-safe arrays, which are what's at issue here, have precedent in tons of languages. Even GW-BASIC got this right.
It should be an embarrassment to the entire systems programming community that it's 2015 and we're only starting to think of replacing C.
It's been an explicit goal of Rust to not use cutting edge PLT. For an industry language, Rust's stuff is pretty new, but that's because industry languages lag so far behind PLT as a field.
This rant is disingenuous both to the author's post and the ongoing conversation about the benefits and limitations of memory safe languages. To put it more generously, one might write: Could this specific technology from the future have prevented Heartbleed?
I think he has a point. Do we really need this specific technology from the future to prevent Heartbleed? No, because we have ASan and other things like that.
ASan is a runtime detector; it only works if you have a test case that covers the buggy thing, like in this case sending a malformed heartbeat request. OpenSSL pretty clearly didn't have a test case for that - if they did, they'd've caught the problem, with or without ASan. It's like the difference between a type system and unit testing.
You miss an important distinction between proof-based approaches (such as type systems) and testing (such as ASan): for any non-trivial problem it's impossible to test all potential inputs, and the inputs that trigger security issues tend to be ... unusual.
As Dijkstra said: "Program testing can be used to show the presence of bugs, but never to show their absence."
The Rust typechecker is a static analyzer, and thanks to the language's strict semantics it's more powerful and more reliable than any static analyzer that can be written for C or C++. The language also has a pluggable lint system, so that if you concoct your own correctness guidelines (say, "trigger a compiler warning whenever a buffer is reused"), you can easily implement it and distribute it as a regular Rust library so that others can benefit from it as well. See https://github.com/Manishearth/rust-clippy for an example of such user-defined lint passes.
We also have 'not using a custom malloc() wrapper that doesn't actually free memory as a wrapper around free()' to prevent Heartbleed, but instead of this technology being from the future, it's actually from decades ago.