Hacker News

When I was an undergrad at UW, I engineered a "vacuum" exploit that took advantage of a JDK 1.1 bug in constant pool verification to suck out the user's in-memory environment configuration (PATH variables and the like).

http://www.cs.cornell.edu/People/egs/kimera/flaws/vacuum/

But most security holes these days are not flaws in the JVM/CLR but logical errors made in the application; e.g. allocating a buffer and reusing it, a mistake that says nothing about memory safety at the VM level at all!
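For instance, here is a minimal sketch of that kind of application-level bug (the `fill_reply` function and the message contents are hypothetical, made up for illustration): a perfectly memory-safe program that reuses a scratch buffer without truncating it leaks bytes of an earlier, longer message into a later, shorter one.

```rust
fn fill_reply(buf: &mut Vec<u8>, msg: &[u8]) -> Vec<u8> {
    // Bug: overwrite the buffer in place, but never truncate it
    // down to msg.len().
    for (i, &b) in msg.iter().enumerate() {
        if i < buf.len() {
            buf[i] = b;
        } else {
            buf.push(b);
        }
    }
    buf.clone() // any stale tail from a longer earlier message survives
}

fn main() {
    let mut scratch: Vec<u8> = Vec::new();
    let first = fill_reply(&mut scratch, b"top-secret-token");
    let second = fill_reply(&mut scratch, b"hi");
    assert_eq!(first, b"top-secret-token".to_vec());
    // "hi" plus 14 leftover bytes of the secret: a pure logic bug.
    assert_eq!(second, b"hip-secret-token".to_vec());
}
```

No language or VM guarantee is violated here; the data leak lives entirely at the application level, which is exactly why a memory-safe runtime can't catch it.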

And at that, no one really trusts JVM security anymore, preferring to sandbox the entire operating environment in a heavyweight or lightweight VM.




> But most security holes these days are not flaws in the JVM/CLR but logical errors made in the application; e.g. allocating a buffer and reusing it, a mistake that says nothing about memory safety at the VM level at all!

But most applications are written in memory-safe languages, so it's not surprising that most vulnerabilities are found in areas unrelated to memory safety. The more interesting statistic is the number of critical security bugs that are memory-safety related in non-memory-safe languages.


That doesn't address what I said in the sentence you quoted at all. We were discussing Java and the vulnerabilities that arise from flaws in the JVM. You are talking about something completely different.


Neat! In 2003 I was writing an obfuscator for .NET, leading me to explore quite a bit. It was fun.

But app level bugs aren't what gives Java a bad name, are they? When someone says, like the GP, that "what about Java, that's got tons of security issues", that's almost certainly from its use as a browser plugin. Otherwise everyone would be saying the same about every language out there.


We found lots of bugs in Microsoft's Java implementation at the time; they offered us lots of money for our test suite but Brian wanted to do a startup :) Anyways, if you want to break something, you can usually get there with fuzz testing (but these days, most sane organizations will fuzz themselves).
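A fuzzer doesn't need to be fancy to find these bugs. A minimal sketch of the idea (the `parse` target and its planted bug are invented for illustration): mutate a seed input with a deterministic PRNG and watch for crashes.

```rust
use std::panic;

// Toy target with a planted bug: any input starting with "FU" panics.
fn parse(input: &[u8]) -> usize {
    if input.starts_with(b"FU") {
        panic!("parser bug triggered");
    }
    input.len()
}

// One-byte mutation fuzzer driven by a tiny xorshift64 PRNG,
// so every run is deterministic and reproducible.
fn fuzz(seed: &[u8], iterations: u32) -> Option<Vec<u8>> {
    let mut state: u64 = 0x2545_f491_4f6c_dd1d;
    let mut rand = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        state
    };
    for _ in 0..iterations {
        let mut case = seed.to_vec();
        let idx = rand() as usize % case.len();
        case[idx] = (rand() % 256) as u8;
        let input = case.clone();
        if panic::catch_unwind(move || parse(&input)).is_err() {
            return Some(case); // found a crashing input
        }
    }
    None
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // keep the crash output quiet
    let crash = fuzz(b"FUZZ", 10_000);
    assert!(crash.is_some());
    println!("crashing input: {:?}", crash.unwrap());
}
```

Real fuzzers (AFL, libFuzzer) add coverage feedback and corpus management on top, but the core loop really is this simple, which is why it's so effective against unhardened parsers.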

Ya, when people say Java is insecure, they usually mean the Java browser plugin has insecure interfaces. As browser + Java becomes increasingly rare, it fades from our memory. There is nothing inherently insecure or secure about the language; memory safety actually works... but it's only one small part of having a secure environment.

I worry about Rust: it pushes the safety card much more aggressively, but in reality, without full-on dependent typing, it will only be able to guarantee a few basic properties. The language won't magically make your code "secure", just a bit easier to secure.


> I worry about Rust: it pushes the safety card much more aggressively, but in reality, without full-on dependent typing, it will only be able to guarantee a few basic properties.

Our analysis of the security benefit of Rust comes from two very simple, empirical facts:

1. Apps written in memory-safe languages do not have nearly the same numbers of memory safety bugs (use after free, heap overflow, etc.) as apps written in C and C++ do.

2. Memory safety issues account for the largest share of critical RCE bugs in browser engines.
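To make the bug class in point 1 concrete, here is a minimal sketch of how Rust's borrow checker rejects a use-after-free at compile time (plain standard Rust, nothing project-specific):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0]; // shared borrow into v's heap buffer

    // The C++ shape of the bug: take a pointer into the buffer, free
    // the buffer, then read through the dangling pointer. In Rust,
    // uncommenting the next line is rejected while `first` is live:
    // v.clear(); // error[E0502]: cannot borrow `v` as mutable

    println!("first = {first}"); // last use: the borrow ends here
    v.clear(); // fine now: no outstanding references into the buffer
    assert!(v.is_empty());
}
```

The check is purely static: the rejected program never compiles, so the entire class of bug is gone before the code ships, with no runtime cost.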

> The language won't magically make your code "secure", just a bit easier to secure.

Of course it won't magically make your code secure. Applications written in Rust will have security vulnerabilities, some of them critical. But I'm at a loss as to how you can claim that getting rid of all the use-after-free bugs (just to name one class) inside a multi-million-line browser engine in a language with manual memory management is easy. Nobody has ever succeeded at it, despite over a decade of sustained engineering effort on multiple browser engines.


> But I'm at a loss as to how you can claim that getting rid of all the use-after-free bugs (just to name one class) inside a multi-million-line browser engine in a language with manual memory management is easy

I didn't claim it was "easy", just that bugs would still exist and the browser's "securedness" would only increase marginally. The problem isn't that this wouldn't be an achievement; it's one of managing expectations (e.g. saying "Rust is secure", which doesn't make sense).

> Nobody has ever succeeded at it, despite over a decade of sustained engineering effort on multiple browser engines.

It is a nice goal, but the question is: where is the world afterwards? Will Firefox all of a sudden become substantially more secure and robust vs. its competition, to the extent that it can outcompete them and increase its market share significantly?

My brief experience at Coverity makes me guess that you could get rid of one class of bugs without necessarily improving the product in any noticeable way... that, ya, those bugs were common, but neither particularly easy to exploit nor hard to fix once found.

Not that the effort isn't worthy at all. I'm just a bit cynical when it comes to seeing the tangible benefits.


> I didn't claim it was "easy", just that bugs would still exist and the browser's "securedness" would only increase marginally.

I disagree with the latter. Based on our analysis, Rust provides a defense against the majority of critical security bugs in Gecko.

> It is a nice goal, but the question is: where is the world afterwards? Will Firefox all of a sudden become substantially more secure and robust vs. its competition, to the extent that it can outcompete them and increase its market share significantly?

You've changed the question from "will this increase security" to "is improved security going to result in users choosing Firefox en masse". The latter question is a business question, not a technical question, and not one relevant to Rust or this thread. At the limit, it's asking "why should engineering resources be spent improving the product".

Rust is a tool to defend against memory safety vulnerabilities. It's also a tool to make systems programming more accessible to programmers who aren't long-time C++ experts and to make concurrent and parallel programming in large-scale systems more robust. The combination of those things makes it a significant advance over what we had to work with before, in my mind.

> My brief experience at Coverity makes me guess that you could get rid of one class of bugs without necessarily improving the product in any noticeable way... that, ya, those bugs were common, but neither particularly easy to exploit nor hard to fix once found.

It is true that exploitation of UAF (for example) is not within the skill level of most programmers and that individual UAFs are easy to fix. But "hard for most programmers to exploit and easy to fix" doesn't seem to be much of a mitigation. For example, the Rails YAML vulnerability was also hard to exploit (requiring knowledge of Ruby serialization internals and vulnerable standard library constructors) and easy to fix (just disable YAML), but it was rightly considered a fire-drill operation across Web sites the world over. The "smart cow" phenomenon ensures that vulnerabilities that start out difficult to exploit become easy to exploit when packaged up into scripts, if the incentives are there to do so. Exploitable use-after-free vulnerabilities in network-facing apps are like the Rails YAML vulnerabilities: "game-over" RCEs (possibly when combined with sandbox escapes).


>the browser's "securedness" would only increase marginally

The developers' claim is that more than half of all security bugs are due to memory safety issues and that Rust will solve these. More than halving the number of bugs doesn't sound marginal to me.


I'm not sure why you say this. Go look over Microsoft's CVEs for the past two years. I did, and, apart from the CLR-in-a-browser scenario, nearly every single critical CVE was a direct result of a memory-safety flaw.

In other words, if we magically went back in time and wrote all MS products in Rust instead of C++, their CVE count for RCEs, their famous worms, etc. would all disappear (except in the cases where they explicitly opted into unsafe features).


Those worms would disappear, but you can't say for sure that the crackers wouldn't just find other vulnerabilities to focus their efforts on exploiting. That is to say, having gone through the cracking process myself (for research purposes, of course!), you go after the lowest-hanging fruit you can find, and once that fruit is gone you move on to the next lowest.

Back in the 90s and early 00s, a lot of the low-hanging fruit was buffer overflows or forged pointers. Then we got serious about fuzz testing and static analysis, and now they are picking at other fruit (which is why Heartbleed was so weird).


OK, then look at the CVEs for the last couple of years. The reward for finding a 0-day RCE in an MS product is so high that I don't think it's accurate to say it's just low-hanging fruit.


That idea of "easier to secure" led me to this scoring system. http://deliberate-software.com/programming-language-safety-a...

I'd love to see Rust shown too.



