
No, but to be safe enough it has to be safer than the Rust compiler, because the latter doesn't get run on untrusted code (with the result automatically executed)[1]. If bounds checks exist in the compiler IR, they're subject to optimization, which is very helpful for performance but also risky: an incorrect optimization can easily cause memory unsafety. Optimizer bugs in modern backends are rarely encountered in practice, but from a security perspective, that's like saying your C++ program never crashes in practice: it helps, but it doesn't prove the absence of bugs that can only be triggered by pathological inputs; such bugs in fact tend to be quite common.
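To illustrate the point about bounds checks living in the IR: here's a minimal sketch (my example, not from the parent) of the kind of code where rustc emits a bounds check on every index, and the optimizer is expected to prove the index in-range and hoist or eliminate the check. Safety then rests on the optimizer only removing checks it can actually prove redundant; a miscompilation that drops an unprovable check silently reintroduces out-of-bounds access.

```rust
// Each v[i] carries a panic-on-out-of-bounds check in the unoptimized IR.
// LLVM can prove i < v.len() from the loop bound and remove the checks,
// which is exactly the optimization that is great for performance but
// dangerous if a backend bug removes a check it could not prove safe.
fn sum(v: &[i32]) -> i32 {
    let mut total = 0;
    for i in 0..v.len() {
        total += v[i];
    }
    total
}

fn main() {
    assert_eq!(sum(&[1, 2, 3]), 6);
    assert_eq!(sum(&[]), 0);
}
```

(Writing the loop as `v.iter().sum()` avoids the checks entirely, which is why idiomatic Rust often doesn't pay for them in the first place; the indexed form is shown deliberately.)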

I've never tried to find an optimizer bug in LLVM, but I have found more than one in V8, so I have some idea what I'm talking about.

[1] More specifically, this doesn't happen in situations where correctness of the generated code is relied on to provide safety guarantees. There are several websites that will compile and run Rust code for you, but none of them try to ban unsafe code, or filesystem/syscall access for that matter, at the language level; rather, their security model relies entirely on the OS sandbox the process runs in. Google's PNaCl uses (or used to use?) LLVM on untrusted code, but AFAIK the output of LLVM, the machine instructions, is still run through the NaCl validator, so getting LLVM to miscompile something wouldn't accomplish much. (NaCl also runs both LLVM itself and the untrusted code in an OS sandbox.)
