
Depends on what you mean by "research". If you mean "problems with a high degree of novelty" then quite a lot of research, I think. Rust is unusual: it requires a lot of tricky type inference and static analysis from the compiler, and doesn't make developers structure their code with header files etc. to make separate compilation easier; BUT unlike most languages with those properties, people are writing large real-world projects in it.



My understanding is that Rust's slowness is mainly from code generation and optimization. `cargo check` performs full parsing and type-checking, and runs reasonably quickly.

The problem starts when Rust throws a ton of IR at LLVM. The current focus is on adding optimizations in Rust at a higher level (MIR) so that less work lands on LLVM.
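
To illustrate one big source of that IR volume (a rough sketch, not something from the thread): generics are monomorphized, so each concrete instantiation of a generic function is lowered into its own copy of LLVM IR. The `largest` function here is made up purely for illustration:

    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        // Hypothetical generic helper; every distinct T used below gets
        // its own monomorphized copy handed to LLVM.
        let mut max = items[0];
        for &x in &items[1..] {
            if x > max {
                max = x;
            }
        }
        max
    }

    fn main() {
        println!("{}", largest(&[3, 1, 5]));       // instantiates largest::<i32>
        println!("{}", largest(&[3.0, 1.0, 5.0])); // instantiates largest::<f64>
    }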


That is a problem.

There is another major problem, which is that rustc/cargo do not do a good job of compiling crates in parallel when one depends on the other.

There may well be other problems too.


Imagine you have 16 cores available. Allowing parallel compilation where it is currently sequential, you get a 16x speedup on such a machine. A 32-core machine -- a 32x speedup, etc. Or even 10 such machines -- a 320x speedup.

On the other hand, imagine you reduce the "ton of IR" even tenfold: if compilation stays sequential, that tenfold speedup is all you get, no matter whether you have 32 or 320 cores.

In general, allowing for parallel compilation is a much more important topic, and it's even worth redesigning the language to make it more achievable.

Which also doesn't mean that producing a "ton of IR" is a good thing, but working on allowing maximum parallelism is more fundamental. Discovering algorithms that make every compiler pass as efficient as possible also matters for reaching wider usability. But Rust must additionally deliver on its promise of being "safer" in the scenarios where C is otherwise used, and time and again I see a lot of "unsafe" keywords in the sources whenever I look at the implementation of anything fundamental.
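
To spell out why that keyword shows up in fundamental code (a minimal sketch, modeled on how the standard library's split_at_mut works internally, not taken from any particular project): handing out two disjoint mutable views of one slice can't be proven safe by the borrow checker directly, so it's done with raw pointers and wrapped in a safe API:

    // Hypothetical re-implementation for illustration only.
    fn split_halves<T>(v: &mut [T], mid: usize) -> (&mut [T], &mut [T]) {
        let len = v.len();
        assert!(mid <= len);
        let ptr = v.as_mut_ptr();
        unsafe {
            // Safe callers can never trigger UB here: the two halves are
            // disjoint and stay within the original slice's bounds.
            (
                std::slice::from_raw_parts_mut(ptr, mid),
                std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }

    fn main() {
        let mut data = [1, 2, 3, 4, 5];
        let (left, right) = split_halves(&mut data, 2);
        left[0] = 10;
        right[0] = 30;
        println!("{:?} {:?}", left, right);
    }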


Modulo the necessarily sequential parts (as mentioned by rayiner), you are correct in terms of the absolute speed achievable. But you get that speedup at a roughly linear increase in $$$ cost. When you have idle cores sitting around, maybe that was opportunity cost you were already paying, but that's bounded.

On the other hand, reductions in the amount of computation needed mean you get a faster build for the same resources, or a similar-speed build for cheaper.

Depending on your circumstances, one of these things (speed vs. cost) might be much more important than the other. We cannot forget about either.


Except Amdahl’s law.
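
Spelling that out (a rough sketch with a made-up 90% parallel fraction): if a fraction p of the work parallelizes, Amdahl's law gives a speedup of 1 / ((1 - p) + p/n) on n cores, so the sequential remainder caps the gain no matter how many cores you add:

    // Amdahl's law: with a fraction `p` of the work parallelizable,
    // speedup on `n` cores is 1 / ((1 - p) + p / n), capped at 1 / (1 - p).
    fn amdahl_speedup(p: f64, n: f64) -> f64 {
        1.0 / ((1.0 - p) + p / n)
    }

    fn main() {
        // Made-up example: if 90% of a build parallelizes, 16 cores give
        // ~6.4x, 32 give ~7.8x, and 320 still only ~9.7x (the cap is 10x).
        for &n in &[16.0, 32.0, 320.0_f64] {
            println!("{:>5} cores: {:.1}x", n, amdahl_speedup(0.9, n));
        }
    }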



