> I see C and I wonder if that's the best we can do for a systems programming language. Maybe C is just a local optimum that we have just invested too much manpower into to switch to anything else?
At this point the ecosystem around C is too big to justify the cost of switching for the domains it's used in. Any new contender is going to have to offer something new, without giving up any cost in performance or control over the machine.
With Rust (disclaimer: I work on Rust) we're shooting for safety to give us that edge—having the compiler reliably prevent use-after-frees and buffer overflows instead of discovering them through 0-days, without the traditional approaches that sacrifice performance and control over the machine (GC), seems pretty compelling to me. Even if we do succeed in getting enough new projects to use a safe language to make a difference, though, C will still be immortal; once a language has that much staying power it'll be around forever.
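To make that concrete, here's a minimal sketch (current Rust syntax, not code from any real project) of the kind of program the compiler refuses to build: the borrow would outlive the vector it points into, so the use-after-free is caught at compile time rather than in production.

```rust
fn main() {
    let r;
    {
        let data = vec![1, 2, 3];
        r = &data; // error: `data` does not live long enough
    } // `data` is freed here, so `r` would be a dangling reference...

    println!("{:?}", r); // ...and this use is what makes the compiler reject the program
}
```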
Let me first say that I applaud Rust and the effort you're making in developing a new, safe systems-programming language. But I do have one concern.
I like this quote by an Intel engineer, which appears on the back of Java Concurrency in Practice: "For the past 30 years, computer performance has been driven by Moore's Law; from now on, it will be driven by Amdahl's Law."
I think that this is the biggest challenge facing software development in this age. Now, I know concurrency is core to Rust, and I know that Rust adopts message passing as the principal concurrency construct. Message passing is terrific, extremely useful, and as close as we've got to a model of programming multi-core that's simple and easy to grasp. But at the end of the day, we do need a way to develop efficient mutable, concurrent data structures. Languages that rely on message passing, like Erlang, usually pass this problem down to a database, but the problem still has to be solved.
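For reference, here is roughly what that message-passing style looks like with today's Rust standard library (a minimal sketch using std::sync::mpsc; the workload is made up):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    for id in 0..4 {
        let tx = tx.clone();
        thread::spawn(move || {
            // Each worker owns its own data and communicates only by sending values.
            tx.send((id, id * id)).unwrap();
        });
    }
    drop(tx); // close the original sender so the receive loop can terminate

    for (id, square) in rx {
        println!("worker {} produced {}", id, square);
    }
}
```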
You say that languages resorting to GC "sacrifice performance and control over the machine". But GC already provides better throughput than manual memory allocation in practically all circumstances, and suffers mostly from latency issues (pauses), but even those are being worked on with some good progress (like the work done by Azul on their JVM). The only major true sacrifice GC requires is extra RAM, which is becoming ever cheaper.
GC is currently the best way to develop efficient concurrent (shared) data structures. There are ways of writing those without a general-purpose GC (RCU and hazard pointers), but those either require kernel cooperation, or suffer from worse worst-case performance, or quickly turn into a GC when generalized (often some combination of the three).
I know the plan is for Rust to have a GC as part of the libraries, but this problem (of developing efficient concurrent data structures) must be addressed. It's possible that hardware has become too complicated for us not to give up some control over to some runtime. Maybe we're at a stage where low-level full-control programming is incompatible with fully utilizing the hardware for best performance. It is possible that low-level systems programming languages will excel in resource constrained environments and have several other advantages, but sheer performance/scalability won't be one of them (at least on server-class hardware).
> But at the end of the day, we do need a way to develop efficient mutable, concurrent data structures. Languages that rely on message passing, like Erlang, usually pass this problem down to a database, but the problem still has to be solved.
Rust fully supports concurrent data structures with shared mutable state, and there are several in the libraries.
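As a rough illustration using only the standard library (a generic sketch, not one of the specific structures shipped in the libraries): shared mutable state lives behind something like Arc plus Mutex, and the type system rejects unsynchronized access to it at compile time.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc gives shared ownership across threads; Mutex serializes mutation.
    let counts: Arc<Mutex<HashMap<String, u32>>> = Arc::new(Mutex::new(HashMap::new()));

    let mut handles = Vec::new();
    for i in 0..4 {
        let counts = Arc::clone(&counts);
        handles.push(thread::spawn(move || {
            let mut map = counts.lock().unwrap();
            *map.entry(format!("worker-{}", i % 2)).or_insert(0) += 1;
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }

    println!("{:?}", counts.lock().unwrap());
}
```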
> But GC already provides better throughput than manual memory allocation in practically all circumstances
This is far too broad a statement. I can certainly come up with cases in which manual memory management will outperform GC. For example, if you have an arena-like pattern, as in the binary-trees benchmark, I think it's impossible for a GC to outperform manual memory management. Even if you bump-allocate in the nursery, you still have to copy surviving objects to the tenured generation, which reduces throughput relative to a plain bump allocator.
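As a sketch of the pattern I mean (illustrative code, not the actual benchmark implementation): every node lives in one Vec, "allocation" is a push, and the whole tree is freed in a single deallocation when the Vec is dropped.

```rust
// Index-based arena: nodes refer to each other by index instead of by pointer.
struct Arena {
    nodes: Vec<Node>,
}

#[derive(Clone, Copy)]
struct Node {
    left: Option<usize>,
    right: Option<usize>,
}

impl Arena {
    fn new() -> Arena {
        Arena { nodes: Vec::new() }
    }

    // "Allocating" a node is a bump of the Vec's length.
    fn build(&mut self, depth: u32) -> usize {
        let children = if depth > 0 {
            (Some(self.build(depth - 1)), Some(self.build(depth - 1)))
        } else {
            (None, None)
        };
        self.nodes.push(Node { left: children.0, right: children.1 });
        self.nodes.len() - 1
    }

    fn count(&self, idx: usize) -> usize {
        let node = self.nodes[idx];
        1 + node.left.map_or(0, |i| self.count(i))
          + node.right.map_or(0, |i| self.count(i))
    }
}

fn main() {
    let mut arena = Arena::new();
    let root = arena.build(10);
    println!("nodes: {}", arena.count(root));
} // the entire tree is released here in one deallocation, with no per-node work
```

If the tree survives a nursery collection, a generational GC still has to trace it and copy the survivors, which is exactly the extra work the arena avoids.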
> and suffers mostly from latency issues (pauses), but even those are being worked on with some good progress (like the work done by Azul on their JVM).
Azul C4 generally requires a kernel extension to perform well, reducing its applicability in practice (desktop/mobile software). It also suffers from somewhat reduced throughput over the HotSpot garbage collector, according to the paper. This is not to bash Azul C4, of course—it's a really exciting piece of technology—but I feel that it's often held up as a solution to all of the problems of garbage collection when it, too, has tradeoffs.
> It's possible that hardware has become too complicated for us not to give up some control over to some runtime. Maybe we're at a stage where low-level full-control programming is incompatible with fully utilizing the hardware for best performance.
I don't see this being the case in practice quite yet. Java HotSpot, which features the best widely used GC, is routinely outperformed by low-level C++. At this point the burden of proving that garbage collection outperforms manual memory management in practice lies with the proponents of pervasive concurrent GC. It may well happen, but I don't think we're there yet.
> Rust fully supports concurrent data structures with shared mutable state, and there are several in the libraries.
Are they lock-free? If so, how do you do it without a GC?
> Java HotSpot, which features the best widely used GC, is routinely outperformed by low-level C++
This is true mostly of single-threaded computations. Also, as another commenter points out, Java's main problem is the lack of arrays of structs, which makes locality difficult. That is being worked on (and is almost completely orthogonal to the issue of GC), and will hopefully be at least partially resolved in Java 9[1].
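To illustrate the arrays-of-structs point (a made-up micro-example, not a benchmark): in Rust, as in C, a Vec of plain structs is one contiguous block, so a scan walks memory linearly; boxing every element, which is roughly the layout a Java object array gives you today, turns the same scan into pointer chasing across the heap.

```rust
#[derive(Clone, Copy)]
struct Point {
    x: f64,
    y: f64,
}

fn sum_inline(points: &[Point]) -> f64 {
    // Contiguous layout: several Points per cache line, prefetcher-friendly linear walk.
    points.iter().map(|p| p.x + p.y).sum()
}

fn sum_boxed(points: &[Box<Point>]) -> f64 {
    // Every element is a separate heap allocation reached through a pointer.
    points.iter().map(|p| p.x + p.y).sum()
}

fn main() {
    let inline: Vec<Point> = (0..1_000_000)
        .map(|i| Point { x: i as f64, y: 1.0 })
        .collect();
    let boxed: Vec<Box<Point>> = inline.iter().map(|p| Box::new(*p)).collect();

    println!("{} {}", sum_inline(&inline), sum_boxed(&boxed));
}
```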
My point is that while we need new languages now, we also need to prepare them for a many-core future. Once you have over 100 cores, many locking schemes stop scaling[2]. I'm not saying Rust specifically has to think ahead on this, but I think a new systems programming language should, especially if its goal is to replace C for the next 40 years. Unless, that is, what I said above turns out to be true and low-level programming gives us good control over resources but not the best performance; or the many-core CPU future isn't coming (I say CPU because we're already in the many-core SIMD present with modern GPUs).
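As a toy illustration of the locking problem (shard count and workload are arbitrary): a single global lock serializes every core, whereas striping the state into per-thread shards keeps cores mostly out of each other's way. Real implementations also pad shards to cache-line size to avoid false sharing, which this sketch skips.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    const THREADS: usize = 8;
    const OPS: u64 = 100_000;

    // Contended version: one lock that every thread fights over.
    let single = Arc::new(Mutex::new(0u64));
    // Striped version: one counter per thread, summed at the end.
    let shards: Arc<Vec<AtomicU64>> =
        Arc::new((0..THREADS).map(|_| AtomicU64::new(0)).collect());

    let mut handles = Vec::new();
    for t in 0..THREADS {
        let single = Arc::clone(&single);
        let shards = Arc::clone(&shards);
        handles.push(thread::spawn(move || {
            for _ in 0..OPS {
                *single.lock().unwrap() += 1;              // serializes all cores
                shards[t].fetch_add(1, Ordering::Relaxed); // mostly core-local
            }
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }

    let striped_total: u64 = shards.iter().map(|s| s.load(Ordering::Relaxed)).sum();
    println!("single lock: {}, striped: {}", *single.lock().unwrap(), striped_total);
}
```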
"But GC already provides better throughput than manual memory allocation in practically all circumstances"
You could drive a bus through the exceptions that "practically" lets through in that claim. In the kinds of problems I solve, the single biggest driver of throughput is cache locality and branch prediction. Every time I have to go up a level in the memory-cache hierarchy, I lose throughput.
There is nothing saying that GC-based solutions couldn't get to the point where they are better about cache locality than manual allocation, but they aren't there yet.
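One small, self-contained way to see the cache-level effect (sizes and constants are arbitrary): summing the same array sequentially versus in a scattered order does identical arithmetic, but the scattered walk defeats the prefetcher and pushes most loads out to the farther cache levels or to RAM.

```rust
use std::time::Instant;

fn main() {
    const N: usize = 1 << 24; // 16M u64s = 128 MiB, far larger than typical L3
    let data: Vec<u64> = (0..N as u64).collect();

    // Sequential walk: the hardware prefetcher keeps upcoming lines in cache.
    let t = Instant::now();
    let mut seq = 0u64;
    for i in 0..N {
        seq = seq.wrapping_add(data[i]);
    }
    let seq_time = t.elapsed();

    // Scattered walk: same loads and adds, but each access hits an
    // unpredictable cache line. (i * STEP) & (N - 1) is a permutation of
    // 0..N because STEP is odd and N is a power of two.
    const STEP: usize = 0x9E3779B1;
    let t = Instant::now();
    let mut scat = 0u64;
    for i in 0..N {
        scat = scat.wrapping_add(data[i.wrapping_mul(STEP) & (N - 1)]);
    }
    let scat_time = t.elapsed();

    println!("sequential: {} in {:?}", seq, seq_time);
    println!("scattered:  {} in {:?}", scat, scat_time);
}
```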
the "who's memory is this anyway" type checking thing in Rust is one of the most interesting advancements of the state of the art of low-level programming that I'm aware of in my lifetime.