As a die-hard C guy, Rust is the first "new systems programming language" since Cyclone and D that I didn't immediately dislike. A lot of really interesting ideas in here. I'd love to know what Mozilla uses this for internally.
That said, it's hard to imagine anything displacing C for me. Almost any systems code I write these days is something I'll eventually want to be able to expose to a high-level language (Lua, Python, Ruby, etc.). To do that you need code that can integrate into another language's runtime, no matter what memory management or concurrency model it uses. When you're trying to do that, having lots of rich concepts and semantics in your systems language (closures, concurrency primitives, garbage collection) just gets in the way, because you have to figure out how to map between the two. C's lack of features is actually a strength in this regard.
I do really like Rust's expressiveness though. The pattern matching is particularly nice, and that is an example of a feature that wouldn't "get in the way" if you're trying to bind to higher-level languages.
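For example, here's a hedged sketch in today's Rust syntax (the language has changed a lot since this thread; the enum and the names are mine, not from any real API). The compiler checks that a `match` covers every variant, which a C `switch` over a tagged union can't guarantee:

```rust
// A small enum and a match expression: the compiler verifies
// that every variant of Shape is handled.
enum Shape {
    Circle(f64),    // radius
    Rect(f64, f64), // width, height
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect(w, h) => w * h,
    }
}

fn main() {
    let r = Shape::Rect(3.0, 4.0);
    println!("{}", area(&r)); // prints 12
}
```

The equivalent C would typically be a struct with a kind tag and a union, with nothing stopping you from reading the wrong union member or forgetting a case.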
I've only recently started to look at D as a number-crunching language, but it certainly sounds like you know a bit more than me here.
Let me ask you a question or two: is there anything at all that makes it possible to interface D with Python? I'm aware of PyD [1] but it looks like it only worked with D1.
If I wanted to stick with D and Python, would it be the case that I'd have to (re-)write something like PyD from scratch, or is there a simpler approach?
A bit of Mozilla commentary: There is no "internally", really -- it's an open organization. So you can find out!
The answer, unhelpfully, is that so far the only significant rust program is its own compiler.
There are people who want to experiment with writing new browser stuff in it. Not sure how serious that is, and I bet it'll be a long while before you see Firefox shipping with Rust code.
I dislike duck typing; I much prefer to have an explicit declaration of what interfaces you are implementing. I also think the type system is too dynamic for a systems language (e.g. their answer to generics requires run-time type checking for every operation).
Also, just as a gut-level reaction I didn't feel excited about any of the expressiveness that Go offers (compared with my reaction to Rust's pattern matching, which to me is a clear improvement over how you'd express an equivalent thing in C or C++).
I must have misinterpreted the intent of this from "Go for C++ Programmers":
> Because the conversion is dynamic, it may be used to implement generic programming similar to templates in C++. This is done by manipulating values of the minimal interface.
I thought they were saying this was their answer to generic programming, but it appears this is not the case. In any case, it was just an example of my general feeling that the type system is more dynamic than I prefer for systems languages.
I particularly like the native-call (FFI) and testing support, but I agree with what you said; I wish there was a way (maybe there is and I just haven't figured it out yet) to do a reverse native thing; that is, create a C wrapper around a Rust library (and then yes, use SWIG to generate a Python / Ruby / etc. binding for the Rust library).
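For what it's worth, in today's Rust this "reverse" direction does exist (this uses current syntax, not the Rust of the time, and `rust_add` is just a made-up example name): you mark a function as having the C ABI, and any C binding generator can then call into the compiled library.

```rust
// Expose a Rust function with a C calling convention so that SWIG,
// ctypes, or plain C can call into the library. `#[no_mangle]`
// keeps the exported symbol name predictable.
#[no_mangle]
pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Callable from Rust too; from C you'd declare
    // `int32_t rust_add(int32_t, int32_t);` and link against the library.
    println!("{}", rust_add(2, 3)); // prints 5
}
```

Compiled as a shared library, the exported symbol looks like any other C function, so the existing SWIG/ctypes toolchain applies unchanged.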
I think your approach is wrong altogether: D actually frees you from using two different languages in your project, one high-level and one low-level. The whole idea is that the language is flexible enough to let you write both performance-critical and high-level code when you need to. That saves you all the trouble of interfacing, different runtimes, and so on.
But my project is a library whose functionality I want to expose to any language that cares to write wrappers for it. Reimplementing libraries in every language is a waste of effort and will have worse performance than sharing a single C library across many languages.
"With Rust, what Graydon has been trying to do is focus on safety and also on you could say concurrency -- the actor model which I've spoken about recently - and the two are related in subtle ways."
It's basically a planned replacement for C++ as the language used to program the Mozilla apps themselves (and native add-ons). It wouldn't make sense as a scripting language that runs on Mozilla.
It would be wise of any developers to give their new language a shakedown on smaller, less important projects before staking their whole business on it.
Of course, if it turns out to work amazingly well and they can port all their old code (or interface with it) with relative ease, there's no reason not to jump ship from the hellfires of C++ compilation. They've had major issues with that lately, after all.
System native executables: Win32, OS X, Linux, etc. This is a language for (among other things) prototyping browsers themselves, not running stuff inside browsers.
If anyone decides to give Rust a spin and is compelled to help out by providing feedback, the devs love to hear comments and criticism from users of the language:
Do you know if there is any explanation of 'unique pointers' and 'unique closures' anywhere? I'm quite interested to see some of the decisions, especially wrt the type-system.
Unique types are used to guarantee that only a single reference is ever held to a value. Sort of like the value has a single "owner". This restriction, while maybe a bit of a pain to program with, gives the compiler permission to do clever things.
In particular:
1. The compiler can detect when a value is no longer "owned" (referenced) by anything and free it automatically -- without garbage collection. That's really handy for things like closures where the compiler automatically allocated the memory for you in the first place.
2. If an immutable value is modified then a copy usually needs to be made. But if the immutable value is uniquely-referenced then the compiler can reuse the old bit of memory, thereby saving a copy operation. It can do this because it can prove the old memory can no longer be accessed.
3. I think Rust might also use uniqueness when sending values between its tasks. Since it can prove the value will no longer be referenced by the old task the compiler can avoid copying the values while still preserving isolation.
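Point 1 can be made concrete in today's terms (the `~T` unique-pointer syntax of early Rust later became `Box<T>`; the function name here is just illustrative):

```rust
// Box<T> is the modern spelling of a unique pointer: exactly one
// owner at a time, and the memory is freed automatically when the
// owner goes out of scope -- no garbage collector involved.
fn boxed_len(v: Box<Vec<i32>>) -> usize {
    v.len() // the vector is dropped (freed) when this function returns
}

fn main() {
    let a = Box::new(vec![1, 2, 3]);
    let b = a;                    // ownership moves; `a` is no longer usable
    // println!("{}", a.len());   // compile error: value moved out of `a`
    println!("{}", boxed_len(b)); // prints 3
}
```

Because the move invalidates the old name at compile time, the compiler can prove points 2 and 3 statically: no one else can observe the old memory, so it's safe to reuse it or hand it to another task without a copy.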
You're correct regarding (3) -- that's the main reason unique pointers were added to Rust in the first place. We call the heap of unique pointers the "exchange heap" for this reason.
Thanks, uniqueness typing has been a bit of a hobby of mine for a few years, so I'm familiar with 1 & 2. Uniqueness in system-level languages, however, is not something I know so much about, so something like 3 is pretty interesting.
One of the devs keeps a blog, and he has a post about implementing unique closures. (There are several posts on the topic actually, since I guess the idea evolved a bit with time. Just check out the Rust category for more...)
I grew up with computers in the 80s that didn't have Internet connectivity. I didn't get Internet access at home until 2005. Mailing lists confuse the hell out of me.
That's interesting, considering that mailing lists (like news/Usenet) were ideal for offline use, having been designed back in the day when Internet often had to be dialed up at specific times to exchange mail and news asynchronously (remember UUCP?). You can read and write submissions offline and submit them later.
Does anyone have a link to some examples of non-trivial Rust programs? It looks like a pretty neat language (despite making a few more distinctions than I tend to care about), but I'd like to see how it reads in practice.
I guess the Rust compiler is the most non-trivial Rust program at the moment. You can start there. I've been reading the lexer and parser code (cause I've got to write those for a class) and it's very easy to follow.
> as long as the 'mutable' keyword was something shorter :-
That's one thing I really like, actually: use of mutable structures should be avoided, so making them harder to use (because they require a pretty long extra keyword) is a good way to drive developers towards immutable equivalents. See it as shifting incentives.
OK, but the goal isn't to be Haskell here, or even ML. Those languages have all kinds of support for making immutable-everywhere a feasible goal (and the vast majority of developers still don't use them).
If you want the language to be actually liked by people who develop large systems, it must be designed with its users in mind. 'Nanny' languages tend not to be very popular.
In C++ "reinterpret_cast" is a good example of something that is long and ugly for a reason. But it's also very rare, probably an order of magnitude or two rarer than mutable would be in Rust (just a guess).
> OK, but the goal isn't to be Haskell here, or even ML.
So what?
> If you want the language to be actually liked by people who develop large systems, it must be designed with its users in mind.
Which does not prevent the language from driving users towards a goal. One of Rust's goals is emphasizing immutable structures; that's #8 on the front page of its website:
> immutable by default, mutability is the special case
Mutability is the special case, and a special case Rust tries to make people avoid.
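In concrete terms (current syntax; in the Rust of this era the keyword was `mutable` on fields, whereas today it's `mut` on bindings), the default-immutability looks like this:

```rust
// Bindings are immutable unless you opt in with `mut`,
// so mutation is visible right at the declaration site.
fn main() {
    let x = 1;
    // x += 1;          // compile error: cannot assign twice to immutable `x`
    let mut y = 1;
    y += 1;             // fine: `y` was declared mutable
    println!("{}", x + y); // prints 3
}
```

The extra keyword is the "incentive" being argued about above: doing the discouraged thing costs a few more characters.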
> In C++ "reinterpret_cast" is a good example of something that is long and ugly for a reason. But it's also very rare
Maybe I'm crazy, but it seems like a lot of these recent new languages are missing the mark. If you're designing a language from the ground up, why wouldn't you build unit testing in as a first-class citizen, for example? As well as profiling and instrumentation. How about a modular compiler that can round-trip back and forth between raw source and commented parse trees, so you can do much smarter merging in source control?
So many languages seem to be aiming at targets that are pretty far away from the major pain points for the normal developer.
Unit testing is a first-class citizen in Rust. You can annotate functions with #[test] and they become unit tests. You can then run tests with the built-in test harness on a module-by-module basis.
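Concretely, it looks like this (shown in current syntax; the function names are just an example):

```rust
// Marking a function with #[test] registers it with the built-in
// harness: compiling with `rustc --test file.rs` (or, later,
// `cargo test`) builds a binary that runs every such function
// and reports pass/fail.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[test]
fn add_works() {
    assert_eq!(add(2, 2), 4);
}

fn main() {
    // Without --test this compiles as a normal program and the
    // test function is simply never called.
    println!("{}", add(2, 2)); // prints 4
}
```

Tests live next to the code they cover, in the same module, with no external framework to pull in.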
Profiling and instrumentation are possible using the standard tools (xperf, Instruments/DTrace, perf/oprofile). Rust works just fine with those.
Not sure what you mean by "round-trip back and forth between compiled code and commented parse trees", but Rust contains a pretty-printer which preserves comments.
In my view that's still just bolted on unit-testing, though very thoroughly and seamlessly bolted on, I'll give you that.
What I want are tools that help manage the complexity of unit testing, built-in where it makes sense. For example, creating mock objects and ensuring they are reflective of what they are mocking is difficult. Imagine if you could take the built-in asserts and the unit-tests for a component and use them (or some tagged subset of them) as a spec. for automatically creating mocks. Imagine if determining unit-test coverage and which tests should be run based on code changes was automatic and trivial. Etc.
Certainly profiling can be performed on any binaries provided you have symbols for them, but is that really the best we can do? I have a hard time believing that adding support for instrumented binaries at the compiler level isn't a good idea.
What I mean by "round-trip between code and parse trees" is the ability to have easy access to parse tree structure either in code or to external tools. So that you can do things like easily build in refactoring support to IDEs, or to more intelligently merge code changes at a higher level than merely lines of text in a file.
Of course, none of these ideas are anywhere near fully baked, they would require research, experimentation, and a lot of hard work. But I'd rather see people pushing the boundaries of programming languages with novel research rather than just throwing yet another mashup of already existing features out into the wilds in hopes it'll survive.
Can you clarify a few things? These are bikeshed issues, but they bother me nonetheless.
* Why was fn foo(bar: int) -> int chosen and not fn foo(bar:int): int?
* Why annotations within comments? Annotations aren't comments; it smells of a C bolt-on. Why not keyword annotations?
I've always believed that a language should have as little native as possible; the more you can off-load into a library, the better because it lets people extend the language in the language.
Why have first-class unit tests when they could be built on top of some other feature? If tests are part of the language, then using a different testing system (say something in the style of QuickCheck) would probably be at a disadvantage; if you have a language that can support expressive testing as a library, then it would be possible to add different styles of testing.
Heh, I actually thought about that talk (and Guy Steele) while I was writing my post. I wanted to include a quote from there, but didn't remember the title of the talk. Thanks for finding it :)
Edit: Actually, I was thinking of a different talk. Or maybe just a random quote I saw somewhere. This talk is still very interesting.
Tests are mostly built-in to make sure there's no excuse not to test, but the support is intended to be minimal, so that other test frameworks can build on top of it (though it's not clear exactly how yet).
Rust offers better control over memory layout, more predictable resource usage, and a fuller concurrency model. If it works out, it will be a good replacement for OCaml, especially for systems development.
They need to learn from Google how to present a new language. Although maybe they will once it reaches 1.0. When Google showed off Go they had some really great examples and a clean web page for it. Got me excited, this hasn't.
The language looks OK. If they are going for world domination, a more likely path seems to be evolving C very slowly. That is, they should start from C and then every few years add a few new features and remove some old crummy features until they finally reach Rust.
Lots of people probably don't like rewriting code bases wholesale and new languages take a while to become trusted.
Backwards-compatibility is held extremely dear to the C community, with those few breaking changes being simple (usually) to work around. If you want to break compatibility, you have a better chance calling it a new language.
That's true, but new modes can be added with switches, similar to how gcc has -std=c89, -std=c99 and whatnot. Newer modes could disable old features and people could upgrade at will.
Except that there's no business case for rewriting old code that still works and is still supported by the compiler. And then suddenly, the compiler team has to support multiple, very different versions in the same release.
With 10 years between each such 'mode', and the differences still being relatively minute, it would take centuries to morph C into something else. Which is a good thing, don't get me wrong. It just shows that the proposed approach isn't feasible.
As a point of thought: you could have a language where you introduce small changes incrementally, plus a built-in tool like Go's gofix that goes over an existing code base updating whatever has become obsolete.
The current approaches of "leave things substantially unchanged" and "release a bunch of changes all together" have not exactly been proven "the best".
Ada has a reputation for being dramatically overengineered, especially among programmers with exposure to Ruby or Haskell (or Lisp, naturally). I've never seen a demonstration of how well it works for rapidly prototyping code and "filling in the blanks" iteratively. Would you mind providing examples?