Mozilla releases version 0.1 of the Rust programming language (mail.mozilla.org)
236 points by pcwalton on Jan 23, 2012 | hide | past | favorite | 82 comments



As a die-hard C guy, Rust is the first "new systems programming language" since Cyclone and D that I didn't immediately dislike. A lot of really interesting ideas in here. I'd love to know what Mozilla uses this for internally.

That said, it's hard to imagine anything displacing C for me. Almost any systems code I write these days is something I'll eventually want to be able to expose to a high-level language (Lua, Python, Ruby, etc). To do that you need code that can integrate into another language's runtime, no matter what memory management or concurrency model it uses. When you're trying to do that, having lots of rich concepts and semantics in your systems language (closures, concurrency primitives, garbage collection) just gets in the way, because you have to find out how to map between the two. C's lack of features is actually a strength in this regard.

I do really like Rust's expressiveness though. The pattern matching is particularly nice, and that is an example of a feature that wouldn't "get in the way" if you're trying to bind to higher-level languages.
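For anyone curious what that pattern matching buys you over C-style tag checks, here's a sketch in present-day Rust syntax (Rust 0.1 spelled the construct `alt` rather than `match`, so this is illustrative, not 0.1 code):

```rust
// A small expression tree, destructured with match arms instead of
// the switch-on-tag-then-cast dance you'd write in C.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Mul(a, b) => eval(a) * eval(b),
    }
}

fn main() {
    // (2 + 3) * 4
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Num(2)), Box::new(Expr::Num(3)))),
        Box::new(Expr::Num(4)),
    );
    println!("{}", eval(&e)); // prints 20
}
```

The compiler also checks the match is exhaustive, which a C switch over a tag field can't guarantee.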


I've only recently started to look at D as a number-crunching language, but it certainly sounds like you know a bit more than me here.

Let me ask you a question or two: is there anything at all that makes it possible to interface D with Python? I'm aware of PyD [1] but it looks like it only worked with D1.

If I wanted to stick with D and Python, would it be the case that I'd have to (re-)write something like PyD from scratch, or is there a simpler approach?

[1] http://pyd.dsource.org/


If you don't mind writing a little C, that could serve as a bridge, since both runtimes can speak C.


A bit of Mozilla commentary: There is no "internally", really -- it's an open organization. So you can find out!

The answer, unhelpfully, is that so far the only significant Rust program is its own compiler.

There are people who want to experiment with writing new browser stuff in it. Not sure how serious that is, and I bet it'll be quite a while before you see Firefox shipping with Rust code.


The emphasis on memory-safety suggests to me that they'll eventually be encouraging plugin authors to use it.


Can you elaborate on what (if anything) you immediately disliked about Go?


I dislike duck typing; I much prefer an explicit declaration of which interfaces you are implementing. I also think the type system is too dynamic for a systems language (e.g. their answer to generics requires run-time type checking for every operation).

Also, just as a gut-level reaction I didn't feel excited about any of the expressiveness that Go offers (compared with my reaction to Rust's pattern matching, which to me is a clear improvement over how you'd express an equivalent thing in C or C++).
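On the explicit-interfaces point: Rust's story is nominal, meaning a type opts in to an interface with an explicit impl block. (Rust 0.1's `iface`/`impl` syntax differed from today's `trait`; this is a modern-syntax sketch of the same idea.)

```rust
trait Area {
    fn area(&self) -> f64;
}

struct Circle {
    r: f64,
}

// The implementation is declared explicitly -- unlike Go, where any
// type with a matching method set satisfies an interface implicitly.
impl Area for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.r * self.r
    }
}

fn main() {
    let c = Circle { r: 1.0 };
    println!("{:.2}", c.area());
}
```

Removing the `impl Area for Circle` block is a compile error at the use site, so the relationship between type and interface is always visible in the source.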


> their answer to generics requires run-time type checking for every operation

Not quite: the Go implementation has runtime optimizations so that the cost of using interfaces/generics is incurred once, at first use: http://research.swtch.com/2009/12/go-data-structures-interfa...


> their answer to generics requires run-time type checking for every operation

Go has no built-in answer to generics. Trying to rebuild generics with other language features won't make it better.


I must have misinterpreted the intent of this from "Go for C++ Programmers":

> Because the conversion is dynamic, it may be used to implement generic programming similar to templates in C++. This is done by manipulating values of the minimal interface.

I thought they were saying this was their answer to generic programming, but it appears this is not the case. In any case, it was just an example of my general feeling that the type system is more dynamic than I prefer for systems languages.


Hi there! This is a thread about Rust. Please don't try to de-rail it with a conversation about Go.


- It's a sub-thread, not a reply to the original post.

- The discussion is about an at-least-partially-competing/similar language. Thus, it's relevant.


I agree, and they've done an excellent job with the tutorial documentation for it: http://doc.rust-lang.org/doc/tutorial.html

I particularly like the native-function and testing support, but I agree with what you said; I wish there were a way (maybe there is and I just haven't figured it out yet) to do the reverse native thing; that is, create a C wrapper around a Rust library (and then, yes, use SWIG to generate a Python / Ruby / etc. binding for it).
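For what it's worth, that "reverse" direction did become straightforward in later Rust: you can export an unmangled C-ABI symbol directly. None of this existed in 0.1; the snippet below is a modern-syntax sketch.

```rust
// Exported with the C ABI and an unmangled name, so a C header can
// declare it as:  int32_t rust_add(int32_t a, int32_t b);
// SWIG (or ctypes/FFI in Python, Ruby, etc.) can then wrap it like
// any other C function in a shared library.
#[no_mangle]
pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Callable from Rust too, of course.
    println!("{}", rust_add(2, 3)); // prints 5
}
```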


I think your approach is wrong altogether -- D actually lets you avoid using two different languages in your project, one high-level and one low-level. The whole idea is that the language is flexible enough to let you write both performance-critical and high-level code when you need to. That saves you all the trouble of interfacing, different runtimes and so on.


My project only uses one language: C.

But my project is a library whose functionality I want to expose to any language that cares to write wrappers for it. Reimplementing libraries in every language is a waste of effort and will have worse performance than sharing a single C library across many languages.


What did you dislike about ATS? I've just discovered it and it seems pretty exciting to me.


In case you're wondering why we need another programming language, this short audio clip from Brendan Eich is great:

http://www.aminutewithbrendan.com/pages/20101206

"With Rust, what Graydon has been trying to do is focus on safety and also on you could say concurrency -- the actor model which I've spoken about recently - and the two are related in subtle ways."


What's the target platform? Looks like it compiles to system-executable, but Mozilla was involved, which makes me wonder if there's browser execution.


It's basically a planned replacement for C++ as the language used to program the Mozilla apps themselves (and native add-ons). It wouldn't make sense as a scripting language that runs on Mozilla.


On the audio it sounded like he went out of his way to deny that Mozilla has any plans to rewrite existing C++.

It may be that Rust is being designed as if that were a goal, but let's not start any rumors here.


The stated goal is to use Rust for prototypes of new browser architectures.


It would be wise for any developer to give their new language a shakedown on smaller, less important projects before staking their whole business on it.

Of course, if it turns out to work amazingly well and they can port all their old code (or interface with it) with relative ease, there's no reason not to jump ship from the hellfires of C++ compilation. They've had major issues with that lately, after all.


System native executables: Win32, OS X, Linux, etc. This is a language for (among other things) prototyping browsers themselves, not running stuff inside browsers.


If anyone decides to give Rust a spin and is compelled to help out by providing feedback, the devs love to hear comments and criticism from users of the language:

https://mail.mozilla.org/listinfo/rust-dev


Do you know if there is any explanation of 'unique pointers' and 'unique closures' anywhere? I'm quite interested to see some of the decisions, especially wrt the type-system.


Unique types are used to guarantee that only a single reference is ever held to a value -- sort of like the value having a single "owner". This restriction, while maybe a bit of a pain to program with, gives the compiler permission to do clever things.

In particular:

1. The compiler can detect when a value is no longer "owned" (referenced) by anything and free it automatically -- without garbage collection. That's really handy for things like closures where the compiler automatically allocated the memory for you in the first place.

2. If an immutable value is modified then a copy usually needs to be made. But if the immutable value is uniquely-referenced then the compiler can reuse the old bit of memory, thereby saving a copy operation. It can do this because it can prove the old memory can no longer be accessed.

3. I think Rust might also use uniqueness when sending values between its tasks. Since it can prove the value will no longer be referenced by the old task the compiler can avoid copying the values while still preserving isolation.

http://en.wikipedia.org/wiki/Uniqueness_type
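All three points survive in modern Rust, where `Box<T>` plays the unique-pointer role (Rust 0.1 wrote the type as `~T`; the sketch below uses today's syntax):

```rust
fn main() {
    // (1) The boxed value is freed automatically when its single
    // owner goes out of scope -- no garbage collector involved.
    let a = Box::new(vec![1, 2, 3]);

    // (2) Ownership moves rather than copies: `b` takes over the
    // allocation, and the compiler rejects any later use of `a`,
    // so no defensive copy is ever needed.
    let b = a;
    // println!("{:?}", a); // compile error: value moved

    // (3) Moving into another thread works the same way: the old
    // task provably can't touch the value anymore, so it can be
    // handed over without copying while preserving isolation.
    let handle = std::thread::spawn(move || b.iter().sum::<i32>());
    println!("{}", handle.join().unwrap()); // prints 6
}
```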


You're correct regarding (3) -- that's the main reason unique pointers were added to Rust in the first place. We call the heap of unique pointers the "exchange heap" for this reason.


Thanks, uniqueness typing has been a bit of a hobby of mine for a few years, so I'm familiar with 1 & 2. Uniqueness in system-level languages, however, is not something I know so much about, so something like 3 is pretty interesting.


One of the devs keeps a blog, and he has a post about implementing unique closures. (There are several posts on the topic actually, since I guess the idea evolved a bit with time. Just check out the Rust category for more...)

http://smallcultfollowing.com/babysteps/blog/2011/12/16/impl...


I do not know about unique closures, but AFAIK unique pointers in Rust are similar to std::unique_ptr in C++ (with some syntactic sugar).


They should consider moving the discussion to Google Groups or something. Not everybody loves to read the discussion threads via email.


I've always found mailman's web interface a lot easier to read than Google Groups. e.g.: https://mail.mozilla.org/pipermail/rust-dev/2012-January/thr...

AFAIK Google Groups doesn't even support threading.


I grew up with computers in the 80s that didn't have Internet connectivity. I didn't get Internet access at home until 2005. Mailing lists confuse the hell out of me.


That's interesting, considering that mailing lists (like news/Usenet) were ideal for offline use, having been designed back in the day when Internet often had to be dialed up at specific times to exchange mail and news asynchronously (remember UUCP?). You can read and write submissions offline and submit them later.


Original HN thread here: http://news.ycombinator.com/item?id=3491557

https and http links should be treated the same for de-duplication purposes, although this thread has a few more links and points.


If anyone from Mozilla reads this: http://github.com/mozilla/rust/issues. is an invalid URL -- you put the period inside the last </a>. You should toss that out :)


Does anyone have a link to some examples of non-trivial Rust programs? It looks like a pretty neat language (despite making a few more distinctions than I tend to care about), but I'd like to see how it reads in practice.


I guess the Rust compiler is the most non-trivial Rust program at the moment. You can start there. I've been reading the lexer and parser code (because I've got to write those for a class) and it's very easy to follow.


Here's a discussion of a ray tracer written in Rust:

https://mail.mozilla.org/pipermail/rust-dev/2011-December/00...

and here's the source code on GitHub:

https://github.com/brson/rustray


I liked the part about 'immutable by default'.

I would love to have that in a language (as long as the 'mutable' keyword was something shorter :-).


> as long as the 'mutable' keyword was something shorter :-

That's one thing I really like, actually: since use of mutable structures should be avoided, making them harder to use (because they require a pretty long extra keyword) is a good way to drive developers towards immutable equivalents. See it as shifting incentives.
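Concretely, the keyword eventually settled on the short spelling `mut` (early Rust experimented with longer forms like `mutable`, which is what the thread is reacting to). A minimal sketch in today's syntax:

```rust
fn main() {
    // Bindings are immutable by default.
    let x = 1;
    // x = 2; // rejected by the compiler: cannot assign twice

    // Mutation has to be opted into explicitly:
    let mut y = 1;
    y += 1;

    println!("{} {}", x, y); // prints "1 2"
}
```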


OK, but the goal isn't to be Haskell here, or even ML. Those languages have all kinds of support for making immutable-everywhere a feasible goal (and the vast majority of developers still don't use them).

If you want the language to be actually liked by people who develop large systems, it must be designed with its users in mind. 'Nanny' languages tend not to be very popular.

In C++ "reinterpret_cast" is a good example of something that is long and ugly for a reason. But it's also very rare, probably an order of magnitude or two rarer than mutable would be in Rust (just a guess).


> OK, but the goal isn't to be Haskell here, or even ML.

So what?

> If you want the language to be actually liked by people who develop large systems, it must be designed with its users in mind.

Which does not prevent the language from driving users towards a goal. One of Rust's goal is emphasizing immutable structures, that's #8 on the front page of its website:

> immutable by default, mutability is the special case

mutability is the special case and a special case Rust tries to make people avoid.

> In C++ "reinterpret_cast" is a good example of something that is long and ugly for a reason. But it's also be very rare

And so ought mutable structures be in Rust.


Maybe I'm crazy, but it seems like a lot of these recent new languages are missing the mark. If you're designing a language from the ground up, why wouldn't you build unit testing in as a first-class citizen, for example? As well as profiling and instrumentation. How about a modular compiler that can round-trip back and forth between raw source and commented parse trees so you can do much smarter merging in source control?

So many languages seem to be aiming at targets that are pretty far away from the major pain points for the normal developer.


Unit testing is a first-class citizen in Rust. You can annotate functions with #[test] and they become unit tests. You can then run tests with the built-in test harness on a module-by-module basis.

Profiling and instrumentation are possible using the standard tools (xperf, Instruments/DTrace, perf/oprofile). Rust works just fine with those.

Not sure what you mean by "round-trip back and forth between raw source and commented parse trees", but Rust contains a pretty-printer which preserves comments.
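The `#[test]` mechanism mentioned above looks roughly like this (the attribute was already `#[test]` in 0.1; the surrounding syntax here is modern, and `rustc --test` / `cargo test` builds the harness):

```rust
// An ordinary function under test.
fn fib(n: u32) -> u32 {
    match n {
        0 => 0,
        1 => 1,
        _ => fib(n - 1) + fib(n - 2),
    }
}

// Annotating a function with #[test] turns it into a unit test;
// the built-in harness discovers and runs every such function.
#[test]
fn fib_base_cases() {
    assert_eq!(fib(0), 0);
    assert_eq!(fib(1), 1);
}

#[test]
fn fib_recurrence() {
    assert_eq!(fib(10), 55);
}

fn main() {
    println!("{}", fib(10)); // prints 55 in a normal (non-test) build
}
```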


In my view that's still just bolted-on unit testing, though very thoroughly and seamlessly bolted on, I'll give you that.

What I want are tools that help manage the complexity of unit testing, built-in where it makes sense. For example, creating mock objects and ensuring they are reflective of what they are mocking is difficult. Imagine if you could take the built-in asserts and the unit-tests for a component and use them (or some tagged subset of them) as a spec. for automatically creating mocks. Imagine if determining unit-test coverage and which tests should be run based on code changes was automatic and trivial. Etc.

Certainly profiling can be performed on any binaries provided you have symbols for them, but is that really the best we can do? I have a hard time believing that adding support for instrumented binaries at the compiler level isn't a good idea.

What I mean by "round-trip between code and parse trees" is the ability to have easy access to parse tree structure either in code or to external tools. So that you can do things like easily build in refactoring support to IDEs, or to more intelligently merge code changes at a higher level than merely lines of text in a file.

Of course, none of these ideas are anywhere near fully baked, they would require research, experimentation, and a lot of hard work. But I'd rather see people pushing the boundaries of programming languages with novel research rather than just throwing yet another mashup of already existing features out into the wilds in hopes it'll survive.


The Rust compiler is written as a library with a small driver, so getting access to parse trees in external tools is easy.

Take a look at rustdoc, for instance:

https://github.com/mozilla/rust/tree/master/src/rustdoc


Can you clarify a few things? These are bikeshed issues but they bother me nonetheless.

* Why was fn foo(bar: int) -> int chosen and not fn foo(bar: int): int?

* Why annotations within comments? Annotations aren't comments; it smells of a C bolt-on. Why not keyword annotations?


Annotations don't live within comments. The syntax for annotations is #[annotation]. They are part of the grammar.


Because the arrow syntax for results makes it cleaner for functions that return functions?

    fn (x: int) -> fn@(y: int, z: int) -> int


Also passing functions to functions:

    fn foo(bar: fn(baz: int) -> int) -> int

Instead of

    fn foo(bar: fn(baz: int) : int) : int
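A compilable illustration of why the arrow reads well when function types nest (modern syntax; 0.1's `fn@` was a boxed/shared closure, for which `impl Fn` is the closest present-day analogue and is used here as an assumption):

```rust
// A function that returns a closure: the arrows keep the nesting
// readable where a colon-for-results syntax would pile up colons.
fn adder(x: i32) -> impl Fn(i32, i32) -> i32 {
    move |y, z| x + y + z
}

// And a function taking a function: `fn(i32) -> i32` as a parameter type.
fn apply_twice(f: fn(i32) -> i32, v: i32) -> i32 {
    f(f(v))
}

fn double(v: i32) -> i32 {
    v * 2
}

fn main() {
    println!("{}", adder(1)(2, 3));         // prints 6
    println!("{}", apply_twice(double, 5)); // prints 20
}
```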


I've always believed that a language should have as little native as possible; the more you can off-load into a library, the better because it lets people extend the language in the language.

Why have first-class unit tests when they could be built on top of some other feature? If tests are part of the language, then using a different testing system (say something in the style of QuickCheck) would probably be at a disadvantage; if you have a language that can support expressive testing as a library, then it would be possible to add different styles of testing.



Heh, I actually thought about that talk (and Guy Steele) while I was writing my post. I wanted to include a quote from there, but didn't remember the title of the talk. Thanks for finding it :)

Edit: Actually, I was thinking of a different talk. Or maybe just a random quote I saw somewhere. This talk is still very interesting.


Tests are mostly built-in to make sure there's no excuse to not test, but it is intended to be minimal and that other test frameworks can build off it (though it's not clear exactly how yet).


Implementation partially in OCaml! (But why don't we use OCaml again?)


> But why don't we use Ocaml again?

Rust offers better control over memory layout, more predictable resource usage and a fuller concurrency model. If it works out it will be a good replacement for ocaml, especially for systems development.
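As one illustration of the layout-control point (modern Rust syntax, not 0.1; `#[repr(C)]` is just one example of the kind of guarantee meant):

```rust
// Fields live inline with a guaranteed C-compatible layout -- no
// boxing and no GC header word, which is what makes structs like
// this cheap to pass across an FFI boundary.
#[repr(C)]
struct Point {
    x: f64,
    y: f64,
}

fn main() {
    // Exactly two f64s, nothing else.
    println!("{}", std::mem::size_of::<Point>()); // prints 16
}
```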


Rust is also natively unicode-aware. OCaml is not.


Also, the syntax of Rust is less frightening for people who don't know OCaml.


As I understand it, the first (development) versions were in OCaml, but now the compiler is completely self-hosting.


They need to learn from Google how to present a new language. Although maybe they will once it reaches 1.0. When Google showed off Go they had some really great examples and a clean web page for it. Go got me excited; this hasn't.

Edit: Apparently there is http://www.rust-lang.org/


"They need to learn from Google how to present a new language"

You mean like choosing a name that is easy to search on the internet?


I find it really interesting that Rust itself requires all these languages to compile:

    g++ 4.4 or clang++ 3.x
    python 2.6 or later
    perl 5.0 or later
Does anyone know why that is?


The perl dependency is from LLVM. Rust uses Python for utility scripts, "Just Because."

Also, you left rustc off the list of languages required to compile rust. :-) It's self-hosted.


Looks like they're mostly build scripts (documentation, tests, ...)

Guess they expect all platforms to have Perl and Python these days..


Is there a reference manual anywhere?



The technical reference manual: http://doc.rust-lang.org/doc/rust.html

The friendlier language tutorial: http://doc.rust-lang.org/doc/tutorial.html


Reference here: http://doc.rust-lang.org/doc/rust.html

Tutorial (probably better at the moment): http://doc.rust-lang.org/doc/tutorial.html


The language looks OK. If they are going for world domination, a more likely path seems to be evolving C very slowly. That is, they should start from C and then every few years add a few new features and remove some old crummy ones until they finally reach Rust.

Lots of people probably don't like rewriting code bases wholesale and new languages take a while to become trusted.


> and take out some old crummy features

Backwards-compatibility is held extremely dear to the C community, with those few breaking changes being simple (usually) to work around. If you want to break compatibility, you have a better chance calling it a new language.


That's true, but new modes can be added with switches, similar to how gcc has -std=c89, -std=c99 and whatnot. Newer modes could disable old features and people could upgrade at will.


Except that there's no business case for rewriting old code that still works and is still supported by the compiler. And then suddenly, the compiler team has to support multiple, very different versions in the same release.


With 10 years between each such 'mode', and the differences still being relatively minute, it would take centuries to morph C into something else. Which is a good thing, don't get me wrong. It just shows that the proposed approach isn't feasible.


How can you build trust in a language that changes all the time?


As a point of thought: you could have a language where you introduce small changes incrementally, along with a built-in tool like Go's gofix that goes over an existing code base updating whatever is obsolete.

The current approaches -- "leave things substantially unchanged" and "release a bunch of changes all together" -- have not exactly been proven "the best".


If you reinvent the wheel you should at least have one plausible use case that shows the benefit of your square/wheel over existing solutions.

Bonus points for real world examples.


Imagine Firefox, but without segfaults, and using multiple cores.

That's the use case. Rust is intended as a safe language for building a research browser that works in parallel.


And we have Ada.


Honestly, I am observing closely the way some new system languages evolve - D, Go, Rust and I see a large intersection of features with Ada.

I am always amazed that no real discussion/comparison of those languages and Ada erupts in these threads :( Ada is not as bad as it's painted to be.


I find Ada to be a good and consistent programming language.


Ada has a reputation for being dramatically overengineered, especially among programmers with exposure to Ruby or Haskell (or Lisp, naturally). I've never seen a demonstration of how well it works for rapidly prototyping code and "filling in the blanks" iteratively. Would you mind providing examples?



