Announcing Rust 1.9 (rust-lang.org)
350 points by steveklabnik on May 26, 2016 | hide | past | favorite | 64 comments



The time complexity of comparing variables for equivalence during type unification is reduced from O(n!) to O(n). As a result, some programming patterns compile much, much more quickly.

I love this. Not just my code compiling more quickly, but the underlying implementation is super interesting.


Is this something seen frequently in user code, or just an edge case? I'd love my libraries to compile even faster, but honestly the continuous improvements in compile speed over the last half a year have already made me happy :)


Indeed. It's hard to miss just how much better rustc performance has become recently. As I've watched this, I've been left wondering: is it just the blood and sweat of the compiler developers doing this, or is the language somehow inherently efficient to compile? I know there are other, much older languages that seem unable to deal with their compiler performance problems.


It is a combination of both. We took performance into consideration when designing all aspects of the language, and so we've laid a lot of groundwork. The gains have come because the compiler had so much technical debt; imagine a codebase of hundreds of thousands of LOC where the language changes out from under it on a weekly/daily basis. So ever since 1.0, and even before, really, it's been paying off debt and doing things in better ways. And there's more coming: incremental recompilation will make after-the-first compiles go even faster. But all of that takes tons of work.


On the other hand, niko has described trait resolution as "basically prolog", so that's something you can push to perform as degenerately bad as you want (the recent addition of specialization only making things worse).

The issue is just how degenerately bad "most" code is -- and I think that hinges a lot on how excited the word "higher" gets you vis-a-vis expressing your programs. If you think C# is a pretty cool type system that doesn't afraid of anything you'll probably be fine. If you think scala or haskell with All The GHC Extensions is where it's at... watch out.


> I think that hinges a lot on how excited the word "higher" gets you vis-a-vis expressing your programs.

You are the best Gankro. <3

What's nice about coherence is that there's little risk in modifying the constraint solving algorithm to optimize for the common use cases (because it can't be backward incompatible). What's nice about crater is that it's comparatively easy to test that optimization.


It's an edge case for the `n` to get high enough for it to make a big impact.


And depending on the multiplying constant, it could mean everything is slower than before _until_ you reach a high enough n.

EDIT: The PR that added the change claims it was done well, though :) https://github.com/rust-lang/rust/pull/32062
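A toy illustration of that crossover (the constants here are made up, not the compiler's actual ones): even if the new linear algorithm carried a 1000x constant factor, factorial growth overtakes it almost immediately.

```rust
// Compare a hypothetical old cost of n! against a new cost of 1000 * n.
fn factorial(n: u64) -> u64 {
    (1..=n).product()
}

fn main() {
    for n in 1..=10u64 {
        let old_cost = factorial(n);
        let new_cost = 1000 * n;
        println!("n={:2}  old={:8}  new={:6}  new_is_faster={}",
                 n, old_cost, new_cost, new_cost < old_cost);
    }
}
```

With these made-up constants the linear version wins from n = 8 onward, and the gap explodes from there.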


Excited about unwinding becoming stable. I am hacking on postgres-extension.rs, which allows writing postgres extensions in rust. This will mean that postgres could call into rust code, then rust could call back into postgres code, and that postgres code could throw an error, and rust could safely unwind and re-throw. Cool!!


Not yet a Rust programmer but Rust does look like an excellent language for PostgreSQL extensions.


Very much pleased to see you working on this. I've been thinking that Rust would make an excellent language for those of us who want to get a bit closer to C-backed functions and such, but aren't really C programmers. While Rust has its own learning curve, to be sure, and isn't magic, the compiler can save those of us familiar with less systemy languages from mistakes we wouldn't otherwise be sensitive to.


By "hacking on" I mean "submitted a pull request to Daniel Fagnan's project". I didn't mean to take credit.

But I do plan to keep improving it.


Progress on specialization is good.

> Altogether, this relatively small extension to the trait system yields benefits for performance and code reuse, and it lays the groundwork for an "efficient inheritance" scheme that is largely based on the trait system


Looks like a great release. Controlled unwinding looks very interesting. #GreatJobRustTeam


I don't understand the announcement on panics. Hasn't it always been the case that thread boundaries (via spawn) could contain panics?

It also used to have the incorrect default that such panics were silently ignored. .NET made this same mistake: background threads could silently die. They reversed it and made a breaking change so that any uncaught exception kills the process. I'd imagine Rust will do so by encouraging a different API, if they haven't already. (I opened an RFC on this last year, but I didn't understand enough Rust and made a mess of it. It was still a great experience, though, speaking with the very kind and professional community. In particular, several people were patient, but firm, in explaining my misunderstandings.)


  > Hasn't it always been the case that thread boundaries (via spawn)
  > could contain panics?
Yes. This lets you catch an unwinding panic from inside the thread itself, rather than observing it get killed from the outside.

In Rust, they're not silently ignored: `join()` returns a `Result`, which has a `must_use` attribute, which will warn if you don't do something with it.
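A minimal sketch of that flow:

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        panic!("boom");
    });

    // The child's panic is surfaced here as an Err rather than lost;
    // ignoring the Result would trip the must_use warning.
    match handle.join() {
        Ok(_) => println!("child finished normally"),
        Err(_) => println!("child panicked"),
    }
}
```

The panic message itself still goes to stderr, but the parent gets a `Result` it has to consciously discard.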


Thanks for the clarification.

On thread panics: Suppose you have a long-running thread that's updating a shared variable. You don't call join on it; you just drop it, so it's detached, running in the background. If that thread crashes (and the state isn't in e.g. a mutex, so it can't poison), your process will continue on in an invalid state. Yes, you could add code to notice the panic and abort the process, but by default, it'll silently fail, right?


  > and isn't in eg a mutex so it can't poison
Well, Rust makes sure that you have _some_ kind of concurrency primitive in place to make this safe.

It's also worth mentioning that Rust is concerned about preventing memory safety bugs, not every single kind of bug ever. I mean, we want to encourage robust code, but we don't make the claim that Rust solves every kind of bug possible with the type system.


I understand. It just seems like a very odd default, allowing threads to silently crash. Is there ever a good scenario for this? Flip it around: if the default was thread crash->panic, would any users be upset? Are there people intentionally depending on silent panics?

It is the opposite of robust, more toward "on error resume next", to be rude. Just feels like an odd choice, that's all.

Edit: As an example: Suppose a background thread opens a file and writes out its pid and date every minute. If there's any panic on that thread, then the process will just go on, oblivious to the error. I find it hard to believe that's remotely close to the majority of intentions when spawning a detached background thread. For the few cases when you want "fire-and-forget-and-don't-care-if-it-fails", explicitly stating so seems wise.


Yes, one of the reasons that catching panics was even stabilized in the first place is because in many scenarios, aborts are not acceptable. Embedding Rust in other languages and runtimes is an important use case, for example, and a bug in your Rust extension bringing down your whole Rails app is not great.

The idea is not to make panics silent, but to be able to go from "this process is crashing" to "here's an error code".
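Roughly like this, using the `std::panic::catch_unwind` function that 1.9 stabilizes (the entry point here is a hypothetical extension hook, not a real FFI signature):

```rust
use std::panic;

// Stand-in for extension code that turns out to have a bug.
fn buggy_extension_code() -> i32 {
    panic!("bug in the Rust extension");
}

// Catch the unwinding panic at the boundary and hand back an error code
// instead of tearing down the host process.
fn entry_point() -> i32 {
    match panic::catch_unwind(|| buggy_extension_code()) {
        Ok(value) => value,
        Err(_) => -1,
    }
}

fn main() {
    println!("result code: {}", entry_point());
}
```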


> Embedding Rust in other languages and runtimes is an important use case, for example, and a bug in your Rust extension bringing down your whole Rails app is not great.

It's not, but it beats it silently ending up in a bad state.


The failure case in mind here is not silent data corruption, it is merely having your in-process performance monitoring service go offline.


It's not clear to me what the alternative would be here. The common case is that the parent calls `join` and handles the result there. This needs to work even if the child thread returns quickly, before `join` is called in the parent. If we switched to a strategy where a panic in the child immediately injected a panic into the parent, or aborted the program, that would break the common case. I don't think there's any way for Rust to know the difference between "the parent is doing some work and hasn't joined yet" and "the parent has no intention of ever joining", if the parent isn't going to be explicit about that.


I'm really looking forward to the next stable version after this, which will hopefully stabilize the new '?' syntax for 'try!'.


I can guarantee you it will not. ? is still unstable, and since the release of 1.9 means that 1.10 is in beta, it will not be in 1.10.


What's the remaining work needed to make '?' no longer unstable?


The main remaining work on the RFC is to implement the `catch` construct. Here's the tracking issue: https://github.com/rust-lang/rust/issues/31436

However, stabilizing `?` is not blocked on any particular work as much as on the relevant team deciding it's a good feature, without big bugs, that they are happy to see made permanent. Given that a minority of the community opposes `?` vehemently, this process will probably move slowly.

Personally, I opt into question mark in all of my nightly code and I absolutely love it. I had a case last night actually where I really wanted `catch` also.
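For the curious, a sketch of what the sugar buys you (`try!` was the stable spelling at the time; `?` is the postfix form from the RFC):

```rust
use std::num::ParseIntError;

// Parse and double a number, propagating any parse error to the caller.
// With try!, the first line would read: let n: i32 = try!(s.parse());
fn double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?;
    Ok(n * 2)
}

fn main() {
    println!("{:?}", double("21"));        // Ok(42)
    println!("{}", double("x").is_err());  // true
}
```

Either form expands to a match that early-returns the `Err` branch; `?` just reads better when chained.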


Well for one, there's a huuuuuge unresolved question: should it work for Option as well as Result? If we stabilized just the Result version, would we lock out the possibility of extending it in the future?

Just in general, it is a huge, and widely debated new feature that only landed recently. It needs more time before we can consider stabilizing it. One of the longest-running and most-commented RFCs in Rust history deserves to not just be made stable as quickly as possible, but to take it slow and make sure we get it right.


I've read the RFC and quite a bit of the surrounding debate, and I knew that most of it would take longer to stabilize. I just didn't realize that the '?' notation in particular was still under debate.

Why would stabilizing the Result version of '?' lock out the possibility of supporting it on Option or other types in the future?

In any case, I look forward to it whenever it becomes available in a stable release.


  > Why would stabilizing the Result version
Well, because in stabilizing something, you have to declare exactly what is stable, and what the semantics are. "? only works with Result" is not a detailed enough semantic, so you have to dig into details. Is it based on a trait? If so, and the trait gets designed incorrectly, it could cause problems. Is it not based on a trait? Now the language needs to know about Result, specifically.

  > I just didn't realize that the '?' notation in particular was still under debate.

Well, we have never had a case yet where an accepted RFC failed to end up actually being stable. We have had some RFCs that have had final details take a very long time. But in theory, it could happen.


Uh... That doesn't seem true at all? Unstable stuff gets deprecated all the time (the libs team basically defaults to delete-everything), and I assure you that some of that stuff was accepted in an RFC. There's also the case where the design is significantly changed in implementation or by an amendment RFC (e.g. maybe ? becomes ?? or ?! to let ? be used like swift/c#). The Entry API is the most obvious case of this kind of change happening, to me. The BTree range API will literally never be stabilized as-is. The defaults-affect-inference design has been dead in the water for months, for an example from the lang team.


So, when I think about this, all the stuff that was unstable and therefore deprecated landed pre 1.0, but I _guess_ that was still RFC'd? Maybe my timeline is a bit off here.

I guess, my ultimate point is this: we have not made very many significant language changes in this first year of Rust. Now, we're starting to. It's new territory. Things can happen that may not have happened before, including these significant new features not actually landing.


> Why would stabilizing the Result version of '?' lock out the possibility of supporting it on Option or other types in the future?

It wouldn't necessarily but it could be done in a way where it does lock it out. For example, `?` could be implemented by defining an attribute which is applied to Result in libcore. If that attribute were stabilized, it would make it hard to migrate to using a trait to define the behavior of `?`, because you've made a guarantee that this attribute makes a type use `?`.

All of this has been avoided though, I think. Stabilization or deprecation of `?` is pretty much a political issue now and not a technical one (and I don't mean that the question is frivolous, just that it's about what people want now).

There's still the Carrier trait which has unresolved technical questions, and the catch construct I don't think even has a PR.
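Purely as an illustration of the design space (this is not the actual RFC trait, and all the names are made up), a Carrier-shaped trait might look like:

```rust
// Hypothetical sketch: a type opts into `?` by describing how to split
// itself into a success value or an early return.
trait Carrier {
    type Success;
    type Error;
    fn split(self) -> Result<Self::Success, Self::Error>;
}

impl<T, E> Carrier for Result<T, E> {
    type Success = T;
    type Error = E;
    fn split(self) -> Result<T, E> {
        self
    }
}

impl<T> Carrier for Option<T> {
    type Success = T;
    type Error = ();
    fn split(self) -> Result<T, ()> {
        self.ok_or(())
    }
}

fn main() {
    println!("{:?}", Some(5).split());     // Ok(5)
    println!("{:?}", None::<i32>.split()); // Err(())
}
```

Stabilizing `?` hard-coded to `Result` first, and only later introducing a trait like this, is exactly the kind of migration that has to be shown to be backward compatible.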


I don't know how this unexpected vs expected errors philosophy gets propagated, but to me it always looked suspicious. Take array bounds for example: what if you have an API that lets users send a list of image transformations, and the user requests a face-detect crop, followed by a fixed crop with a given x, y, w, h.

Clearly your code can get out of (2D) array bounds with the fixed crop (if the image is such that the face-detect crop ends up small enough). Suddenly the thing that was "unexpected" at the array level becomes very much expected at the higher API level.

So the API provider can't decide whether an error is expected or not. Only the API consumer can do that. Applying this further, a mid-level consumer cannot (always) make the decision either. Which is why exceptions work the way they do: bubble until a consumer makes a decision, otherwise consider the whole error unexpected. Mid-level consumers should use finally (or even better, defer!) to clean up any potentially bad state.

I think Swift got this right. What you care about isn't what exactly was thrown (checked, typed exceptions) but the critical thing is whether a call throws or not. This informs us whether to use finally/defer to clean up. The rest of error handling is easy: we handle errors we expect, we fix errors we forgot to expect, but either way we don't crash for clean up because finally/defer/destructors take care of that.


This is why almost every panicky API has a non-panicky variant.

Non-panicky APIs can be used in a panicky way via unwrap() or expect(). There are very few panicky APIs, reserved for things where 99% of the time you want it to panic and it would be annoying to have to handle errors all the time. Array indexing is one of these cases, where it would be super annoying to handle errors on each indexing operation.

But if you're in a situation where you expect an index to fail, just use `get()` which returns an Option instead.

APIs do not make the decision for you. At most they decide if the panicky API is easier to use than the monadic error one. Which is where the fuzzy notion of "unexpected" and "expected" errors at the API level comes up, it's good enough to be able to determine what the default should be. Callers still have the power to do something else.
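Concretely:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // Panicky sugar: appropriate when an out-of-bounds index would be a bug.
    println!("{}", v[1]); // 20

    // Non-panicky API: returns an Option when failure is expected.
    println!("{:?}", v.get(5));             // None
    println!("{}", v.get(5).unwrap_or(&0)); // 0

    // And the Option can be turned back into a panic on demand.
    println!("{}", v.get(1).unwrap()); // 20
}
```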


I would kill to have a monadic slicing API for Vec<T>, though. It seems like the one significant exception to the "use whichever works better for you" argument. Of course you can bounds check yourself first and return Err if there's a problem, but there's also no get_unchecked() for slices so you'd pay for the checks twice.


> but there's also no get_unchecked() for slices so you'd pay for the checks twice.

Yeah there is[1], unless I'm being really stupid...

[1]: https://doc.rust-lang.org/std/primitive.slice.html#method.ge...


Rather, there's no get_unchecked to get a slice from a Vec. Unlike index, get_unchecked can't take a range as input.


Ah gotcha, sounds like it could be a good first RFC for someone.


I'll add it to my long list of rust projects I wish I had time to do :).


But if you are a mid-level consumer, you don't know whether you can pick the panicky or the non-panicky API. Do you provide 2 APIs again?


No, you provide the non-panicky API, consumers can convert those into panics easily.

In Rust, if your API uses monadic error handling (Result or Option), this is easily chained with other errors and propagated up. Additionally, callers can choose to discard errors they feel should not happen by using unwrap/expect to convert them into panics.

If your API uses panics, this can't be cleanly turned into non-panicky things without using catch_unwind (you can, but you shouldn't). In the rare case you make the decision to have an ergonomic panicky API, try to provide a slightly less ergonomic non-panicky one.
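A sketch of that division of labor (`parse_port` and `from_config` are made-up names):

```rust
use std::num::ParseIntError;

// The library exposes only the non-panicky API.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse()
}

// One caller propagates the error upward to its own caller...
fn from_config(s: &str) -> Result<u16, ParseIntError> {
    let port = parse_port(s)?;
    Ok(port + 1)
}

fn main() {
    // ...another decides the error "can't happen" here and converts it
    // into a panic with a message explaining the assumption.
    let port = parse_port("8080").expect("hard-coded port is valid");
    println!("{}", port);                     // 8080
    println!("{}", from_config("nope").is_err()); // true
}
```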


So ultimately the reason for providing a panicky API has nothing to do with expectedness; it's actually all about performance. Otherwise we'd always go for a non-panicky one.

Also, in languages with finally and/or defer, catch isn't supposed to be a problem.


I never said that.

Not performance, ergonomics. `array[5]` and `array.get(5).unwrap()` do the same thing (with a different panic message) and have the same performance when optimized (since it should inline). It would be annoying to have to handle the error every time you index, and would kind of ruin the syntactic sugar of indexing. Thus, indexing panics, and you have `.get()` if you want to not panic and use monadic errors (but as you can see, you can make `.get()` panic too by unwrapping it).

The same goes for other panicky APIs, they're panicky because 99% of the time you don't want to deal with that error since it's an "unexpected" one which means something went horribly wrong and there's no sane way to continue. For the 1% of the time you do need to deal with that error, you can use the alternate Result/Option-based API.


`try!` seems like sufficient sugar to deal with the ergonomics part.

> and there's no sane way to continue

Would like to hear more about this.


Not really, you have to change the return type of all of your functions and introduce new error types.

There's no sane way to continue in cases where an out-of-bounds access is a program bug. What should a program do when it knows it is in an inconsistent state? Crash, in many cases.


Why is the program in an inconsistent state when it attempts to access an array out of bounds and is prevented from it?


Because it was designed assuming that the array would not go out of bounds. If an array did go out of bounds, clearly something has gone horribly wrong.

This is a question of how the software was designed and what assumptions were made.

In a case where the programmer feels that out of bounds may happen (or wishes to program defensively), of course the usual monadic error APIs can be used instead of indexing directly.


This "clearly something has gone horribly wrong" is where I think we actually get lazy.

Let me illustrate using my example. The mid-level consumer is a crop function that takes the image and coordinates. It has no reason to assume someone won't check the image bounds, so it uses the panicky API to access the array, assuming "OK, they'll check if they are out of the image dimensions beforehand".

But another level up, we're passing user-generated data right to it, after the face detection crop. Clearly that's wrong. Unfortunately, the mid-level consumer didn't provide a non-panicky API, so we had to write our own wrapper.

The question is, is it crashy-wrong? If shared mutable state is cleaned up via RAII/defer/finally, there is nothing wrong with the "exception" being caught and the process can continue running. The reason why we don't know if something went horribly wrong is that we don't have the cleanup contract for handling these types of things in the first place.

So why do we want this extremely unsafe "cleanupless" contract to begin with? I can't think of any other reason than performance, because syntax sugar can always take care of the rest.
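For instance, a `Drop` guard runs during unwinding whether or not the panic is later caught, so invariant-restoring cleanup isn't skipped:

```rust
use std::panic;

struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        // Runs as the stack unwinds, before the panic is caught below.
        println!("cleanup ran");
    }
}

fn main() {
    let result = panic::catch_unwind(|| {
        let _guard = Guard;
        panic!("oops");
    });
    println!("caught: {}", result.is_err());
}
```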


> Ok, they'll check if they are out of image dimensions beforehand

That's a straw man example; it's not the kind of API I'm talking about.

Let me repeat: this is for cases where the library developer has written code where it won't go out of bounds, and if it does it's a bug in their own library. Not the input. Rust APIs are not designed the exception-way where unexpected input is supposed to be caught like exceptions.

For example, I may allocate a buffer of size n, and index it willy-nilly at various points I know to be less than n (perhaps, for example, with values from another array which was constructed with values not exceeding n). If this doesn't work, it's a bug in your program, and there are no two ways about it. The assumptions you made about your own program when writing it were wrong. This can happen if you make a mistake when programming, or mess up a refactoring, or rely on a library that changes behavior (which itself shouldn't happen, but the world isn't perfect).

Almost all of the indexing panics I've seen in Rust have been bugs in the code that was indexing; where continuing the program somehow would probably make things worse since the state is inconsistent (according to the assumptions the programmer made about their own code, not the input). People tend to use `.get()` when it's possible that the input may trigger an out of bounds error.

It's not a performance issue; the bounds check happens either way, so please stop making it one. It's an ergonomics issue. Rust tries to offer both imperative and functional styles, and also tries to be palatable to people coming from C++. This leads to some C++-like defaults, like the indexing syntax. If people had to unwrap/try on every index in Rust, half the community would probably revolt.

(note that indexing isn't that common; rust uses iterators and other safer, bounds-check-contained abstractions, but it was determined that when you want to use direct indexing in Rust you usually are designing your code such that the out of bounds is a bug)


Your example doesn't make sense. Keep in mind that an out of bounds array access can cause a lot of problems; in the best case, it would cause a segfault, and in the worst it would allow for code injection. So there isn't really a case where accessing an array out of bounds makes sense. That's where the idea of "expected vs unexpected errors" comes from.

Another way to think about it is that some errors can be handled and some can't. For example, most of the time you can't "fix" a segfault, so it makes sense for that to just crash the program.


I don't understand your example. The API provider has to determine the contract and decide whether out-of-bounds crops are supported or not. If they are supported, then an out-of-bounds panic is a bug in the API. If they are not supported, then an out-of-bounds panic is a bug in the caller. In either case, the exception is unexpected and due to someone not following the contract.


In cases where the caller might want to handle an error, Rust almost always uses an Option/Result. For vectors, the `get` method lets you try to get an element out without panicking if your index is bad. (Or of course you can just check the index manually. This is guaranteed to be threadsafe, which is nice.) The try! macro is the standard way to say "I don't want to handle errors here -- you take care of it."


Unexpected problems are bugs: they arise due to a contract or assertion being violated.

Speaking of which, DBC [1] would be an awesome feature for consideration. It's one of relatively few areas where D is superior to Rust IMO.

[1] https://github.com/rust-lang/rfcs/issues/1077


Wow, almost 2.0 already. Any major features reserved for 2.0 or will it be just another typical release like 1.8 or 1.9, more or less?


The next release will most likely be 1.10, as per semver.


The next release will be 1.10, not 2.0.

There are no plans for a 2.0 at this time.


catch_unwind

Exceptions, at last! Not very good exceptions, though. About at the level of Go's "recover()". If this is done right, so that locks unlock, destructors run, and reference counts are updated, it's all the complexity of exceptions for some of the benefits.

I'd rather have real exceptions than sort-of exceptions.


These are not exceptions, and I guarantee that you and your users will be very, very miserable if you try to use them as such. :P This is primarily for preventing undefined behavior at FFI boundaries, there are no affordances in the language that make this mechanism anywhere comparable to a real exceptions system from, say, Python.


So much snark.

Error handling is something Rust does really well. Having used exception languages quite a bit, I much prefer the forced handling of return values.

Exceptions don't map well to handling patterns like and_then(...), or_else(...) and the like, which I find much more ergonomic and clean.
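For example, a toy `Result` pipeline:

```rust
fn main() {
    let n: Result<i32, String> = Ok(2);

    // Chain fallible steps without nested try/catch blocks.
    let out = n
        .and_then(|v| if v > 0 { Ok(v * 10) } else { Err("nonpositive".to_string()) })
        .or_else(|_| Ok::<i32, String>(0));

    println!("{:?}", out); // Ok(20)
}
```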


Exceptions can also be a pain to deal with in lazy Iterators. Either every lambda needs to be a try statement, or you need state.


Users are strongly and emphatically discouraged from using this for exception-style control flow. This is for managing crashes at the thread or FFI boundary.

In general, we're very happy with monadic error handling and don't want exceptions.


I must say I came into Rust and didn't like the lack of exceptions - and these still aren't. The community explained it to me, and now I'm noticing more cases in other code (C,C#) where the forced error code system (handle or panic) would be far superior.

Having it at the thread boundary is good enough so that you can write, say, a webserver without letting user code or a mishandled request kill the process.


These are not exceptions, and are not intended to be used like exceptions. They may have an implementation that's kinda sorta similar to exceptions, but that doesn't make them exceptions.



