Rust's language ergonomics initiative (rust-lang.org)
506 points by aturon on March 3, 2017 | 288 comments



Are there plans to do user studies? 10 minutes watching new users code in Rust will give you better ideas than 10 weeks thinking about the problem in your head.

I feel like there's a real lack of user testing in software development tools land. If you're developing software for unsophisticated users it's obvious that you should be doing user testing, but it's an often ignored fact that developers are users too! APIs, compilers, build tools, they could all be vastly improved with some user testing.


> 10 minutes watching new users code in Rust will give you better ideas than 10 weeks thinking about the problem in your head.

I'd add: Do you want to focus on new or experienced users? For example, when implementing systems that will be used 8 hrs/day by their users, we look only at efficiency for experienced users (unless the political situation requires placating the noobs). They will be noobs for a day or maybe a couple of weeks; they'll be experienced all day, every day for years. We explain it that way at the beginning and they thank us later.

An example many are familiar with is Vim: Steep learning curve, but I'm so happy that the focus is on efficiency for veteran users.


This is the reasoning that is used for Clojure - "Clojure is for experts". It's used to rationalise all sorts of design choices that make the language much harder to get started with than (IMO of course) it should be.

If you're implementing a system that users have to use for their job, they'll put up with whatever they have to while they're learning it. If you're implementing a programming language and you want to increase adoption and general user friendliness, I think this is a terrible choice.

I love Clojure now and I work with it every day, but it's still frustratingly hard for avoidable reasons even though I'm a pretty advanced user. I'm jealous of the attention that Rust, Elm and Elixir pay to this aspect of language design, and I'm very pleased to see languages taking it more seriously.


I guess the optimum has to be some kind of "area under the learning curve", but I think it's pretty clear that if you had to make a binary choice, optimising for people who have already made the effort makes a lot more sense. Is Clojure really that bad? I don't know it, but I never had the impression there was an anti-newbie community. Sometimes people take these binary positions as a decision-making optimisation too; having that policy just simplifies a lot of discussions, perhaps.


Agreed. All of those were easier to deal with than Clojure, but that could just be because it is a pain to do anything with the JVM and emacs isn't easy either.


What's the language that takes that to the extreme? I remember it being used for fintech(?) or spreadsheets and I remember seeing one-liners that look like someone just mashed the keyboard. Apparently incredibly efficient once you are an expert in the language.


The APL family of languages currently has commercial implementations like Dyalog and Q with the kdb+ database. J and GNU APL are both free & open source. Dyalog is nice, even though a commercial license sets you back a grand per year (basically what most C# users pay for VS). You get a nice interpreter, support, built-in graphics, full .NET interop, full R interop, heck even DDE, which is still useful in finance and my own industry. They seem to take documentation really seriously and have pretty interesting conferences. I don't code in APL, but am considering it as their product just seems pretty agile for my needs (personal user productivity). Q is extremely expensive and is the APL-derived language that accompanies the kdb+ database used for time-series analysis on stock quotes...etc. It was built by Arthur Whitney and legend has it the entire source code is 5 pages of C (being an APL guy he has it all scrunched up though...doesn't like to scroll). People seem to like it and be willing to pay a fortune.


That would probably be Q, used in kdb+, and J, which the previous response stated is an open source implementation of the same/very similar language. The terseness is even visible in the language name.


Perhaps J, which is part of the APL family of languages.


I think that you should focus on both groups. The new users for adoption, the advanced users for retention.

Even if no actual language changes come out of the new users group, just documentation updates, it would be a win for the entire ecosystem.


>unless the political situation requires placating the noobs

If something is a pain point for the noobs it's usually a badly designed part of the language that the "veterans" have just learned by rote to step around.


This would be really great. I constantly see excuses for why no-one should bother doing user studies of programming languages because it's really hard. So let's not do it at all and instead focus on approach #2 that you lay out ("10 weeks thinking about the problem in your head").

That being said, this is really hard. Observing a (necessarily small) sample of heterogeneous programmers dealing with (necessarily small) tasks isn't easy, either.

I wonder whether users would consent to having their text editors and compilers instrumented to record every single character typed and every single build (failed or successful, especially failed) for a user study. I would imagine that a language designer could learn a huge amount from seeing all the things their compiler is rejecting from real users...


You wouldn't really need to have people install a new text editor. Just plugging this into the Rust Playground would probably do [0].

[0]: https://play.rust-lang.org/


This is an interesting idea! In the alternate playground [1], the frontend is React / Redux, so we already track every edit to the code as well as the build results. "All" that would be needed would be to save that data somewhere and allow people to opt in/out.

[1]: http://play.integer32.com/


I think that a quick survey of the user would be helpful as well- specifically their experience with programming, and their experience with the language in question.


> user studies of programming languages

I feel like a lot of what I see is people want to establish that language X is much better than language Y, and I think that kind of study almost doesn't make sense with all of the relevant but poorly defined and difficult to control variables. A study around ergonomics of a few options in otherwise the same language is comparatively trivial.


On this note, I'm actively seeking JavaScript developers interested in doing some user studies of a compile-to-js language I'm working on.

It emphasizes ergonomics, actually, as well as making functional best-practices (typed immutability, etc) more accessible.

I'm in San Francisco but happy to conduct remotely. I'm targeting people familiar with ES6 and functional styles, but interested in folks of all experience levels.

If anyone would be interested in trying it out, shoot me an email at rattray.alex – gmail.


Replying to chase this up tomorrow when I'm not on my phone


We have done informal ones and may or may not do more formal ones, we'll see.


Informal is fine. The important thing is to do user testing in an ongoing way, not just once. Without a feedback loop it's quite surprising how quickly development priorities diverge from what's really important.


Members of all the Rust subteams are constantly involved in guiding new users through the language. Come to any Boston Rust meetup and you'll find Niko Matsakis giving one-on-one help to people taking their first steps. And it's not just feedback from people in major tech hubs; Brian Anderson and Alex Crichton just got back from mentoring new users at Hack Illinois in Urbana. They go to great lengths to remain plugged-in like this.


We are constantly trying to listen to people and get feedback; for example, it's one of the reasons I pay such close attention to these threads!

There's always room for improvement, but iterating in this way is definitely a core value.


Depending on what group you're trying to attract or improve the experience for, you could potentially set up contracts with the level of person you would like to deal with and pay them to implement something in Rust while you monitor their progress, with full access to their work and what they're doing.


The issue is, of course, that the more you focus on specific tasks the more your language becomes "designed for" those tasks, and the language becomes less general. The language already went through some of this with Servo - the DOM would be much easier to implement with an object-oriented language, but that wouldn't fit in well with the rest of Rust and would significantly raise complexity.


That is a concern, but the other risk is that by not doing this the language progresses in the areas people happen to currently use it, which will be skewed by what it's already good at. You can then end up never improving the cases where it's weakest. This covers types of programs, but also experience levels and histories of the programmers themselves.

You are right however that you need to carefully pick the tasks to fit with the original aims of the language.

Perhaps a good way of phrasing this is "why aren't X people using rust for Y?".


"If you're developing software for unsophisticated users it's obvious that you should be doing user testing"

Compilers actually have a unique opportunity to introduce some form of opt-out tracking of all of the compilation errors with sources, etc. This could really help to understand the users.


No, that's not how it's done. Proper testing for computer usability involves video of the user and the screen. The important things to note are when the user stops, gets stuck, has to consult external references, asks for help, or gets upset. It's far too intrusive to impose on anyone not explicitly volunteering to do it.


Unfortunately I have but only one upvote to give. This is very important if you're performing a usability / ergonomics study. Context matters a great deal. The parent poster is also quite correct about it being invasive: although not absolutely necessary, ideally you'd observe the user in whatever context they happen to use Rust (e.g. at their workplace or in their home). Lab studies are still valuable too, but you might miss some context due to the artificial setting.


We have discussed doing this in the past, actually. Opt-in (oops, had "out" here initially) is very important here!

I am not sure if it's something that ever landed or not...


Opt-in tracking, I really hope you mean. I know the concerns with getting unrepresentative samples but sending what I'm working on to a third party is something that would immediately stop me using an entire language for any form of my work.


gah! Yes, I mean "we would never consider tracking you without your express given consent." That's opt in not opt out.


:) Thought that was the case, seemed odd that people were saying it like it was such a small deal!


It doesn't have to be strictly opt-in or opt-out. There could be two versions released: a version without tracking could live on the same release page, just a bit lower and not in a bold font or something like that, pushing new users towards the version with opt-out tracking.


This is a dark pattern. Why would you want the default to be a compiler that essentially spies on your code? I can think of no better way to make developers (not to mention businesses!) uncomfortable with the idea of adopting Rust.

Thankfully, Mozilla is one of the few entities left in the technology industry that seems to care about ethics. I don't believe the Rust team would ever do something like this.


No, it's not a dark pattern, you have a clear choice, just one being less advertised than the other one.

And Mozilla is the one with opt-out for all of the unethical features in Firefox. They have no ethics; looking like they care about those things is nothing more than PR. After all, corporations are not people.


Is there a mailing list I can subscribe to that will tell me when I can turn this on? I'm hoping not to have to listen to rust-dev for the trigger phrase.


I don't know off the top of my head because it's been a while since we talked about it and I haven't paid super close attention to rustup's development. I am assuming that we would make an announcement on the blog because we'd want to be very clear about what's going on, and of course, try to convince people to opt in :)


How about opt-out on nightly builds and opt-in on production?


I don't personally think opt-out is ever acceptable for this kind of thing.


Yeah, I'd really prefer not to have to remember to opt out of accidentally violating my NDA when working on closed source projects.


My comment crossed with the GP post here, but that's a really good idea. Please look into it!


With good auto completion, the user should never even have to press the compile button or hit a compiler error. (Slight exaggeration)

This means the user might have struggled a long time with the code before even hitting the compile button. If you only collect statistics from compilation you will miss all this pre-struggle.


Was about to post the same idea. This should most definitely be the number 1 step for the whole "simplification" project. Take all the advice on how to build a startup by focusing on data points, and apply it to a programming language.


> 10 minutes watching new users code in Rust will give you better ideas than 10 weeks thinking about the problem in your head.

I am not sure whether 10 minutes of Rust would give you actionable info; it certainly takes longer than that to get familiar with the basics of almost any language, even if the "10 minutes" just stands for a short amount of time.


Things happen even in the first 10 minutes.

It's not appropriate to limit user studies only to people who are already experts.


But it's also questionable to limit them to people who are not, depending on what you want to accomplish.


I assumed "new users" implied relatively new users (before they grew to love the language enough to accept the downsides), not absolute beginners


Although I think I agree that many current languages do not pay enough attention to ergonomics (hello, Scala!), I think your line of reasoning could be quite dangerous.

I don't think there's any empirical support for the idea that a shallow learning curve translates into a powerful/expressive language. EDIT: I'd love to be proved wrong, so if there's anything in the way of evidence, please link me!


As a Scala dev, I'm amused to see the Scala call-out :)

Undoubtedly, it's a language that is a little tricky to get fully fluent in, but I'm curious what things pop out to you as bad ergonomics? I think it actually doesn't suffer in many of the ways identified for Rust in this article.


It's a bit late here, so I won't go into much detail, but just rant a bit. Hope that's OK and gives a general impression of my personal gripes.

EDIT: Well, that rant turned out longer than I imagined... and it's even later now, so I'll probably go to bed momentarily.

EDIT#n: PS: Scala's still better than Java, but please don't tell me Scala's in any way "coherent" or "well-designed" or "orthogonal". Haven't really tested Kotlin or Ceylon, but I get the feeling that those are really much more c-or-w/d languages for the JVM. Plus, "orthogonal" doesn't mean shit if your ergonomics suffer like they do in Scala.

First: See Scala Puzzlers + PaulP's rants... and that's not even half of it. (Yes, I know collections are going to be "fixed", but... not really. I don't think the fact that Odersky still seems to be calling all the shots on that front is in any way encouraging.)

Second: An especially annoying thing is the way "_" desugaring works, which sometimes leads to really surprising results when it turns out that "no, you cannot use _ to stand for 'identity' in a function parameter context". This was a conflation of 'features', namely that scalac doesn't warn when a unit-returning method is actually defined with an expression that doesn't return unit. If that explanation doesn't make sense, I apologize -- it's a complicated issue that's hard to explain... which is exactly my problem with some of the problems I've experienced in Scala :)

Third: The whole "let's encode typeclasses as implicit parameters" thing is insane. AFAIUI this is actually a thing that was intentional. Unless you have actual real+usable dependent types, it's a huge mistake to get rid of coherence[1]. (Plus, TCs are a semantic-level feature. You don't want to force people to "encode" semantic things you care about in your concrete syntax. That way lies "[OOP] patterns".)

Fourth: The whole "for-yield" thing. WTF? Either do monadic notation like it matters or just don't do it. For-yield leads to incredibly noisy code. (It's not even consistent; try starting your for-yield with an "x = ..." line, and you'll see.) The fact that monadic notation starts with a "for" and that collection iteration starts with a "for" is NOT in any way a sign that your language design is "orthogonal" as Odersky loves to claim. It just means that the language designer didn't really understand what he was doing. (As you can probably gather, I'm a Haskeller/Idris'er/etc.) Syntax for these things matters.

Fifth: ... and a follow-up to that: @tailrec isn't enough. The fact that explicit trampolining is basically required for any monadic Scala code kills the idea of monadic programming right off the bat... unless you're willing to invest huge amounts of time into it, for-yield is just unusable. (Yes, I'm aware of the Free monad. One hopes that the performance problems have been solved, but it's still not always enough to have a Free monad.) Honestly I think it would have been better to invest time into implementing tail-rec on the JVM rather than wasting that time on @tailrec. I do mean wasting -- in any situation where @tailrec applies, I could have trivially replaced (or "encoded" that "pattern") that code with a while loop and a var.

Sixth: Why isn't Monad/Monoid/etc. in the standard library? This goes back to my third point which is that the language doesn't actually encode semantics of typeclasses, and so now we have "scalaz" with its Monad and "non" with its Monad, and "fs2" with its scalaz-stripped-down-copy-of Monad. WTF? This is not needed -- these typeclasses have stood the test of time and it's only because Scala insists on encoding them that we ended up here. (Encoding via various implicit magic tricks, the fact that everything has to be an object, etc. It's fucking ridiculous.)

Summary: There's a lot of completely accidental complexity because Scala wasn't really designed as a coherent whole... it just sort of "grew organically". (Like almost every language out there.)

[1] Yes, you gain a "choice" of instances via imports. Guess how often that's actually useful vs. harmful? Never. It's absolutely insane that your imports can change your serialization/deserializion code.


What programming languages have been developed in this manner? What were the results?


Python. It was lauded for usability and became a 30 year success: http://python-history.blogspot.com/2009/02/early-language-de...


None. Typical usability testing with watching people do things doesn't apply to programming languages, because of how much more complex and larger languages are in comparison. At best this approach can be used to test usability of tooling, IDEs, but that's it.


I've heard it said that the creator of Elm regularly pairs with newbies and adjusts the language based on what they find difficult, but I don't have a citation.


If I recall correctly Microsoft did some regarding VB.


When we're trying to balance a trade off, I often wish we could just do A/B testing on it, but that's not realistic for languages.


What you can do is run over a large corpus of programs (crates.io? GitHub?) to identify the places where the different alternatives would impact code, collect metrics on them, and pull some sample programs to investigate the impact on them of the different alternatives. Is anything like that being done?

The HTML5 spec was developed in this way, and some of the code-health folks at Google were just starting to operate this way when I left, but I haven't heard of many other language-design efforts working like this. The technology & data exists now, with large public corpora of open-source code, parsing libraries that can identify patterns in it, and big-data tools to run these compiler tools over a large corpus. It's a bit of a paradigm shift from how languages are typically developed, though.


Russ Cox did this when figuring out how to seamlessly add monotonic elapsed time measurements to Go:

https://github.com/golang/proposal/blob/master/design/12914-...


We use crater to check potentially breaking changes against our entire package ecosystem, but it's not as easy to test which of two syntaxes for a new feature will be easier to use by looking at existing code.


They may not do user studies like this, but for sure they adapt to user feedback.

https://thefeedbackloop.xyz/stroustrups-rule-and-layering-ov...


God yes. If anybody must remember only one concept about ergonomics, it's that.


After writing Rust in production for a while, the biggest bugbear I have is the naming/file structure.

I end up with this a lot:

    src/main.rs
    src/combobulator/mod.rs
    src/combobulator/tests.rs
    src/tests.rs
    src/somethingelse/tests.rs
    src/somethingelse/mod.rs
Because I find tests in the same file a bit confusing. It's really easy with maven-style layouts to know that "only things in main/java or main/scala get compiled and go into the jar". "src/test/*" and "src/main/resources" are for me. The same thing applies for cargo.tomls and resources - there's not really a way to see what goes into the executable from the file structure.

But this isn't the biggest problem with having things called "mod.rs". That would be if I open 5 mod.rs's in a text editor with tabs, I have no idea what goes with what.

I know that tests should go under tests/, but that's specifically for integration tests. Integration tests are an order of magnitude less likely to get written imo, and if they are they'll probably get written as unit tests anyway.

If anyone has any top tips for how to structure larger Rust projects while separating unit tests into different files, please let me know!


I prefer to keep tests in a `mod tests { ... }` block at the end of the source file, which provides comparable benefits and separation. I also prefer to use `combobulator.rs` rather than `combobulator/mod.rs`, for multiple reasons including filename ambiguity. In this case, you'd have:

    src/main.rs
    src/combobulator.rs
    src/somethingelse.rs
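
For reference, the pattern inside each file looks something like this (a minimal sketch; combobulate is a made-up function):

    // src/combobulator.rs
    pub fn combobulate(x: u32) -> u32 {
        x + 1
    }

    // Compiled only for `cargo test`, so it never bloats the release binary.
    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn adds_one() {
            assert_eq!(combobulate(1), 2);
        }
    }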


Too much code in one place. Makes it hard to read the file and difficult to see what is code and what are tests. `wc` can no longer give you a quick approximation of code size either.


In case you didn't see, I mentioned there is a way to do this right now:

https://news.ycombinator.com/item?id=13791945


So I can keep a separate src/ and tests/ hierarchy? Why are the Rust people so insistent on this, even though most seem to want to decouple their test and src trees?


I think this is a very common standard. At least, it's what I have seen most often.


One approach for this that I haven't really seen used much is to use a macro to get around it. In your combobulator.rs, just add this at the bottom of the file:

    #[cfg(test)]
    mod tests {
        include!("combobulator_tests.rs");
    }
Then write your tests there. It allows you to write unit tests against private things, but also allows you to decouple the files to keep things cleaner.


I have no tips to fix it. When Cargo went gold long ago I complained about this and was told I'd get used to it. I never have. The test layout is painful and I really hope it gets changed, or at least that an alternative gets added. I want all my tests for a module in one place. I hope this stops being a problem.


Additionally, tests in "tests/" are compiled into independent binaries that are each run separately, which results in verbose and redundant output when run.

Also, if you keep your tests under "src/", you can't just stick them wherever you want; you need to have #[cfg(test)] on every module declaration leading to the test function, otherwise the test is silently ignored. From time to time I find old tests that have never run because of a missing attribute somewhere up the module hierarchy.


Hm, #[cfg(test)] should only control whether they're included in non-test builds; it shouldn't ignore them entirely.


Speaking of removing friction, there are three areas that have caused me grief when I wrote Rust code:

1. Error handling. The lack of built-in support for multi-error or error union in Result is painful when dealing with different types of error in a function. Support for Result<Value, Error1 | Error2 | Error3> would be helpful. Or maybe support for easily converting one type of error to another. Right now there's lots of boilerplate code to deal with error conversion. Error chaining would be nice, too.

2. Lack of a stack trace when an error occurs. Right now the stack trace starts when panic!() is called, which is kind of late.

3. Better support for conversion between &str and String. Dealing with strings is so prevalent in programming that making it easier to work with the two types would be a huge boost to productivity.

Edit: another item

4. Support for partially applied functions, i.e. binding a subset of arguments to a function pointer. Currently there's no way to bind the self argument in Option/Result chaining calls. Basically the Option/Result chain (.and_then, .map, etc.) only carries forward the value of the Option/Result and nothing else. It would be nice to put a partially applied function in the chain, e.g. result.and_then(self.func1) where func1 has the self argument bound. Or in more general form, result.and_then(func1("param1", param2, _)) where func1's first and second parameters have been bound up front and the value of result will be passed in as the 3rd parameter.


> 1. Error handling. The lack of built-in support for multi-error or error union in Result is painful when dealing with different types of error in a function. Support for Result<Value, Error1 | Error2 | Error3> would be helpful. Or maybe support for easily converting one type of error to another. Right now there's lots of boilerplate code to deal with error conversion. Error chaining would be nice, too.

There are a couple of crates that support this; personally, I recommend the "error-chain" crate. However, I do wish that Rust promoted the most capable of those to the standard library.

> 2. Lack of a stack trace when an error occurs. Right now the stack trace starts when panic!() is called, which is kind of late.

error-chain provides that.

> 3. Better support for conversion between &str and String. Dealing with strings is so prevalent in programming that making it easier to work with the two types would be a huge boost to productivity.

Can you give some specific examples of cases you've found cumbersome?

String has a Deref instance for &str, so taking a reference to a String automatically works as a &str. You can also call .as_str().

Going in the other direction, you can call .to_string() to make a copy of a &str as a new String.
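
A minimal sketch of both directions (takes_str is a made-up helper):

    fn takes_str(s: &str) -> usize { s.len() }

    fn main() {
        let owned = String::from("hello");
        takes_str(&owned);         // &String coerces to &str via Deref
        takes_str(owned.as_str()); // or explicitly
        let copied: String = "hi".to_string(); // &str -> String allocates a copy
        takes_str(&copied);
    }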


1. Some of the 3rd-party error-chain libraries help somewhat, but they don't remove the boilerplate of dealing with different types of error. I just want to be able to do:

    fn func1() -> Result<MyStruct, Error1 | Error2> {
        let foo = foo()?;  // which can return Error1
        let bar = bar()?;  // which can return Error2
        ...
    }
2. Error chain only shows the stack where I explicitly add it to the chain. Anything underneath is not shown. All this should be done by the runtime instead of forcing developers to add code to handle it.

Also, error handling is so prevalent in Rust that if error chaining is the way to go, it should be built in. As it is right now, everyone hits a brick wall with Result and then hunts around for the same solution.

3. It's the other way around &str to String, having to call .to_string() everywhere. Make it implicit and automatic if the type expects a String while a &str is passed in.


I completely agree that Rust ought to build in support for the error-chaining pattern. I think I'd still prefer to have a named type, but the standard library should provide a standard way to construct that type.

> 3. It's the other way around &str to String, having to call .to_string() everywhere. Make it implicit if the type expects a String while a &str is passed in.

You can't turn a reference like &str into an owned type like String without making a copy, and Rust doesn't do implicit copies (among many other reasons, because doing so would make it harder to notice code patterns that will lead to poor performance). So you'll always have to have some explicit indication that you want to make a copy.

In general, most functions should accept &str parameters rather than String, for exactly that reason; you should rarely run into functions that want a String. You can also use the "Cow" type if you want to support both owned and borrowed strings in the same structure.
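
For example, a minimal sketch of the Cow approach (normalize is a made-up function): it borrows when it can and allocates only when it must.

    use std::borrow::Cow;

    fn normalize(input: &str) -> Cow<str> {
        if input.contains(' ') {
            Cow::Owned(input.replace(' ', "_")) // needs a new String
        } else {
            Cow::Borrowed(input)                // no allocation
        }
    }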


> I think I'd still prefer to have a named type, but the standard library should provide a standard way to construct that type.

Having an ad-hoc error union Result per function makes it lightweight and means less friction when writing code. I would go one step further: let the compiler build the error union automatically.

    fn func1() -> Result<MyStruct, _> {
        let foo = foo()?;   // might return Error1
        let bar = bar()?;   // might return Error2
        ...
    }
The compiler infers the list of possible error types returned from the called functions. func1() would automatically have the return signature Result<MyStruct, Error1 | Error2>.

Put the type inference to good use.


That kind of automatic sum-typing seems like an interesting idea. You might consider bringing it to the Rust internals forum, posting it as a pre-RFC, and discussing it as compared to some of the alternatives. That might lead to either a change in the direction you hope for, or the unearthing of other ergonomic approaches.


This idea has been waved around a bit, but in the form of `impl Error` where `_` is. That is, inside the function the `E1 | E2 | ...` type is being built, and if you have automatic dispatch for `Error`'s methods then it will work with `impl Error`.

Actual global inference has never been on the table and still isn't.


I believe there are conversations of being able to promote enum variants to full-blown types, though I don't know if there's any project for lightweight enums of types.


I believe inferring sum types like that would make the inference system far more sensitive and complicated and possibly even slower. For one, it would likely mean mistakes like, say, assigning two different types to a variable may result in weird error messages, and may also limit how often coercions trigger.


I seem to run into functions that require String instead of &str all the time, most recently the "assert_eq!" macro which -- maybe I'm not understanding it correctly -- refuses to compare a &str to a String.

Can you tell me how I would remove the excessive uses of ".to_string()" from the tests in this module? https://github.com/rspeer/rust-nlp-tools/blob/master/languag...


  fn main() {
      let s1 = "Hello, world"; // &str
      let s2 = String::from("Hello, world"); // String
      
      assert_eq!(s1, s2);
      assert_eq!(s2, s1);
  }
compiles just fine for me?


Okay, once again I try to find an example from my code, and once again it turns out that Rust makes it simple in the simple case, but the more complex cases are still confusing.

If I change the test value Some("zh".to_string()) into Some("zh"), it points to that line and tells me:

    expected struct `std::string::String`, found &str
Sure, it's a different situation because the value is wrapped in an Option. But if String and &str are truly compatible, I would never expect to see that error message.


Ah yes. So this is an area where the diagnostic is _slightly_ misleading; it's trying to point out that you have two different types, and that that's the difference between the two of them. It's not that they can't be compared. Maybe a bug should be filed...

String and &str can normally be compared because of Deref, that is, &String derefs to &str. Option, on the other hand, does not implement Deref, and so no coercion happens. Rust doesn't do a lot of coercions, but Deref is one of the bigger ones.

Back to the _actual_ topic at hand, I can see how this can be a pain point until you know the rules, though. :/
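
To make that concrete, here's a minimal sketch (not the code from the linked module):

    fn main() {
        let s = String::from("zh");
        assert_eq!(s, "zh"); // fine: String and &str compare directly
        // assert_eq!(Some(s), Some("zh")); // error: Option<String> vs Option<&str>
        assert_eq!(Some(s), Some("zh".to_string())); // fine once the types match
    }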


> String and &str can normally be compared because of Deref, that is, &String derefs to &str. Option, on the other hand, does not implement Deref, and so no coercion happens.

For this particular case, any particular reason we couldn't add an impl of PartialEq? In fact, once we have specialization, couldn't we have a general impl of PartialEq for Options of Deref types?


  impl<T> PartialEq<Option<T>> for Option<T> where T: PartialEq<T>
does exist, but I'm not 100% sure why Deref doesn't kick in here, I just know that it doesn't.

> couldn't we have a general impl of PartialEq for Options of Deref types?

I think the issue is None. You'd get a null pointer, which doesn't make any sense in safe rust.


> I think the issue is None. You'd get a null pointer, which doesn't make any sense in safe rust.

No, I don't mean an impl between Option<T> and T, I mean a bidirectional impl of PartialEq between Option<T> and Option<U> where U is T's Deref::Target.

In that case, None doesn't cause an issue; None == None, and Some(t) == Some(u) iff *t == u


Ah right, yes.


    -> Result<MyStruct, Box<Error>>
accepts almost any type of error (via dynamic dispatch of things that implement std::error::Error trait).
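
For example, a minimal sketch (read_config is a made-up function):

    use std::error::Error;
    use std::fs::File;

    // Any error type implementing std::error::Error converts into Box<Error>,
    // so `?` works for io::Error and friends without manual glue code.
    fn read_config(path: &str) -> Result<File, Box<Error>> {
        let f = File::open(path)?;
        Ok(f)
    }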


That's a good idea. It's the shotgun approach.


> String has a Deref instance for &str, so taking a reference to a String automatically works as a &str. You can also call .as_str().

> Going in the other direction, you can call .to_string() to make a copy of a &str as a new String.

As a beginner in Rust myself, I would like to observe that &str vs. String problems come up all the time for me. Every one of them is quickly resolved by knowing where to add an & or a method call, it seems, and experts know when to do this, and they stop noticing the problem because it's only the beginners who are doing it wrong.

But when every beginner is doing it wrong and every expert isn't, there is an ergonomics problem.


I think part of the problem might be that String and &str are not related in any intuitive way. My initial work in Rust was littered with what the heck is this, what's a str that there's a reference to, etc.

I understand now, but initially it made absolutely no sense that String is a heap-allocated owned string and &str is a reference to a chunk of string data of known length stored somewhere that I probably don't have to care about.

It may be a teaching thing, but I do wonder if they could have been named better. I just don't really know what else you'd call them without them becoming overly verbose.


I often joke that String -> StrBuf is my #1 wishlist item for a theoretical Rust 2.0.


I absolutely agree with you, and it seems worth looking closely at the problem to see if some change would make it easier to learn. However, the ownership and borrowing system represents the single biggest innovation and key idea of Rust, as well as the thing with the least ability to apply learnings from other languages. So while we should look closely at any roadblocks that make it harder to learn than necessary, we can't make it entirely transparent.


This is one reason why the new book focuses on teaching ownership and borrowing with String and &str. You're 100% right.


> Support for Result<Value, Error1 | Error2 | Error3> would be helpful. Or may be support for easily converting one type of error to another.

It sounds like you want to create a new error type that's an enum of Error1, Error2, Error3 and then just implement the From traits. Then you can use ? with no boilerplate error handling.
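
For concreteness, a minimal sketch of that pattern (Error1, Error2, foo and bar are all stand-ins):

    #[derive(Debug)]
    struct Error1;
    #[derive(Debug)]
    struct Error2;

    #[derive(Debug)]
    enum MyError {
        One(Error1),
        Two(Error2),
    }

    impl From<Error1> for MyError {
        fn from(e: Error1) -> MyError { MyError::One(e) }
    }
    impl From<Error2> for MyError {
        fn from(e: Error2) -> MyError { MyError::Two(e) }
    }

    fn foo() -> Result<u32, Error1> { Ok(1) }
    fn bar() -> Result<u32, Error2> { Ok(2) }

    // `?` converts each error into MyError through the From impls above.
    fn func1() -> Result<u32, MyError> {
        Ok(foo()? + bar()?)
    }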

I'm sure you've already considered this though. Why wouldn't that work for you?


Declaring the new enum and implementing it are the boilerplate.


Lots of typing and mental overload. Since we are talking about ergonomics and reducing friction, it would be good to make Result handling easier.


The From trait can be derived using my derive_more crate. That would at least decrease the boilerplate a bit.

http://jeltef.github.io/derive_more/derive_more/


I don't understand: there are no stack traces unless you use a library?


for 1. there's map_err(..) which accepts a closure

e.g.

`try!(foo().map_err(|e| format!("{}", e.thing)));`


That's what I'm doing right now all over the place, and I hate the boilerplate.


ah, fair enough!


I especially like this approach:

> Often, the heart of the matter is the question of what to make implicit. In the rest of this post, I’ll present a basic framework for thinking about this question, and then apply that framework to three areas of Rust […]

What's proposed here is a universally good way to think about what to make implicit. The proposed changes to Rust are just some applications of this.


"What's proposed here is a universally good way to think about what to make implicit."

I had a completely opposite reaction. It ignores all of the important things that make usability good and instead focuses on the approach that essentially promotes inconsistencies in design.

"The basic thesis of this post is that implicit features should balance these three dimensions. If a feature is large in one of the dimensions, it’s best to strongly limit it in the other two."


Based on the actual examples, I'm not sure it promotes inconsistency in design, as long as it's not the sole deciding criterion. As a tool to do a first-pass exclusion of ideas I think it has a lot of promise.

The major problem I see is that it's fairly subjective at the moment in what you consider when thinking about those criteria. For example, in the section about eliminating the need for mod (which was admittedly presented as radical), the following was stated: "You could instead imagine the filesystem hierarchy directly informing the module system hierarchy. The concerns about limited context and applicability work out pretty much the same way as with Cargo.toml, and the learnability and ergonomic gains are significant." I think this is a case where the learnability would suffer quite a bit. If I understand it correctly, this changes the filesystem from a unidirectional resource to a bidirectional one, where the presence of arbitrary files not specified (as opposed to a well understood singular file, such as Cargo.toml) might change how the code is interpreted.


I think the idea is that `foo::something()` without a module declaration either implies the existence of `foo.rs`, or fails to compile. I'd agree that it would be bad if creating `foo.rs` changed the behavior of code that was previously doing something else (other than breaking the build).

What would happen, I think, is that adding a `mod foo { ... }` definition would change the behavior of code that previously implicitly referenced a `foo.rs` file. But that seems less crazy to me, since you've got a change in one file affecting something else in that same file. Or it might make sense for that to be a "conflicting module definitions" error.


> It ignores all of the important things that make usability good

Can you give some examples?

To me the approach presented here is not just about coming up with an initial design (where I agree there are more aspects to be considered), but about iterating on existing design aspects (in this case language syntax and semantics) to make them more usable. Seeing that the first dimension the author mentions is 'Applicability', it seems they pay attention to consistency, but I can see it is not explicitly mentioned.


The basics of good design are to satisfy all of: simplicity, flexibility, consistency, universality, familiarity. That way users won't have to learn a lot to do something simple, nor learn a lot for a lot of different cases, but can intuitively reuse what they already learnt across all similar cases.

So, what can be made implicit is irrelevant; the proper question would be what can be done to improve user experience. And there's a lot, but the design-by-committee kind of process is going to work against it.


> What's proposed here is a universally good way to think about what to make implicit.

You might want to vary your kool aid a bit!


I'm really encouraged by this post. I ran into a situation somewhat related to the borrowing in match patterns this week [1], and whilst it's only a mild annoyance, it's lovely that it might get smoothed out. Today, I started using modules in anger, and was immediately mildly annoyed by the need to explicitly reference crates in my code, when they're already in my Cargo.toml, and to declare modules, when they're implied by my file structure, so I'm happy to see that that is on the radar too!

The file structure one makes me laugh, because one language that does implicitly create modules from file structure, in exactly the way Rust would need to, is Python, which is the one with the whole "explicit is better than implicit" deal!

[1] https://www.reddit.com/r/rust/comments/5whke7/deref_coercion...


Please fork the language and call it something like RustyRuby if you prefer things to be implicit.


As I've said elsewhere, the Rust macro package is close to unusable. It makes easy stuff difficult and it doesn't exactly help with difficult stuff.

It would be interesting to take the number of macros defined in the crates corpus, divide by total line count, and compare that with other languages. I do not think that I am alone in not using it. Yes, I use macros; I just don't program macros.

Obviously, Java has shown that you can survive without a macro pre-processor. That was even a point Gosling+Co made in a white paper I read way back in the day. But I do believe that if you are going to have a macro processor, it should be an expedient. Rust's macro processor is not expedient. It is its own impediment.

I'm used to using macros. I use them in C and I use them in assembly. These are both low level languages which Rust claims to be. Not being able to use Rust's macros in the style to which I've become accustomed is infuriating.


Macros are being largely re-done, see http://words.steveklabnik.com/an-overview-of-macros-in-rust for an overview.

Honestly, I very rarely use macros and have written two in my years of Rust. You almost never need them, or at least, that's my experience.


I guess it depends on what you work on. Both of my primary Rust projects (a .NET metadata parser & a Game Boy emulator) are heavily dependent on macro usage, and they implement multiple new macros. Both do lots of binary parsing, so I use bitflags, enum_primitive and bitfield all the time. For instance, I use this [1] two-macro monstrosity to parse tagged unions from the CLR metadata.

[1] https://github.com/paavohuhtala/clri/blob/b8a9057397ef95c0de...


Yup, I do think it varies this way.


And even in Clojure, where macros are very easy to use -- I write them very infrequently. They're seductive, but usually a bad idea.


  #define M_PI        3.14159265358979323846264338327950288


3.141592653589793115997963468544185161590576171875 is the exact decimal representation for the 64-bit IEEE-754 number that's closest to pi (viz. 0x400921FB54442D18).

Any time you see a decimal floating-point constant with a nonzero fractional part that doesn't end in '5', you're looking at a bug.

EDIT: As long as this grizzled old Fortran programmer is giving out free advice, I'll add two more items every programmer should know about binary floating-point:

a) Every binary floating-point number can be represented exactly in decimal notation if you use enough digits.

b) Those decimal values are the only ones that can be exactly converted to binary; all of the rest require rounding.
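
Both facts are easy to check; here's a minimal Rust sketch:

    fn main() {
        // Prints the exact decimal expansion of the f64 nearest pi:
        // 3.141592653589793115997963468544185161590576171875
        println!("{:.48}", std::f64::consts::PI);
    }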


> Any time you see a decimal floating-point constant with a nonzero fractional part that doesn't end in '5', you're looking at a bug.

That's just silly. If you're writing some famous mathematical constant, the digits should match that constant, and not the requirements of the machine. (Except for the last one being rounded off.) Suppose we had a floating-point machine that gave us maximum 4 digits of decimal precision. I wouldn't define the PI constant as 3.145. That would just look like a typo to people who have PI memorized to half a dozen digits or more. I'd make it 3.14159 (or more) and let the darn compiler find the nearest approximation on the floating-point axis.


Any exact decimal representation of a specific binary floating-point number that's finite and not an integer must end in the digit '5' (perhaps with trailing zeroes). This is because its fractional part is (the sum of) a set of powers of two with negative exponents, and their exact decimal representations (0.5, 0.25, 0.125, &c.) all end in '5' (proof by induction is obvious and left to the reader).


> Any time you see a decimal floating-point constant with a nonzero fractional part that doesn't end in '5', you're looking at a bug.

Depends on the language. For example, here it is in Go:

    Pi  = 3.14159265358979323846264338327950288419716939937510582097494459 // http://oeis.org/A000796


What the person above you is saying, I think, is to remember that computers usually work in base 2. This applies to IEEE floating point, where the mantissa is in base 2; when you represent fractions in base two, they're sums of powers of two: 1/2 (.5), 1/4 (.25), 1/8 (.125), etc. What he's asserting, I think, is that any power-of-two fraction, or any combination of those (in binary), results in a number ending in 5 when represented in decimal. Anything else is going to be rounded to the nearest representable number (which ends in 5).

So, go might have that value in its source, but it's getting rounded to something that would, if represented in decimal, end in 5.


In Go, floating-point constants may have very high precision, so that arithmetic involving them is more accurate. The constants defined in the math package are given with many more digits than are available in a float64.

Having so many digits available means that calculations like Pi/2 or other more intricate evaluations can carry more precision until the result is assigned, making calculations involving constants easier to write without losing precision. It also means that there is no occasion in which the floating-point corner cases like infinities, soft underflows, and NaNs arise in constant expressions.


There's an argument to be made the other way too. If you're using an unusual default rounding direction and you care which direction your PI constant is rounded, you might prefer it if PI rounded in the same rounding direction as the rest of your floating point math. In that case you'd want a constant that is equivalent to PI under all rounding modes.


The code below works for me

    #define PI1 3.141592653589793115997963468544185161590576171875
    double pi1 = PI1;

    #define PI2 3.14159265358979323846264338327950288
    double pi2 = PI2;

    assert(pi1 == pi2);

(edit: or even 3.14159265358979323846)


Your PI2 rounds to PI1 under the rounding mode used at compilation time. Print it out with "%50.48f" (or FORMAT(F50.48)) and you'll see PI1. But PI1 is independent of rounding mode.


Sure. I thought you were implying more than that. When you called it a bug I thought you were implying that the use of one over another would alter the output of a program. My bad.


I thought that style was considered bad form in modern C/C++ (esp. the latter, with `constexpr`). What's wrong with

    const PI: f64 = 3.14159265358979323846264338327950288;

?


What machine can represent all those decimal digits so precisely that the ending decimal digits ...0288 are exactly right and not ...0287 ?

I'll accept without looking it up that the statement is correct syntax in some language (it doesn't look like C++ to me).


See above. The constant is (a) not pi, and (b) not an exact decimal representation of any binary floating-point number.


    const M_PI: f64 = 3.14159265358979323846264338327950288;

You wouldn't use a macro for something like that in Rust.


I don't understand what this means. You'd use const in Rust for this, not a macro.


That was just a trivial example. Yes, you could use a const which would be difficult to share across files. Moreover, if you wanted to do something equally textual

  #define PASS_VERBOSE if (flag_verbose && first_pass) printf
you just might be able to but only with a completely different tool.

A macro processor is not of the language; it is above the language.


A "macro" in C is not the same thing as a "macro" in Rust. By your logic, Erlang processes aren't processes because they aren't kernel processes, or Go packages aren't packages because they don't use the Java package naming convention. Nobody owns the exclusive right to define the words we use.


> A macro processor is not of the language; it is above the language.

Then feel free to use the C preprocessor with Rust, it works just as well. :P Just like it does with Python, and Java, and...


The C preprocessor can only be used with Python as long as you don't do anything multi-line:

   #define whatever(param) \
   foo: \
      bar \
      baz

I made a preprocessor some 18 years ago that could be used with Python.

Wayback Machine:

https://web.archive.org/web/20000815202258/http://users.foot...


Better yet, use m4.


> which would be difficult to share across files.

It's path::to::wherever::PI. That's it. Just like any other item. If you want to use only PI in your code, you'd use 'use', like any other name.

Your second example is something better suited to a macro, it's true.

I _think_ what you're getting at here is that you only want text substitution? I think we will have to agree to disagree if that's true :)


Well, given that you have only written two macros in your years of Rust, I would strongly encourage you or the language ergonomics initiative to openly question why this is so. Clearly, Rust has a clever approach but I'm questioning whether it is in fact a usable approach. Too much solution for not enough problem.

Yes, I do like textual substitution. Guilty. This is a common old school low level paradigm. Still, the underlying language and its compiler exist below to enforce the rules on any atrocities I commit with my macros.


If you want textual substitution, you are not limited by anything that Rust offers. You can incorporate any existing text preprocessor - of which there's a multitude, and at least cpp and m4 are in pretty much any Unix system - into your compile pipelines. By the very definition of textual substitution, it is completely orthogonal to the meaning of the output (in this case, Rust code).

The kind of macros Rust implements, on the other hand, are the kind that have to be language-specific, because they deal with the syntax tree of that particular language. They also enable many things that are outright impossible with text substitution, such as hygienic macros.
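
As an illustration of the syntactic kind, the PASS_VERBOSE example from upthread might look something like this in Rust (a hedged sketch, not a drop-in equivalent):

    macro_rules! pass_verbose {
        ($flag:expr, $($arg:tt)*) => {
            // Operates on syntax trees, not text: `$flag` is a real
            // expression, and hygiene protects surrounding names.
            if $flag {
                println!($($arg)*);
            }
        };
    }

    fn main() {
        let verbose = true;
        pass_verbose!(verbose, "pass {} complete", 1);
    }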


> Clearly, Rust has a clever approach but I'm questioning whether it is in fact a usable approach. [...] Yes, I do like textual substitution.

These two statements are not consistent. Textual substitution is not usable. The pitfalls with it have been well documented for decades, and yet the same problems persist. Why on earth would you want to perpetuate the list of problems caused by such a facility?

Syntactic macros are absolutely superior to textual substitution. You can take issue with Rust's currently limited support for macros, but to propose textual substitution as a viable and more usable alternative is simply absurd.


What I'm saying is, I personally find Rust expressive enough to never need macros, and so their ease of use, to me personally, is not an issue either way. (Well, other than the import bit, I do care about that.)

Those who do use and write them heavily are the ones actually involved in the macros 2.0 effort. There are certainly flaws.


You know what I like? Sensible error messages, which you tend to lose if you use macros for anything complex (disclaimer: I am very guilty here)


I very rarely write macros despite having written thousands of lines of Rust, but they are super useful on occasion. Often in Rust something that you might have used preprocessor macro tricks to do in C can be done without macros at all. At other times, Rust macros will let you do things that would be impossible to express in C.


You want textual macros. Rust doesn't have this feature, it has a syntactic macro feature inspired by the Lisp family. Sorry.


> Obviously, Java has shown that you can survive without a macro pre-processor. That was even a point Gosling+Co made in a white paper I read way back in the day. But I do believe that if you are going to have a macro processor, it should be an expedient. Rust's macro processor is not expedient. It is its own impediment.

Annotation Processors (iirc Java 1.5) are clearly a form of a pre-processor. It's not macros / textual expansion, though.

Similarly C++ mostly gets along without macros since it contains a capable meta-programming system -- and I think this is the more important point here; for many tasks meta-programming is just a handy thing to have. Dynamic languages don't have that problem, since their runtime is their meta-programming system as well.


Another thing that has helped Java was the decision to use a JIT.

Most Java JITs are able to remove code if it is proven unreachable, which allows using pure Java code for what would be #ifdef in C, with the caveat that all branches must compile.


You would think that AOT would be the right time to remove dead code.


Only if it can be fully determined at compile time.

For example a debug flag can depend on a command line parameter, but it will be used to initialize a constant variable.

So the code can be shipped with both versions, and the JIT will just remove the unused branches.


Absolutely agree, and I think this issue is much bigger than everything described in that article put together. Solving the macro syntax issue would solve a lot of issues with verbosity.


I really hope one day I can build a macro for the ternary operator with ? and :.


Is using the if statement as a ternary that much worse?

    let x = if a { b } else { c };
Sure, it costs a few characters, but I appreciate the consistency and clarity.


Much agreed - I quite regularly write code like that, and it's very clear on reading the first thing after the `=` that you're doing something conditional. You can also embed complex expressions in there without confusion. The ternary syntax requires you to read the entire statement before you even know what kinds of expressions it contains, then backtrack to figure it out.


I would strictly call that an "if expression" rather than an "if statement". It is equivalent to ?:, even though it is also a bit more verbose.


Verbose.


The thing is, Rust's Option type means the primary use of the ternary in other languages - `foo ? foo : somedefault` - is entirely unnecessary in Rust. Other uses of it tend to benefit significantly from being more obvious about what's happening.

I think a ternary operator would be the first construct in Rust that prevents reading a statement from left to right.
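
For instance, a minimal sketch (maybe_port is a made-up binding):

    fn main() {
        let maybe_port: Option<u16> = None;
        // `foo ? foo : somedefault` becomes:
        let port = maybe_port.unwrap_or(8080);
        assert_eq!(port, 8080);
    }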


No offence, but I hope you never can, because if you can, others can, and since Rust already uses '?' for something else, it's likely to just get confusing. Is the if syntax for ternary really that bad?


It's in a different context. Reusing a keyword or an operator in a different context happens all the time in languages.


Is it that different? Assuming you would use '?' and ':', presumable we might see both the following then:

    let foo = foo()?;
    let bar = bar() ? this() : that();


? already has different forms as of now. You can do foo()?; or foo()?.bar(). People don't seem to be confused.

expr ? expr : expr is just another form. The ternary form is so well known that I doubt people have problem recognizing it.


The usage of ? in foo()?.bar() is no different from its usage in foo()?; ...


Are those really 2 different forms? I was under the impression that these were the same:

    foo()?.bar()
    (foo()?).bar()


> Right now, such a signature would be accepted, but if you tried to use any of map’s methods, you’d get an error that K needs to be Hash and Eq, and have to go back and add those bounds. That’s an example of the compiler being pedantic in a way that can interrupt your flow, and doesn’t really add anything; the fact that we’re using K as a hashmap key essentially forces some additional assumptions about the type. But the compiler is making us spell out those assumptions explicitly in the signature.

I feel this exact same way with Go. E.g.

    x := map[string]map[string]int{
        "key": map[string]int{
            "another": 10,
        },
    }
Given that the outer type signature says that the `value` of the map should be a `map[string]int`, it's sometimes quite annoying to specify that inner type over again.


You can leave the inner type out...

https://play.golang.org/p/m3bLmneArB


Ah, maybe it's with alias types that it wouldn't work?


Actually I just tried that and it worked also. But I think you're right that this hasn't always been the case because I remember being annoyed about it. Or maybe we're both crazy.

You definitely need to specify the type for an interface, but that makes sense.


You're not crazy. I also have vague memories of maps being more obnoxious in Go. :)

The facts are that map literals did change a bit in Go 1.5 (i.e. fairly recently): full literal specification for values could already be elided back then; it was keys that required full specification: https://golang.org/doc/go1.5#map_literals

If you go way more back, Go 1 also saved some typing on values, in case they were pointers: https://golang.org/doc/go1#literals


Type inference should solve that, if Go ever gets it.


This has already been requested quite often; the Go team refused, claiming it wasn't "readable". "Readability" is their number one excuse for dismissing a feature without discussing it in the first place.


Great!

But do not forget to document what is implicit. Otherwise, it is magic and makes things more confusing. That was my impression from my last attempt to learn Rust.


My biggest and probably only real frustration with Rust is that modules and crates live in the same namespace. That makes stuff incredibly confusing to teach and read. I can otherwise live with the explicit extern/mod if needed.
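
For example, the collision looks like this (made-up names):

    extern crate foo;  // the crate `foo` now lives at the crate root
    mod foo;           // error: the name `foo` is defined multiple times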


I also had problems understanding this whole crate system when I was playing with Rust.


This is extremely common, and is one of the reasons why it's such a big part of this post.


But in the post, the cause of the pain (conventions) is described as the cure, and that's really awful.


They need to live in the same namespace, otherwise you'd practically need different syntax for importing something from a crate, or a module. You can use `extern crate foo as bar`, if you are having namespace issues.


You don't need different syntax. Just prefix all non-crate imports with your own crate name.


A bit of inspiration can be gleaned from the work of Dr Stefik on Evidence-based language design [1][2].

[1] https://www.youtube.com/watch?v=uEFrE6cgVNY [2] http://dl.acm.org/citation.cfm?id=2534973


Thank you for this link! I'd been trying to find this again...


The implied bounds one reminded me of a very similar thing in Haskell: https://prime.haskell.org/wiki/NoDatatypeContexts . Basically

    data Hashable a => Set a = ...
is completely useless. It only forces you to add constraints to functions that, if necessary, would be required anyway.

Not to be confused with existential quantification

    data ExistentialSet = forall a . Hashable a => ...
which carries a reference to the hash function in the instances, similar to trait objects in rust.


DatatypeContexts turns on a feature that Rust already has - constraints on type parameters to datatypes. This is about propagating constraints from the datatypes to functions over it.

Haskell hasn't found DatatypeContexts very useful, but Rust has. In my opinion this is largely because of the difference in what our type systems mean - in Rust, types carry a lot more information about the memory model & data layout than in Haskell. This has led to a different skew.

In Haskell, DatatypeContexts also create struggles with higher kinded polymorphism (you can't implement Functor for the definition of Set you just provided, for example), but in Rust, the same memory model concerns that make datatype constraints useful make traits like Functor less useful.
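
For comparison, a minimal Rust sketch (names made up) of a constraint on the datatype itself; today the bound has to be repeated on every function that mentions the type, which is exactly what implied bounds would remove:

    use std::hash::Hash;

    struct Set<T: Hash> {
        items: Vec<T>,
    }

    // `T: Hash` must be restated here, even though `Set<T>` already implies it.
    fn size_of_set<T: Hash>(set: &Set<T>) -> usize {
        set.items.len()
    }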


Great initiative

By its very nature, Rust is harder (than, let's say, Python or JS). It is compiled, there's not much runtime magic to rely on, and low-level is hard.

But thinking about this and trying to make it easier is important


What does hard have to do with compiled or not? Visual Basic was actually compiled, and it was and still is one of the easiest languages to learn ever.


VB compiled to a VM that was embedded in the .exe file


P-code was only used up to version 6.

Version 6 introduced an actual AOT compiler to native code.

Also its older brother, QuickBasic, compiled to native code.

Incidentally, VB is now again being AOT compiled to native code, via .NET Native and CoreRT.


If you are looking for easy magic, Ruby and JS will serve you better.


I'm an ergonomics nut and I've been looking to learn Rust. Any chance they're looking for a set of guinea pigs to report their experiences as new users, or are they mostly working on already-known issues?


We are always looking for experience reports. Making a post on internals.rust-lang.org would be great!


I hope the ergonomics initiative takes it towards Java rather than Python. For all the hate it gets, Java's explicitness is a boon when maintaining large scale systems and preventing bugs.


What parts of Java are you thinking of? When I think of explicitness Java has but Python doesn't, types mostly come to mind, but Rust is already statically typed, so that can't be what you mean.


I was able to work with some rust developers at Hack Illinois recently. We started the 2017 Rust cookbook [0] with Brian Anderson.

[0]: https://github.com/brson/rust-cookbook


Very cool! Took me a while to find the rendered version, though: https://brson.github.io/rust-cookbook/


I still can't get my head around rust. While all those features definitely make sense, I find it very confusing sometimes.

Is there something like rust for c++ programmers?


Is this before or after leafing through their book? Because I think there are at least two stages of not-understanding Rust. One is before taking a look at the book and docs in which you can't make any sense of it at all. Another one is after scanning the book and trying some of the examples in which you really start to understand how you can't make any sense of it at all.


https://github.com/nrc/r4cppp exists, I haven't read it though. Or at least, not in a very long time.


There is a porting guide for C/C++ [0] that may help clarify the differences between the two. I have not read it though.

[0] https://locka99.gitbooks.io/a-guide-to-porting-c-to-rust/con...


O'Reilly has a pre-release book that seems quite good at explaining Rust from a C++ perspective.

http://shop.oreilly.com/product/0636920040385.do


It's not really that much C++; try to drop your C++ glasses. Might help to pass through OCaml to collect $200.


Curious, is it harder for C++ programmers to learn Rust than for devs coming from higher-level languages? Because I've seen Rust have a lot more success recruiting devs from Python, JS, even PHP.


Not really, it is mostly a culture thing.

There are two main communities in C++: those that embrace safety and take advantage of the language features to improve their productivity, while going down to lower-level constructs if performance needs an extra push.

Then there are those that are kind of exiled C developers using a C++ compiler, forced to migrate to C++ at work, trying to use it as C with a C++ compiler.

This is the group that has more issues with Rust.


That may be it. Perhaps the first camp is more comfortable and familiar with the "ocaml'ness" of Rust, while for the second (C devs) camp the concepts are alien.


Exiled? We totally left voluntarily!


"Idea: implied bounds" sounds like a very interesting idea. It is a pain copying the bounds as author mentions. I also have worked with library code that does not consistently use trait bounds and it can lead to very confusing errors.

The thing that keeps getting me now is that there are so many types moving around with generics and traits. It would be nice if it were easier for something to be object safe, and/or if Any were more powerful. My solution, as with many things, is to route around it, but it is frustrating at times.
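
For reference, one thing that keeps a trait from being object safe is a generic method (a made-up sketch):

    trait NotObjectSafe {
        // A generic method can't be dispatched through a trait object,
        // so the compiler rejects `Box<NotObjectSafe>`.
        fn run<T>(&self, input: T);
    }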


> Any was more powerful

https://github.com/rust-lang/rfcs/pull/1849 is relevant to your interests; however, as the comments say there, associated type constructors would be needed before it could possibly be used with Any.


Yeah it seems like you would need some kind of guard to make sure you don't violate any of the lifetimes that could be tucked away. Still good work!

I guess compiler plugins and libraries will eventually fill in the gaps, until then I guess I just have to be creative :)


I was surprised that Rust actually did it the way it is now (i.e. no implicit bounds). The article says accessing the map arg's methods will fail, but isn't the entire argument itself invalid by definition? You can't instantiate the hash map without validating that K and V do implement those traits.


In the standard library today, there's no bounds on `K` on the definition of HashMap, only on the impl block containing its methods. This sort of makes HashMap a bad example for this code block; you're right that if the bounds were on the definition instead of the impl, the argument itself would be invalid.
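
Roughly (heavily simplified; the real definition also has a hasher type parameter):

    use std::hash::Hash;

    // No bounds on the type definition itself:
    struct HashMap<K, V> {
        entries: Vec<(K, V)>, // stand-in for the real representation
    }

    // The bounds live on the impl block that holds the methods:
    impl<K: Hash + Eq, V> HashMap<K, V> {
        fn insert(&mut self, k: K, v: V) {
            self.entries.push((k, v));
        }
    }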


I think it becomes useful only if whatever struct expects some extra level of behavior from the types. Basically any time a struct or a trait is coupled with another trait or needs a particular marker/lifetime to function I find myself repeating the bounds over and over.

For example in gfx-rs it might be repeating the gfx::Resources or gfx::Factory trait bounds.


With implied bounds, my gut reaction is that you get C++ templates without "concepts". I.e. the situation C++ is in today, where it's perfectly fine to instantiate a template with an implicit bound, but you only get a super confusing error message later when you try to use a particular function that this bound requires.


Python and Nim are good examples to learn from ;)


They should take on a huge project, like say, converting the linux source code into Rust.

The sheer bulk of the code will effectively "force" them to make Rust ergonomic. They might even end up with annoying things, like different sized ints on different CPUs, or... (horror of horrors)... running Rust through a pre-processor as part of its compilation.


The Rust compiler is written in Rust, so rest assured the Rust core developers spend a lot of time dealing with the language itself. Additionally Mozilla is spending quite a bit of developer time on Servo, so we have quite a few people actively writing a lot of Rust code.


Only 7 or 9 actual developers work for Mozilla on Servo, IIRC.


Well, both of those things basically already exist; you can do conditional compilation based on architecture, etc., and macros can do some of the less hacky things a preprocessor can (as well as many things it can't).

The biggest Rust project (which already has many of the core contributors working on it) is Servo, and it's probably about as ambitious as you can get while still having a nonzero chance at completion. I don't think there's much additional value in taking on a completely unfinishable project to shape the Rust language so it's good for a kind of programming (i.e. Kernel programming) that is far removed from what 99% of potential users will use it for.


There's Servo: https://servo.org/


Has the linux kernel made C ergonomic though? I guess there are some gcc extensions that can be the showcase for that.


The Linux authors don't write the C standard.


I suppose the question then is, did writing Unix make C ergonomic? :)


I think C is fairly ergonomic.

Also, Dennis Ritchie developed Unix while simultaneously developing C, so it's not hard to imagine that he added features to the language to simplify his Unix code. At one point he added 100,000 lines to Unix within a year, so he had reasons to make the language ergonomic.


Good question :) I think it did, though I'd be interested to hear an informed opinion.


From Dennis himself,

https://www.bell-labs.com/usr/dmr/www/chist.html

The best part for those that care about security:

"To encourage people to pay more attention to the official language rules, to detect legal but suspicious constructions, and to help find interface mismatches undetectable with simple mechanisms for separate compilation, Steve Johnson adapted his pcc compiler to produce lint [Johnson 79b], which scanned a set of files and remarked on dubious constructions."

So although C's designers saw the dangers of C and provided static analysis from the early days, many C developers keep ignoring it.


I, for one, see C as both unsafe and unergonomic. The many features of C++ can be seen as various attempts to make some or another thing expressible in C -- parameterized datatypes, namespacing, encapsulated resource management -- that wasn't before.


It is. After being forced to use BCPL to finish his PhD, and being used to languages like Simula, Bjarne swore he wouldn't be doing that again.

Hence why he started to design C with Classes after getting his job at AT&T.

https://www.youtube.com/watch?v=ZO0PXYMVGSU

Around 29:00.


I've been writing Rust quite frequently recently, and have enjoyed it. However, the biggest problem I've faced so far is exactly what they are trying to address. Modules can be annoying, and I wouldn't mind better optional parameters or a better way to box up trait objects.


I am not sure if this directly applies to your post, but it came to mind when I read the section about `extern crate`. It seems like the Cargo system is relied upon by almost all Rust users, and I am not sure if the following is an ergonomics problem or a lack of understanding on my part.

A couple months ago I was exploring Rust's web server capabilities after seeing the "Are we web yet?" page. I decided to try out the `iron` package.[0]

I was quickly able to serve some basic content, but I wanted to add some headers to the response.

I was able to `use iron::headers::Allow` to add an Allow header.[1]

Next, I wanted a Link header. Link wasn't available but I could get around that by defining a custom header with the `header!` macro. Unfortunately, I couldn't figure out how to get the `header!` macro for custom headers without `#[macro_use] extern crate hyper` and adding `hyper` to the Cargo.toml file.[2]

Then I wanted a `Vary` header. I was able to get that in with `use iron::headers::Vary`, but I couldn't actually create one yet! In order to create my `Vary::Items` header, I needed to also `use unicase::UniCase` and add `unicase` to my Cargo.toml.[3][4]

So within an hour or so of starting the project, my explicitly listed dependencies had grown from just `iron` to include two additional dependencies. The iron package already relies upon hyper which already relies upon unicase.

Here are some questions I am still left with. Would love any responses.

Is it possible for me to use the pieces described above without explicitly listing these crates? If not, why do I need to declare hyper as a dependency when iron is already using it? Perhaps I don't need to, and I was just unable to figure out how to get the `header!` macro from iron directly. My initial expectation was that iron would either wrap or expose every part of hyper that I might need. The same goes for hyper not allowing me to just use the same unicase it relies upon.

How am I supposed to get the correct versions of hyper and unicase to match the ones that my version of iron was shipped with? Do I have to go look them up? Can I use the latest version of `hyper` even if `iron` is a few versions behind? What version should I be specifying?

[0]: http://ironframework.io/doc/iron/

[1]: http://ironframework.io/doc/iron/headers/struct.Allow.html

[2]: https://hyper.rs/hyper/v0.9.9/hyper/macro.header!.html

[3]: http://ironframework.io/doc/iron/headers/enum.Vary.html

[4]: http://ironframework.io/doc/unicase/struct.UniCase.html


> Is it possible for me to use the pieces described above without explicitly listing these crates?

Not unless iron exposed them directly. I haven't used it in a while, so I'm not sure if it does.

> If not, why do I need to declare hyper as a dependency when iron is already using it?

There's a difference between a direct and transitive dependency, it's not going to just inherently bring transitive stuff into scope.

> Perhaps I don't need to, and I was just unable to figure out how to get the `header!` macro from iron directly.

Yeah, it's been long enough since I used iron that I can't say, but this is theoretically possible.

> My initial expectation was that iron would either wrap or expose every part of hyper that I might need.

Yes, it is possible that iron's API isn't the best here.


Macros, and the importing and scoping of them, are a mess in Rust right now, and a major rewrite of the macro system is underway. Currently crates have no way to re-export macros from other crates, and #[macro_use] brings all macros in the imported crate into the local crate's namespace as-is (there's no way to prevent possible naming conflicts).


> major rewrite of the ... is underway.

That's a common response to criticisms of Rust. At this late date, that's a problem.


There are only two cases where that's been a thing. Macros, and nonlexical lifetimes.

The major rewrite for macros has already borne fruit with Macros 1.1, solving some of the more pressing issues.

The other one, nonlexical lifetimes, needed a complete overhaul of compiler internals. The main part of that work happened with MIR, but there's still work to be done. It's ongoing. This was work that was going to take a long time, and it did.


I mean, it would still have taken time. Should they have delayed 1.0 and stability for it? I don't see the benefit.

There have been plenty of quality of life features that were promised and delivered successfully. Many new ones have been discovered and are in active, visible progress, but are not implemented yet, so yes, you can always identify some pieces that are still being developed. The amount of resources is finite, but Rust is getting significantly better each year.


What else is that a response to? Macros have had this planned since before 1.0, with the current system specifically designed as a temporary workaround until that happens.


I agree; the macro scoping is just a small part of the rewrite, but it is a problem (especially ergonomically) that Rust should have got right the first time around.

Right now macros don't quite feel like they are part of the core language, but instead a hack or plugin slapped haphazardly on top of the compiler.


> I am not sure if this directly applies to your post, but it came to mind when I read the section about `extern crate`. It seems like the Cargo system is relied upon by almost all Rust users, and I am not sure if this is an ergonomics problem or a lack of understanding on my part.

This is definitely true, and I don't think it's negative. You can use Rust without Cargo and crates.io, but it feels very different. It's nice that that option exists, but I wouldn't want to work that way. Any sizeable project is likely to accumulate a large number of crates.io dependencies, but having written a lot of C++ code, the alternatives are "reimplement this thing myself" or "copy paste some code", neither of which are particularly good.


Oops, I used `this` when I should have said `the following.` I have updated the line in my post above.

I wasn't suggesting Cargo being relied upon by everyone was an ergonomics issue, but instead referring to the description in the remainder of my post.


> why do I need to declare hyper as a dependency when iron is already using it?

Without commenting on the particulars of the iron API (I haven't used it), it makes sense for library authors to be able to make an explicit decision as to whether or not their dependencies are going to be a part of their backwards compatibility contract or not. You could imagine a circumstance where a library A that you used depended on some library B and wanted to switch, internally, to instead depending on library C. If library A had exposed library B to its consumers as part of a public interface, that would be a breaking change, but it wouldn't be if they hadn't.


> it makes sense for library authors to be able to make an explicit decision as to whether or not their dependencies are going to be a part of their backwards compatibility contract or not

Absolutely. But as soon as those are required in order to use your library, shouldn't they be included?

> If library A had exposed library B to its consumers as part of a public interface, that would be a breaking change, but it wouldn't be if they hadn't.

Maybe I should have refined my original post. I guess it was way too long.

The hyper crate exposes `hyper::header::Vary`. To create a `Vary::Items` in my program, I also need to `use unicase::UniCase`. See the short example at [0].

It felt odd to me that I was required to have my own dependency on unicase when it seems like an internal issue for how Hyper represents the Vary header.

I have little experience with these dependency management systems. This seems like an irrelevant detail (they look just like strings to me! why doesn't it just take strings!) requiring me to explicitly include an unrelated package. I may have no other need for the unicase package.

Is this a normal thing, though? Since unicase is required in order to use this feature of hyper, shouldn't it just come along with it? Or the Vary item could just use Strings on its public interface so the user doesn't have to go through this extra work?

[0]: https://hyper.rs/hyper/v0.8.1/hyper/header/enum.Vary.html#ex...


I believe what hyper should do there is `pub use ::unicase` in the hyper crate root, allowing you to use unicase directly through hyper - since it's a hyper dependency and you need to ensure you're using the same versions as it.
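
Something along these lines (a sketch, assuming hyper keeps unicase as an internal dependency):

    // in hyper's crate root
    extern crate unicase;
    pub use unicase::UniCase; // re-export the type downstream users need

    // downstream, with only `hyper` in Cargo.toml:
    // use hyper::UniCase;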


Thank you! At least I'm not the only person who believes this would be the expected behavior. I'm still half expecting someone else to pop out and explain that I'm way wrong, though.


development "ergonomics is something close to my heart.

I'm a statically-typed, "easy-on-the-eyes" Python looking-loving guy, but there's something beyond that, when it comes to "ergonomics".

I'm really impressed with the Clojure community and parinfer, paredit(old lisp school) and just slinging code around with rapid feedback.


I forgot, what's Rust's opinion on OO? I would hope it's non-traditional. We need to get away from traditional OO and concentrate on what really matters: dispatch!


Trait specialization is Rust's answer to several of the abstractions provided in most OOP languages and the initial implementation (with conservative type resolution) has been in nightly for a while. I believe there is still a bit of work to do before stabilization but the feature is under active development. One of the primary challenges is the soundness and predictability of the algorithm that selects the right implementation for each type.

With specialization, instead of inheriting classes, traits can provide blanket and default implementations for types that depend on the traits implemented by that type. So, for example, you can provide several different "impl<T> ExTrait for T" with different trait bounds like "T: Clone", "T: Clone + Send", and "T: Clone + Send + Future<...>". Once the conservative inference algorithm is improved, this will be a much more powerful way to compose functionality than OOP but it will require a different approach to abstraction.

I think the plan is to eventually add trait objects that implement multiple traits which will cover more complex cases of dynamic dispatch like those in virtual inheritance.
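
A rough sketch of the specialization shape described above (nightly-only; trait and impls made up):

    #![feature(specialization)]

    trait Greet {
        fn greet(&self) -> String;
    }

    // Blanket default implementation for every type:
    impl<T> Greet for T {
        default fn greet(&self) -> String {
            "hello".to_string()
        }
    }

    // A more specific impl overrides the default:
    impl Greet for String {
        fn greet(&self) -> String {
            format!("hello, {}", self)
        }
    }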


Rust has a flat class level: a struct and its implementation methods. No inheritance here. A struct method can be called as struct_obj.func1().

Rust also has traits, which are like interfaces, with a set of method function signatures. A trait is implemented for a struct to provide concrete method implementations. Traits support inheritance.


Rust doesn't have classes or inheritance. Instead of classes you just have data, and you can give those data types methods via traits, which gives you composition.


Rust calls them "structs". They're like classes, but different. Rust also has "traits". They're like abstract classes, but different.


Structs are for data and traits are for methods?


A struct is for data, and the struct's implementation is for methods. A trait is like an interface, just a set of function signatures.


Why isn't it called interface then?


Because they are traits [1], not interfaces, and they also have some features from type classes [2].

When you create a class in an OOP language, you have to declare all interfaces it supports, and provide implementations for them (or mark the class as abstract). In Rust's case, you declare the data separately as a struct declaration, and then implement the traits you want in impl declarations. You can also implement traits for other traits, or for types from other modules, including the standard library.

Finally, compared to traditional OOP interfaces Rust's traits can require static functions in addition to instance methods and they can also implement methods with an overridable default implementation.
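
A minimal made-up example showing both (a "static" function plus an overridable default method):

    trait Shape {
        fn unit() -> Self;             // "static" function, no receiver
        fn area(&self) -> f64;
        fn describe(&self) -> String { // overridable default implementation
            format!("a shape with area {}", self.area())
        }
    }

    struct Square { side: f64 }

    impl Shape for Square {
        fn unit() -> Self { Square { side: 1.0 } }
        fn area(&self) -> f64 { self.side * self.side }
        // describe() is inherited from the default
    }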

[1] https://en.wikipedia.org/wiki/Trait_(computer_programming) [2] https://en.wikipedia.org/wiki/Type_class


Ah, I read the Wiki article and my impression of traits was "interfaces with implementation", and as far as I could tell, Rust traits have no implementation; they 'need' impl(ementations), so they seemed more like interfaces to me :)


You can have default implementations for methods if you'd like.


Traits are closer to interfaces than to abstract classes; IIRC abstract classes can have fields.


I've noticed this and while I don't use it much it has caught me off guard once or twice. And while it wasn't the end of the world, I wasn't too fond of the solutions I came up with. Is there a reason why they can't have fields? Why wouldn't they just be an "extension" to a struct/enum, modifying their memory representation?


Rust doesn't do single inheritance, basically.

If they had fields, what would you do when you implemented two of them on a struct? It also muddles up the ability to tack on traits in later crates.

Traits aren't supposed to be used the way you do single inheritance. Rust prefers composition over inheritance, so you combine structs and enums to get what you want there.
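
For example (made-up types):

    struct Engine { horsepower: u32 }

    // A Car has-an Engine rather than is-an Engine:
    struct Car {
        engine: Engine,
        wheels: u8,
    }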


> Is there a reason why they can't have fields?

The design isn't done yet: https://github.com/rust-lang/rfcs/pull/1546


Note that this is different from what's being asked, what's being asked is why they can't have fields like abstract classes that get appended to the original struct. That would make traits more like mixins.

That RFC lets you declare fields that the trait implementor is supposed to provide; implementing a trait will not automatically add a field, instead, you will be forced to add such a field yourself (unless one already exists) and link it to the trait field.

It's a more rusty way of addressing many of the same use cases, but it's not the same thing. It's basically sugar for declaring a getter and setter as trait/interface methods.


While it's not what I asked, I actually like the answer, as long as I can provide overridable default methods in the trait that use those fields. I can't think of any of my own use cases that couldn't adequately be covered by it.


Ah yeah, I read your post wrong, my bad!


Oh, yeah, like I said it solves many of the same use cases, it's just not the same thing :)


It is like Go and Haskell, in that it supports structs and interfaces and that's it.


A challenge I've had with Rust lately is factoring initialization code into separate functions. Because of stack-based allocation it has to stay in the main function. For example:

    pub fn do_many(iter: &mut Iterator<Item=String>) {
      let mut job_id = None;
      let job_id_env = env::var("MYAPP_JOB_ID");
      let mut log = if let Ok(val) = job_id_env {
        write_pid_file(&val);
        job_id = Some(val.clone());
        let home = env::var("HOME").expect("HOME must be set");
        let path = format!("{}/log/myapp-{}.log", home, val);
        let path = Path::new(&path);
        match File::create(&path) {
          Ok(f) => Box::new(f) as Box<Write>,
          Err(e) => {
            if format!("{}", e) == "No such file or directory (os error 2)" {
              Box::new(io::stdout()) as Box<Write> // oh well
            } else {
              panic!("Can't open log file: {}", e);
            }
          },
        }
      } else {
        Box::new(io::stdout()) as Box<Write>
      };

      // Commit the tx if we get these signals:
      let signal = chan_signal::notify(&[Signal::INT, Signal::TERM]);

      let negotiator = OpenSsl::new().unwrap();
      let url = env::var("MYAPP_DATABASE").unwrap_or("postgres://myapp_test:secret@localhost:5432/myapp_test".to_owned());
      let tls = if url.contains("@localhost") { TlsMode::None }
                else { TlsMode::Require(&negotiator) };
      let conn = Connection::connect(url, tls).expect("Can't connect to Postgres");
      let db = make_db_connection(&conn); // defines a bunch of prepared statements
      
      // now we can do stuff . . . 

    }
I would really like to have just this:

    let log = open_log();
    let db = prepare_db();
But those don't work, because all the temporary values are going to fall off the stack when the helper functions return. I wish rust were smart enough to make the functions put the values directly in the caller's stack frame. Alternately, I wish rust would let me say that all those temporary values should live as long as the returned thing (log and db), so it can keep them around even if I don't have variables for them.

I thought maybe macros would help here, since there is no new stack frame, but they still introduce a new scope that limits the lifetime of the temporary variables.

Even worse, if I want to write tests for functions that use the log and db, I need to repeat all that code again and again.

I think the answer is to use Box here? I haven't worked that out yet, but it definitely feels harder than it should. And even if I can make it work, I'm a little sad that I have to give up stack-based allocation.

I've also read that the answer might be OwningRef (https://crates.io/crates/owning_ref), but I'm not sure yet. I wish the Rust book had a section about it. It seems like Cow and Rc might also help me - I don't think so, but I'm not positive yet. Covering these allocation-related crates in a systematic way would be nice.

Anyway, I'm just a Rust newbie, but it sounds like the ergonomics effort is (partly) for newbies like me, so I'm trying to express my struggles in terms of a pattern that the Rust team could optimize for. It seems like something that people would hit quite often. I'm sure there is an answer to what I'm trying to do, so my point is that maybe it should be easier to find, or at least better documented.


> Alternately, I wish rust would let me say that all those temporary values should live as long as the returned thing (log and db), so it can keep them around even if I don't have variables for them.

Possibly I've missed something critical about your example, but I think you may want to create a struct Log, turn open_log() into Log::new(), and put the things the log needs (such as the log file) inside Log, owned by Log.


So I passed out a lot of upvotes, but I thought I would add a thank you to you and others trying to help me. :-)

I will try your suggestion re the log. The database example is trickier I think since the prepared statements have references to `conn`, so it can't move. Also it's annoying that I have to make `negotiator` even when I don't need it.


It sounds like you're coming from the land of GC. Rust helps a lot with managing memory but the pattern you're talking about is creating garbage which the GC would then have to collect.

Manually collect the things you need to hold onto, put them into a struct, and return that from open_log(). Do the same with prepare_db(). Then give the structs some methods for getting to the actual db object.

Alternatively use log4rs and rust-postgres. Or inspect their code to see how they handle it.


The answer is probably to return a value directly, although it's tough to say without a complete example.

But in general, if you have a routine that is creating stuff, and the stuff is meant for the caller to use, you create it in the routine and return it by value; the caller will then automatically "own" that value and either pass it somewhere else, or let it fall out of scope (which is when it'll be dropped and cleaned up for you).


It is not totally clear to me why you can't have those functions, or rather, those functions with a little bit of change to their signature. If you have this somewhere as a compilable example, I'm happy to look at it, but it's tough when there's so much stuff here that I don't know the signatures of.


I will certainly take you up on your offer!:

https://github.com/pjungwir/rust-initialization-functions



> I wish rust were smart enough to make the functions put the values directly in the caller's stack frame.

That's one reason why macros exist.

> I thought maybe macros would help here, since there is no new stack frame, but they still introduce a new scope that limits the lifetime of the temporary variables.

Why can't you just "return" the variables you need later on?


Having moved from Perl to Rust, the only thing that I miss is the postfix "if":

  return true if number > 2;
VS:

  if number > 2 {
    return true
  }
I find that one-liners like these are really ergonomic.


You can do something similar in Rust. 'if' is an expression, so it can appear on the right hand side of an assignment.

  let x = if number > 2 { "yes" } else { "no" };
It's not quite the same (if the 'else' were omitted, the whole expression would have type (), so the assignment wouldn't compile), but it comes close.

After experimenting with Rust and Elixir I've really come to like the 'everything is an expression' approach. That and pattern matching can make some things really expressive.
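
For example (a made-up sketch of match as an expression):

    let number = 5;
    let label = match number {
        0 => "zero",
        n if n > 2 => "big",
        _ => "small",
    };
    println!("{}", label); // prints "big"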


Ah yes... ternaries. Actually, it's two things I miss :)


That would be this in Rust: if number > 2 { true } else { false }

If it's the last expression in the function, it will implicitly return the value. Easily fits on one line.


My one puny objection to this is that rustfmt won't put the arms on a single line, but will always break and indent around the blocks. That makes Rust's expression-if considerably more verbose than what Perl has, or even the ternary operator.


Hm. I wonder if this is configurable, or if there's an RFC open for this.


I have found using conditionals as expressions super useful, like:

    fn foo(number: u32) -> bool {
        if number > 2 { true } else { false }
    }

I mean, obviously in this situation you could just have the body of the function be `number > 2`, but I write a lot of Rust code that does similar things now.


In Perl, it's fairly idiomatic to use a postfix condition on a return like that when doing early return. Some of that is obviated by Rust being typed, some is not. e.g.

    sub compute_interest( $amount, $interest ) {
        return $amount if $interest == 0; # Quick return
        die "We don't allow computation of negative interest rates" if $interest < 0; # Throw an exception
    
        # Do the actual work
        ...
    }
Edit: Also, it's worth noting that Perl enforces some behavior on this by only allowing postfix conditionals to follow a single statement, not blocks, so it's not just a regular conditional with the order reversed.


I've always disliked this aspect of Perl... I prefer control flow to be obvious.


Hiding the actual control part on the right is pretty bad, yes.

Other than that, early returns can simplify flow a lot - otherwise you may have to do massive nesting ifs or many flags. Or even goto or exceptions.
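
For what it's worth, the early-return shape from the Perl example above looks like this in Rust (a sketch):

    fn compute_interest(amount: f64, interest: f64) -> f64 {
        if interest == 0.0 {
            return amount; // quick return, no nesting below
        }
        if interest < 0.0 {
            panic!("We don't allow computation of negative interest rates");
        }
        // do the actual work
        amount * (1.0 + interest)
    }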


It's less bad than it seems, since it's a common idiom in Perl, so you're used to looking at it. It can be quite bad if abused, but so can so much in Perl.

When used with a return or die (or my personal favorite for debugging, 'warn "FOO" if $ENV{DEBUG};'), the fact that flow is affected is obvious from the very first characters in the statement, so it's obvious to then look for when it applies.

Like so many features of languages, how it looks from the outside compared to how it looks to those that are well versed in the language can be quite different (not to say that everything that looks like a wart in Perl is okay once you get used to it; every language has real warts). That's another aspect to this whole thing: how much do you emphasize ergonomics that are primarily for learning and novices? Features focused at novices to the expense of those familiar with the language are interesting, because they may draw a lot of people to your language, but you may not retain them very well.


yep, that works... but having it postfix affords salience


ProTip:

In both languages you can do the following:

    return number > 2;


Not really the same thing. The Perl version is generally used as an early abort either from a subroutine or a loop.

    return if($number > 2);
Is equivalent to:

    if($number > 2){
        return;
    }


I didn't know that. Thanks for explaining it.


But that's different. Perl's variant isn't that intuitive.


Also that was just an example. Another along the same line:

  x = 42 if y > 7;
Rather than

  x = match y > 7 { true => 42, false => 0 };
Or even:

  foo() if bar();
Rather than:

  if bar() { foo() }


Given how crucially important scopes are to Rust, this would be a uniquely poor fit. :P


Yeah, I totally understand why not. Just my comments from the peanut gallery :)


Can't trust such ideas. Especially when I read that "conventions are good". No, "conventions" are the plague, and that mod.rs example is an example of a crappy thing in Rust, difficult to grasp for a newbie, exactly because it's just "convention". I hope they will not destroy Rust by adding more "conventions" or by switching to the "implicitness over explicitness" camp of noobs.



