Rust is mostly safety (graydon2.dreamwidth.org)
487 points by awalGarg on Dec 29, 2016 | 458 comments



I'm a lowly ancient Java programmer and I think Rust is far far more than safety.

In my opinion Rust is about doing things right. It may have been about safety at first but I think it is more than that given the work of the community.

Yes, I know there's a right tool for every job and it's impossible to cover all use cases, but IMO Rust is striving for iPhone-like ubiquity.

I have never seen a more disciplined and balanced community approach to creating a PL. Everything seems to be carefully thought out and iterated on. There is a lot to be said for this (although ironically I suppose one could call that safe)!

A PL is more than the language. It is the body of work, the community, and the mindshare.

If Rust were only concerned with safety, I don't think so much work would be done on making it consumable for everyone, with continuous improvements to compiler error messages, easier syntax, and improved documentation.

Rust is one of the first languages in a long time that makes you think different.

If it is just safety... safety is one overloaded word.


I think this comment and the OP are both correct. The point of Rust is safety, in the sense that memory safety should be the default for all programming languages, and has been the default for all non-systems languages since the 90s. The only holdouts have been because people used to claim that memory safety wasn't possible without making a language unusably slow, which Rust has disproven. In all my years of teaching Rust, I can't count how many times I've told people that you can have a memory-safe systems language without garbage collection and have them look me in the eye and say, "what, no, that's supposed to be impossible", and I still get a kick out of it every time.

The importance of Rust is that it's raised the baseline for low-level languages in the modern age. If any future systems languages emerge that don't feature memory safety, it will have to be a deliberate choice that must be defended rather than just an implicit assumption of how the world works.


I really want to love Rust, and while I understand the Rust borrow checker in theory, actually using it in practice has been a major headache. I tried Rust on a simple terminal-based project, and after a week of feeling that I was getting nowhere I switched to Go and had a proof of concept in several hours.

With that said, can you recommend a good source to really understand best practices and patterns for ownership and borrowing? I feel that's the biggest hurdle to using Rust (at least in my case).


Alas, I think a week is too short to give Rust. Go will get you more pay-off in the short term, but as you internalise the rules of Rust you'll really start to reap the benefits. Unfortunately being an experienced user, I'm not aware of a single online source for learning this stuff - I mainly teach people directly in person or on IRC.


Agreed. It took me about two months to really understand Rust, and I was coming from a background of languages like C++ and Scala. It pays off, though, for what I want to do with computers.

The rise of "it demos well so we should use it" is in a lot of ways troubling. The inflection point for productivity doesn't need to be "five minutes in" to be worthwhile if you're doing something that is, itself, worthwhile.


> Unfortunately being an experienced user, I'm not aware of a single online source for learning this stuff - I mainly teach people directly in person or on IRC.

How about http://rust-lang.github.io/book/ ?


Mind sharing the source code? I can whip out a Rust version in the same amount of time. I have been using Rust for two years. I never worry about the borrow checker as it never gives me problems.


If you know Swift/ObjC pretty well, with its ARC reference-counting memory management, do you pick up the borrow checker faster?


AFAIK Swift doesn't really have references or anything to enforce unique ownership. If you've ever used a language with pointers before, including the simple pointers that Go has, then that's a good start. If you further understand how escape analysis works in Go (or the various escaping annotations in Swift), then imagine a language where every variable must never escape, and where this is enforced by the compiler.
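A minimal sketch of that "nothing escapes" rule (a toy example, not from the thread): borrows must stay within the owner's lifetime, while ownership itself can move out freely.

```rust
// Borrows of `a` and `b` stay inside this function; no reference escapes.
fn longest_len(a: &str, b: &str) -> usize {
    a.len().max(b.len())
}

// Returning an owned String is fine; returning a reference to a local
// would not compile:
// fn broken() -> &'static str { let s = String::from("x"); &s } // ERROR: `s` is dropped
fn make_owned() -> String {
    String::from("escapes by move, not by reference")
}

fn main() {
    assert_eq!(longest_len("hi", "world"), 5);
    assert!(!make_owned().is_empty());
}
```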

Mostly I think that the fear of the borrow checker has become more meme than truth at this point. The difficulty with Rust is that it combines things found in different languages, so no matter who you are, you probably have to learn something: a strong type system, unique ownership, and pointers - and if you're coming from a language like Python or JavaScript you may know none of this! But that's what we're here for (that's where I came from!), and we like to help. :)


Seeing that people keep testifying to the fact that the borrow checker gives them problems, I wouldn't say it's just a meme. It's easy and tempting to imagine problem points disappearing over time, but realistically, some portion of people will always run into it, because it's unique and fundamentally a bit complex. It's better to spread the message that running into borrowing problems is a common, real, yet temporary hangup that can be overcome.


I would say that this is a common message, to a degree. What I commonly see people expressing about Rust is that it took them a while to internalize the borrow checker, possibly a few weeks, and where before that point it was painful to work with to some degree, afterwards it was a boon as it helped them more quickly spot problems and forced them to consider the problem more closely before committing to code that might need to be scrapped.

As someone who has yet to do more than do minimal dabbling in the language, this is a very positive message. It expresses that there is some work to learning this so don't be put off if you experience it, it's normal, and that the work required to learn it pays off in the end.

That's probably a more appropriate message than that it's not hard. I don't think it's appropriate to express to people that learning pointers in C and C++ is "easy". It's not "hard", but it's not necessarily easy for some people. It requires a specific mental model, and depending on how they learned to program, it may be more or less easy for them to wrap their head around. Afterwards, it's easy and makes sense. I assume the borrow checker follows a similar learning hurdle. That doesn't mean we should forget what it's like before we've learned it though (and at this point, there's probably a lot of people in the midst of learning rust that haven't quite fully internalized the borrow checker).


I'm inclined to agree. It took me a week or two of daily usage to finally "get used" to the borrow checker. It's a bit of a paradigm shift, to be sure, but it's not insurmountable. In fact, I think learning Rust has made even my C code better, because now I'm in the habit of thinking thoroughly about ownership and the like, which is something that I did to an extent before (because you have to to write robust software without a GC), but it was never explicit and I never would've been able to articulate the rules like I can now.

That said, I still occasionally have problems where I feel like I'm doing something "dirty" or "hacky" just to satisfy the borrow checker. It's easy to program yourself into a corner and then find yourself calling `clone()` (the situation, I've been told, has gotten much better in recent releases with improvements to the borrow checker, but alas I haven't had a chance to play much with Rust in nearly a year).
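A small sketch of the `clone()` escape hatch described above (names and the scenario are invented for illustration): holding a borrow of a Vec element blocks mutation of the Vec, and cloning up front releases the borrow.

```rust
// "Programming yourself into a corner": a shared borrow of an element
// blocks any mutation of the Vec for as long as the borrow lives.
fn duplicate_first(items: &mut Vec<String>) {
    // let first = &items[0];        // shared borrow of the Vec...
    // items.push("x".into());       // ...ERROR: can't mutate while borrowed
    let first = items[0].clone();    // clone up front: the borrow ends here
    items.push(first);               // now mutation is fine
}

fn main() {
    let mut v = vec!["a".to_string()];
    duplicate_first(&mut v);
    assert_eq!(v, vec!["a".to_string(), "a".to_string()]);
}
```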

Another thing that I still find difficult is dealing with multiple lifetimes in structs, to the point that I usually just say "to hell with it" and wrap everything in an `Rc<T>`. And sometimes there's simply no safe way (afaict) to do some mutation that I want to do without risking a panic at runtime (typically involving a mutable borrow and a recursive call), which leads to a deep re-thinking of some algorithm I'm trying to implement. That's not Rust's fault, though—it's a real, theoretical problem that arises in the face of mutation. In time, I'm sure there will be well-understood patterns for handling such cases.
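The `Rc<T>` workaround mentioned above might look like this (a hypothetical `Config`/`App` pair, just for illustration): instead of threading a lifetime parameter through the struct, shared ownership sidesteps the lifetime plumbing at the cost of a reference count.

```rust
use std::rc::Rc;

struct Config {
    name: String,
}

// Instead of `struct App<'a> { config: &'a Config }` and the lifetime
// annotations that follow it everywhere, share ownership via Rc:
struct App {
    config: Rc<Config>,
}

fn main() {
    let config = Rc::new(Config { name: "demo".into() });
    let a = App { config: Rc::clone(&config) };
    let b = App { config: Rc::clone(&config) };
    assert_eq!(a.config.name, b.config.name);
    // Three owners: `config`, `a`, and `b`.
    assert_eq!(Rc::strong_count(&config), 3);
}
```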


I absolutely agree.

If you look at Rust from a systems programmer perspective and compare it with the systems languages OP lists then, yes, safety is THE most radical feature.

But Rust can compete on so many more levels: web services and user-facing applications, for example. Languages competing in that space usually bring memory safety, so it's kind of a non-issue. Safety enables Rust to be a viable choice for these tasks, but it needs more than that to be on par with the other languages. And Rust's got plenty of things going for it, so there's nothing wrong with retiring the safety card (since that is expected anyway) and painting Rust as a language that is actually fun to work with.


>But Rust can compete on so many more levels

How? There are languages with more expressive, higher-level type systems (Haskell/OCaml, presumably). There are languages with much more mature libraries, ecosystems, and tooling (C#/Java). There are languages with both (F#/Scala).

What is it that makes Rust a good applications programming language? You said it yourself: GC doesn't really matter that much in this space, GC-based languages are just more elegant, and the tooling is way more mature. Runtime also doesn't matter that much and, with the recent changes to .NET, can be avoided anyway.

This Rust fanboyism is turning into the new Node hype: "use Node for everything; Node is web-scale fast because it uses event-loop IO instead of thread-based IO" becomes "use Rust for everything because the type safety is literally the best and it's the only language with a strong type system out there". I get it, it's new and shiny - I like it too - and it has a strong argument to make in systems programming: the designers are doing a good job of making trade-offs that let you retain low-level control while still having memory safety. But these trade-offs are just that, and they come at the expense of higher-level stuff; higher-level languages don't need to make them because they don't pretend to be systems programming languages.


Yeah, on any given dimension, even safety, you'll find languages that are stronger than Rust. It excels IMO because of the balance it finds between type system, safety, functional vs imperative code, etc. It puts all these together in one package that I feel like I can use for actual work. I don't know of any other solid contenders here, except maybe Swift.

I have to agree that the "Rust all the things!" game is getting old.


When I write in Rust, I have the feeling "that's how programming should work". I don't know how to express it in a more scientific way. Error handling, pattern matching, the Result type - it's all how it should work. I know some other languages have similar features, but Rust also has race-condition protection (a very important thing), a good package manager out of the box, a testing tool (cargo test), a very smart compiler, and great performance - just all the best.
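The Result-plus-pattern-matching style praised above looks roughly like this (a toy `parse_port` function, invented for illustration): errors are ordinary values you must handle, not exceptions you can forget.

```rust
// Parsing can fail, so the signature says so: Result<u16, String>.
fn parse_port(s: &str) -> Result<u16, String> {
    s.parse::<u16>()
        .map_err(|e| format!("bad port {:?}: {}", s, e))
}

fn main() {
    // Pattern matching forces both cases to be handled explicitly.
    match parse_port("8080") {
        Ok(p) => assert_eq!(p, 8080),
        Err(e) => panic!("unexpected: {}", e),
    }
    assert!(parse_port("not-a-port").is_err());
}
```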


Lots of good programming languages out there today honestly. Rust is great. I recommend everyone learns themselves a language for each need.

I've got Clojure/Clojurescript, Rust, Python, HTML5 JavaScript, F#/C# and Java.

Clojure/Clojurescript is used for long running processes and web development. So backend/frontend stuff. I also use it for fun experiments, the REPL is great at that.

Rust is used when performance matters, when I want the simplicity of running a native executable with no heavy dependency, and when I need to write low level components.

Python is my go-to scripting language for quick scripts, hacks, and messing around. It also serves my scientific computing needs, data mining, visualization, etc. It's also the best language I've found for doing coding interviews or practice. It's so easy to whiteboard Python.

HTML5 JavaScript is used for most web development, for customizing my editor Atom.io, and other things. With Metro apps and other OSes moving towards an HTML5 JavaScript stack, it's also quite good for some user apps.

F# is good for simply knowing an ML language and when I want more functional flair in the .Net world.

C# and Java are used to pay the bills, as they are easiest to find jobs for.

There would be valid alternatives for most of those categories, but I highly recommend everyone invest in knowing one language for each one.


I get the opposite feeling. When I write in Rust, I have the feeling "that's how programming shouldn't work". The syntax is awkward and you are expected to spend weeks to get anything done. Ultimately, it seems to come down to Rust fans with a sunk-cost fallacy: "I spent weeks learning this, so it must be worth it."


When I was learning C back in 1990, it took me a long time to get comfortable with it, and that's after years of programming in assembly language (8-bit, 16-bit, 32-bit). It took me some time to stop writing assembly in C and start writing C in C, if that makes sense.

I haven't switched to Rust yet (I still deal with C/C++ at work on "legacy" code, and I'm having too much fun with Lua for my own stuff) but I don't expect to pick it up "quickly" and I'm sure I'll be trying to write C in Rust for some time. It comes with the territory.


Just because some people have a different opinion than you doesn't mean it is due to fallacious reasoning. Ultimately, it seems to come down to Rust haters with a sunk-cost fallacy: "I spent years writing bad code in bad languages, so I don't want to give that up to do things better."


Please be more patient :) Flamewars will not help us, definitely. I'm nobody to criticize your way to communicate, but for the sake of Rust community reputation - please let's avoid "Rust vs X" wars.


Please be more patient :) Replying to posts you didn't read will not help us, definitely.


A few weeks is not enough time to get it when you don't like the ideas of the language initially. I tried to learn one dynamic language (I won't name it, to avoid the fans' fury) and realized very quickly that it's not my language. The only difference: I never thought about a "sunk-cost fallacy" - there are languages I can use and there are others, and this diversity is necessary for healthy evolution.


I spent weeks learning other languages, just as I have Rust, but I still prefer to write code in Rust.


> This Rust fanboyism is turning into the new Node hype: "use Node for everything; Node is web-scale fast because it uses event-loop IO instead of thread-based IO"

If what you are suggesting is true, isn't the biggest problem by far the fanboyism posing as knowledge? Wouldn't complaining about Rust be like living in the age of alchemy, and complaining about someone's particular potion? Isn't the epistemological squishiness of the entire field the biggest problem by far?


No, that's cache invalidation. And naming things.


IMO the biggest plus for Rust is actually Cargo. Building, versioning, and sharing modular code is essentially copy/paste in C/C++, and compared to that Cargo is light-years ahead.
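For contrast with the C/C++ copy/paste workflow, a Cargo manifest declares a dependency in one line and `cargo build` fetches, version-resolves, and compiles it (the package name and version below are purely illustrative):

```toml
# Cargo.toml - a minimal sketch of declaring a dependency
[package]
name = "demo"
version = "0.1.0"

[dependencies]
serde = "1.0"   # illustrative crate and version; resolved from crates.io
```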

I actually wish Rust would accept its systems niche even more, move the stdlib to crates, and make no_std the default mode. Personally, I see no reason to market Rust for web apps or GUI stuff; it can't/won't compete with Rails/Qt there for years to come, if ever.


I couldn't agree more with this. Cargo (and the general desire to think and work hard on ease-of-use) is a huge part of what makes Rust a pleasure. It alone would be enough for me to steer someone with experience in neither to Rust over C++ for lots and lots of use cases.


I haven't looked at rust, and this sentiment is exactly what has kept me away. I watched perl, java, python, ruby, node and c++ (boost) fall into the trap of "we know better than the end-user/developers/sysadmins/os vendor, so let's reinvent dpkg poorly".

Why should cargo be any different? It is solving a problem I don't have (Debian, Ubuntu, OpenBSD, and freaking illumos all have acceptable package management), and creating a massive new problem (there is a whole thread below this one talking about rust dll hell between nightly and stable, and the thread links to other HN articles!). From my perspective, all this work is wasted just because some developers somewhere use an OS that doesn't support apt or ports.

Sorry this is so ranty, but I really want to know if anyone has had luck using rust with their native package manager.


TL;DR: I think language-centric package managers do a better job at versioning packages per-project. Here's an anecdote to explain what I mean.

-----

Let's say I want to build a piece of software that depends on some software library written in C at version 1.0.1. It's distributed through my system package manager, so I sudo apt-get install libfoo.

~~ some time later ~~

Now let's say I want to build a different piece of software that also depends on foo, but at version 1.2.4. I notice that libfoo is already installed on my system, but the build fails. After a quick sudo apt-get install --only-upgrade libfoo, this piece of software now builds.

~~ Even later ~~

When I revisit the first project to rebuild it, the build fails, because this project hasn't been updated to use the newer version yet.

I'm fairly inexperienced with system package managers, but this is the wall I always hit. How should I proceed?


I'm arguing that you should just have one package manager, so in my world the only way for this to happen is if both the packages you're installing are tarballs that no one has bothered to port to your system. If the language-specific package manager did not exist, then there would be a better chance the package would exist for your OS already.

Anyway, Debian/Ubuntu has multiple fallbacks for this situation:

a. ppa's

b. parallel versions for libraries that break API compatibility (libfoo-1.0...deb, libfoo-1.2...deb that can coexist).

c. install non-current libfoo to ~/lib, and point one package at it (not really debian-specific)

d. debootstrap (install a chroot as a last resort -- this is better than "versioning packages per-project" from an upgrade / security update point of view, but worse from a usability perspective -- you need to manage chroots / dockers / etc).

I suspect the per-project versioning system is doing b or d under the hood. b is clearly preferable, but hard to get right, so you get stuff like python virtual environments, which do d, and are a reliability nightmare (I have 10 machines. The network is down. All but one of my scripts run on all but one of the machines...)

A long time ago, I decided that I don't have time for either of the following two things:

- libraries that frequently break API compatibility

- application developers that choose to use libraries with unstable APIs that also choose not to keep their stuff up to date.

This has saved me immeasurable time, as long as I stick to languages with strong system package manager support.

Usually, when I hit issues like the one you describe, it is in my own software, so I just break the dependency on libfoo the third time it happens.

When I absolutely have to deal with one package that conflicts with current (== shipped by os vendor), I usually do the ~/lib thing. autotools support something like ./configure --with-foo=~/lib. So does cmake, and every hand-written makefile I've seen.

[edit: whitespace]


> talking about rust dll hell

No, what that thread is talking about is that somebody wrote a library to exercise unstable features in the nightly branch of the Rust compiler, and that inspired somebody else to write a sky-is-falling blogpost claiming that nightly Rust was out of control and presented a dozen incorrect facts in support of that claim, so now we have to bother refuting the idea that nightly Rust is somehow a threat to the language.

As for the package manager criticism, the overlooked point is that OS package managers serve a different audience than language package managers. The former are optimized for end-users, and the latter are optimized for developers. The idea that they can be unified successfully is yet unproven, and making a programming language is already a hard enough task that attempting to solve that problem is just a distraction.


From the thread, I got the impression that it is not trivial to backport packages from the nightly tree to the stable tree--people are talking about when packages will land in stable, but I'd expect that to all be automated by test infra, and too trivial for developers to work around to warrant a forum thread.

Anyway, it sounds like I stepped on a FUD landmine. Sorry.

It sounds like you work in this space. From my perspective, debian successfully unified the developer and end-user centric package manager in the '90s, and it supports many languages, some of which don't seem to have popular language-specific package managers.

What's missing? Is it just cross-platform support? I can't imagine anything I'd want beyond apt-get build-dep and apt-get source.


> Anyway, it sounds like I stepped on a FUD landmine.

That's the problem with FUD, it gets everywhere and takes forever to clean up. :)

> I got the impression that it is not trivial to backport packages from the nightly tree to the stable tree

Let's be clear: stable is a strict subset of nightly. And I mean strict. All stable code runs on nightly, and if it didn't, that would mean that we broke backwards compatibility somehow. And even if you're on the nightly compiler, you have to be very explicit if you want to use unstable features (they're all opt-in).

Furthermore, there's no ironclad reason that any given library must be on nightly, in that boring old every-language-is-Turing-complete way; people use unstable features because they either make their code faster or because they make their APIs nicer to use. You can "backport" them by removing those optimizations or changing your API, and though that seems harsh, note that people tend to clamor for stable solutions to their problems, so if you don't want to do it then somebody else will fork your library and do it and steal your users.

There are strong incentives to being on stable: since stable code works on both nightly and stable releases, stable libraries have strictly larger markets and therefore mindshare/userbase; and since stable code doesn't break, library maintainers have much less work to do.

At the same time, the Rust developers actively monitor the community to find the places where relatively large numbers of people are biting the bullet and accepting a nightly lib for speed or ergonomics, and the Rust developers then actively prioritize those unstable features (hence why deriving will be stable in the February release, which will get Serde and Diesel exclusively on stable, which together represent the clear plurality of reasons-to-be-on-nightly in the wild).

> What's missing?

I've already typed enough, but yes, cross-platform support is a colossal reason for developers favoring language-specific package managers. Another is rapid iteration: it's way, way easier to push a new version of a lib to Rubygems.org than it is to upstream it into Debian. Another is recency: if you want to use the most recent version of a given package rather than whatever Debian's got in stock, then you have to throw away a lot of the niceties of the system package manager anyway. But these are all things users don't want; they don't want to be bleeding-edge, they don't want code that hasn't seen any vetting, and they really don't care if the code they're running isn't portable to other operating systems.


> From my perspective, debian successfully unified the developer and end-user centric package manager in the '90s

I think a more accurate assessment would be that both Red Hat and Debian extended their package support through repositories to enough packages that developers often opt for the easy solution and use distribution packages instead of language-package-manager-provided ones, because it's easy, and there are some additional benefits if you are mainly targeting the same platform (and to some degree, distribution, if that applies) that you are developing on.

Unfortunately, you then have to deal with the fact that some modules or libraries invariably get used by code parts of the distribution itself, making their upgrade problematic (APIs change, behavior changes, etc). This becomes problematic when using or targeting a platform or distribution that provides long term support, when you could conceivably have to deal with 5+ year old libraries and modules that are in use. This necessitates multiple versions of packages for a module or library to support different versions sometimes, but that's a pain for package managers, so they tend to only do that for very popular items.

For a real, concrete example of how bad this can get, consider Perl. Perl 5.10 was included in RHEL/CentOS 5, released in early 2007. CentOS 5 doesn't go end of life until March 2017 (10 years, and that's prior to extended support). Perl is used by some distribution tools, so upgrading it for the system in general is problematic and needs to be handled specially if all provided packages are expected to work (a lot of things include small Perl scripts, since just about every distro includes Perl). This creates a situation where new Perl language features can't be used on these systems, because the older Perl doesn't support them. That means module authors don't use the new features if they hope to have their module usable on these production systems. Authoring modules is a pain because you have to program as if your language hasn't changed in the last decade if you want to actually reach all your users. Some subset of module authors decide they don't care; they'll just modernize and ignore those older systems. The package managers notice that newer versions of these modules don't work on the older systems, so core package refreshes (and third-party repositories that package the modules) don't include the updates. Possibly not the security fixes as well, if it's a third-party repository and they don't have the resources to backport a fix. If the module you need isn't super popular, you might be SOL with a prepackaged solution.

You know the solution enterprise clients take for this? Either create their own local package manager repo and package their own modules, and add that to their system package manager, or deploy every application with all included dependencies so it's guaranteed to be self sufficient. The former makes rolling out changes and system management easier, but the latter provides a more stable application and developer experience. Neither is perfect.

Being bundled with the system is good for exposure, but can be fairly detrimental for trying to keep your user base up to date. It's much less of a problem for a compiled language, but still exhibits to a lesser degree in library API change.

Which is all just a really long-winded way of saying the problem was never really solved, and definitely not in the '90s. What you have is that the problem was largely reduced by the increasing irrelevancy of Perl (which, I believe, was greatly increased by this). Besides Python, none of the other dynamic languages (which of course are more susceptible to this) have ever reached the ubiquity Perl did in core distributions. Python learned somewhat from Perl with regard to this (while suffering it at the same time), but it also has its own situation (2->3), which largely overshadows this, so it's mostly unnoticed.

I'm of the opinion that the problem can't really be solved without very close interaction between the project and the distribution, such as .Net and Microsoft. But that comes to the detriment of other distributions, and still isn't the easiest to pull off. In the end, we'll always have a pull between what's easiest for the sysadmins/user and what's easiest for the "I want to deploy this thing elsewhere" developers.


Cargo isn't competing against nor replacing distribution package managers. Cargo is a build tool, not a package manager. You're free to package Rust software the same way you do non-Rust software for specific distributions. They are entirely different unrelated things with no overlap. Cargo solves a lot of problems that we've been facing for a long time. We have the Internet now, so let's use it to speed up development.


apt & ports don't follow you into other OSes. Language-specific package managers do, without requiring entanglement with the OS or admin permissions. All you need is sockets and a filesystem.

I think the language-specific ones will win for developer-oriented library management for platform-agnostic language environments.


Apt and/or ports follow you to every OS kernel I can think of (Linux, MacOS, Windows, *BSD, the Solarises), though the packaging doesn't always (for instance, the OpenBSD guys have a high bar, but that's a feature).

My theory is that each language community thinks it will save them time to have one package manager to rule them all instead of just packaging everything up for 4-5 different targets.

The bad thing about this is that it transfers the burden of dealing with yet another package manager to the (hopefully) tens or hundreds of thousands of developers that consume the libraries, so now we've wasted developer-centuries reading docs and learning the new package manager.

Next, the whole platform agnostic thing falls apart the second someone tries to write a GUI or interface with low-level OS APIs (like async I/O), and the package manager ends up growing a bunch of OS-specific warts/bugs so you end up losing on both ends.

Finally, most package manager developers don't seem to realize they need to handle dependencies and anti-dependencies (which leads you to NP-Complete land fast), or that they're building mission-critical infrastructure that needs to have bullet proof security. This gets back to that "reinvent dpkg poorly" comment I made above.

In my own work I do my best to minimize dependencies. When that doesn't work out, I just pick a target LTS release of an OS, and either use that on bare metal or in a VM.

Also, I wait for languages to be baked enough to have reasonable OS-level package manager support. (I'm typing this on a devuan box, so maybe I'm an outlier.)


Funny, I've found cargo to be one of the major negatives of rust.

Is there anyone out there saying "builds only when connected to the internet so it can blindly download unauthenticated software ... SIGN ME UP!"?


> In my opinion Rust is about doing things right.

On the other hand there is a quite dark cloud on the horizon with the stable vs nightly split. You can't run infrastructure on nightly builds; or add nightly builds to distributions.


There are very few libraries that are nightly-only in Rust. Clippy is a big one, but clippy is a tool, not a library, so it's no big deal (we're working on making it not require a nightly compiler).

Rocket is a recent one. I talked with the owner of Rocket and one of their goals was to help push the boundaries of Rust by playing with the nightly features. With that sort of meta-goal using nightly is sort of a prerequisite. Meh.

You can use almost all of the code generation libs on stable via a build script. Tiny bit more annoying, but if it's a dependency nobody cares. A common pattern is to use nightly for local development (so you get clippy and nicer autocompletion) and make the library still work on stable via syntex so when used as a dependency it just works.
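The build-script pattern mentioned above can be sketched roughly like this (a hypothetical build.rs; the generated constant and file name are invented for illustration): emit Rust source at build time and include it, instead of relying on unstable compiler plugins.

```rust
// build.rs sketch: generate source code at build time on the stable
// compiler, rather than using unstable compiler plugins.
use std::fs;
use std::path::{Path, PathBuf};

// Write a generated source file into the given output directory.
fn generate(out_dir: &Path) -> std::io::Result<PathBuf> {
    let dest = out_dir.join("generated.rs");
    fs::write(&dest, "pub const ANSWER: u32 = 42;\n")?;
    Ok(dest)
}

fn main() {
    // In a real build script, the output directory comes from the OUT_DIR
    // environment variable that Cargo sets; temp_dir() stands in here.
    let dest = generate(&std::env::temp_dir()).expect("write failed");
    assert!(dest.exists());
    // The crate itself would then pull the generated code in with:
    // include!(concat!(env!("OUT_DIR"), "/generated.rs"));
}
```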

The most used part of the code generation stuff will stabilize in 1.15 so it's mostly not even a problem.


I'm sorry your comment has gotten the response it has.

The looming dark cloud of stable vs. nightly only looks like a dark cloud to those outside the Rust community.

The article that made its way up Hacker News awhile ago (https://news.ycombinator.com/item?id=13251729) got pretty much no traction whatsoever in the Rust community.


I have found the split has only gotten better with time. It used to be that most package maintainers assumed you were using nightly/beta. The last holdouts I see are Diesel and Serde, which have instructions for using nightly. Even then, they realize no one wants to ship code on nightly, so they provide directions for building with stable Rust. Once the procedural macros stuff is stabilized they can stop.

I have been extremely pleased with the rust community and the rust maintainers.

And no I was not paid by them to say this... :)


From my impression, they'll be working with the stable compiler in a few weeks.


That article had so many inaccuracies and/or was hyperbolic. Most of the points there are plain wrong.

http://xion.io/post/programming/rust-nightly-vs-stable.html#...


I think the Rust community didn't take the discussion very far, because it felt like the comparison to Python 2 vs 3 was out of proportion. The single most important difference is that stable Rust code is always compatible with nightly, and comparisons that gloss over that difference feel frustrating. (Other folks have raised more detailed objections too, like the macro features that are about to land in stable.)


I don't see a split here. I know lots of cool ideas have been posted to Hacker News lately, that showcase something possible in future versions of Rust. But I seriously hope people keep these libraries as showcases, not production-quality tools. I'd say this nightly/stable split is a bit out of proportion.

In 2017 we'll get Tokio and lots of Tokio-ready libraries. Some of them already work and compile with stable Rust. And maybe in the end of the year we can take a proper look what we can do with Rocket, or Diesel...


Tokio runs entirely on stable, though 1.0 won't happen until some language changes land.

Diesel does work on stable today, but its nightly features will be on stable in five weeks with 1.15.


> Tokio runs entirely on stable, though 1.0 won't happen until some language changes land.

Are you talking about impl Trait or some other language changes?


impl Trait is what I'm thinking of, yeah.


There's little evidence for a "dark cloud" on the horizon because of Rust nightlies, besides people complaining of dark clouds. I'd suggest providing evidence of problems with nightlies and filing bugs about things that need to be stabilized.


As I noted in https://news.ycombinator.com/item?id=13277477, there are really only two major Rust libraries that are easier to use on nightly Rust (serde and diesel), and both can already be used on stable using a `build.rs` file (which takes about 10 minutes to set up–see the docs for the projects). You'll be able to get rid of `build.rs` in about 5 weeks when Macros 1.1 lands.

That said, if you want to play with nightly Rust, it's pretty trivial. Rustup https://www.rustup.rs/ makes it easy to install multiple Rust toolchains and switch between them, in a fashion similar to rvm and rbenv.


How so? Most big projects have both stable releases and nightly builds that contain unfinished features. Why is it particularly troublesome for Rust?


I am not sure it's really a split when you can build stable on nightly. Nightly is great for experimentation.


Although others don't see the 'dark cloud', as a long time user of Rust it has definitely put me off pushing for it too hard at work. That said, Diesel and Serde are now the only big libs that require nightly, but this is due to them relying on syntax extensions which are going to be stabilized in the next release. So if I were to start a new microservice using those libs it would be ready to go on stable in a few weeks time, which is super exciting. The only other thing would be async IO. Tokio is making big strides in that direction - not sure what the time-frame is on being able to use it on a server nicely though.


> add nightly builds to distributions

rustup run nightly cargo build --release

You were saying? In any case, there's no such thing as a stable vs. nightly split. There are pretty much zero libraries that require nightly, and the few that have a nightly option for optional features will no longer require nightly after the macros update lands.


I think Rust is mostly about safety in the same way that skydiving is mostly about safety. Having safety features that you know you can rely on allows you to take risks that you normally wouldn't in order to accomplish some really awesome things.

(I guess in this analogy C is a parachute that you have to open manually, while Rust is a parachute that always opens at exactly the right altitude, but isn't any heavier than a normal parachute.)


Rust safety is ultimately a productivity boost.

For example, if I have a big string, I may create a hashmap where both keys and values are references to portions of that original string.

Then, I may pass this hashmap to another function that will transform this hashmap into structs that contain reused portions of those string references.

The Rust compiler will make sure that the original string is not destroyed or moved in memory while this is happening.

While it is certainly possible to do this in C and C++, the development cost there is way higher. In C++, one would be sensible to stick to a slower version that copies and allocates data, unless he/she is really sure that the need for performance justifies the code complication.

Meanwhile, juggling references this way in Rust is common and almost sloppy: the hashmap contains references to strings because it was probably created from an iterator that iterated over references. The developer might not even need to notice it, unless a string reference in the result might be kept around longer than the original. The compiler will flag the type mismatch, the developer will create a new string from the reference, and then he/she will move on.

There is a similar story when such code needs to be refactored. The word "sloppy" still works here, but in a good way: offloading "being very, very careful" stuff to compiler is deliciously fun.


I wrote about this concept a bit in http://manishearth.github.io/blog/2015/05/03/where-rust-real..., with an example of a situation where in Rust I was able to make things work but in C++ I'd be totally terrified and use a shared_ptr or something.

I like to say that Rust lets you toe the line perfectly, and dance near it as much as you want. C++ does not, since you're afraid you may accidentally cross it.


Is that a joke? Just doing a cursory reading of your blog post was terrifying to me, and I could hardly understand why things are so complicated. It almost makes it seem like programming is some 'puzzle' (I feel the same way about the Boost library).


What did you find complicated or terrifying there?

The fact that the code was complicated was sort of the point; the generic deriving code in the Rust compiler is in general a very tricky and confusing area and pretty complicated to deal with. In C++ I would have been very careful around code like this and not introduced new pointers. In Rust, I could do this without being afraid of memory issues.

You're not supposed to understand the actual code in the blog post; there's heaps of context and explanations of compiler internals I didn't want to do. I tried to make it clear what task I wanted to do, and how the compiler helped me carry it out; the code is just there to give a better idea of what was going on. If you read closely I do mention that a lot of things should be ignored, there.

The post also walks through the explanations of why the lifetimes work out that way, but that's more for the readers' edification -- I didn't need to actually figure anything out whilst working on it; the explanations are something I thought about later.


I'd argue that in order to properly evaluate your statements regarding C++ (and Rust, for that matter) it actually is important to understand the code and what you're trying to do. Given that there generally is more than one way to solve a problem it isn't unreasonable to think you found a solution that doesn't map (well) to C++ and from there generalize (perhaps incorrectly) that therefore it's impossible in C++ (with similar performance).


Well, in this case there aren't multiple solutions -- my solution was literally the problem statement. I wanted to make a particular array accessible later in the pipeline in a specific API. That was the problem statement (the reason behind this problem was that the plugin API needed to be able to expose this array). This itself was pretty simple. The array arose from a particularly entangled bit of code. Possible ways to persist it are with a regular or smart pointer/slice to the vector, in both Rust and C++.

This isn't an instance of the XY problem.


Few recent posts that I can see are about the compiler internals, design of a tracing GC and a crypto problem. These things might just be complicated anyway :)


C++ users would say that shared_ptr is the idiomatic way to handle this scenario, and it's no more unsafe than Rust is in general use.


Right, but there's a runtime cost associated with it, unlike the borrowed pointer in the rust version. Using shared_ptr is not "toeing the line", it's stepping back from the line in fear you will cross it.


A lot of C libraries and applications are more like a parachute that was folded by someone who saw a two minute video about parachute folding a couple years ago.


Have you done much skydiving? I used to go three days a week, for a couple years, between 4-10 jumps a day at a place that had world class experts. My experience is that only beginning skydivers are constantly preaching safety. They go around (vocally) judging everything they see, and I think they do it because it alleviates their own fear. Instructors would teach safety, but really only to their own students.

I think the safety aspect of Rust appeals to a lot of beginning programmers. They can feel safer looking down their nose at us dangerous C or C++ programmers.

> Rust is a parachute that always opens at exactly the right altitude

This isn't a good metaphor. Frequently it's safer to pull higher, and on some occasions, you're safer opening lower than you had planned... I think a canopy that always opened at the prescribed height would cause many unnecessary deaths. That doesn't say anything about Rust, one way or the other.


> I think the safety aspect of Rust appeals to a lot of beginning programmers.

Maybe, but it also appeals a lot to many of us experienced programmers who know how hard things can bite us. It's not so much that we can't get things right. It's that it's really expensive to revisit old assumptions when circumstances change, and it's phenomenal to be able to document more of these in a machine-checked way.


Please don't get me wrong - I would take safe over non-safe if everything else were equal. It's just that Rust made many other choices that are worse for me than what's in C++. Also, I think it would be very painful trying to explain some of Rust's features to my coworkers (who are generally very smart, but generally not interested in clever programming languages).

> It's that it's really expensive to revisit old assumptions when circumstances change, [...]

That's very dependent on the type of work you do. Over the last 23 years, my job has been to write many small programs to solve new problems. It's not expensive for me because I've aggressively avoided making monolithic baselines. I have medium sized libraries that I drag from project to project, but I can fix or rewrite parts of those as needed without breaking the old projects.


> That's very dependent on the type of work you do.

True, if your code never gets big or old, you can keep all of it in mind and write correct code without too much worry. Though in my experience, it really doesn't need to be very old or very big before tooling starts paying big dividends.

> I have medium sized libraries that I drag from project to project

I'd wonder in particular about those libraries. Certainly you know more about your context. But I expect that there are both contexts where it wouldn't be helpful, and also contexts where it would be substantially helpful but authors don't know what they're missing. I don't have a way of distinguishing the two here.


I think it's a misconception to classify type-safety and memory-safety techniques as 'clever', they should be seen as the bread-and-butter of day-to-day coding. To put it another way, Rust's memory safety is no more clever than C++'s smart pointers, the only difference is what people mistakenly believe about the two.


> I think it's a misconception to classify type-safety and memory-safety techniques as 'clever'

I didn't call Rust's type-safety or memory-safety clever. The clever stuff is lifetime specifications, a multitude of string types, traits as indications to the compiler for move vs. copy, Box/Ref/Cell/RefCell/RefMut/UnsafeCell, arbitrary restrictions on generics, needing to use unsafe code to create basic data structures, and many other things.

If I tried to advocate Rust in my office, many of my coworkers would simply say, "I didn't have to do that in Fortran, and Fortran runs just as fast. Why are you wasting my time?!"


Almost everything you mentioned as 'clever' is trying to achieve either type-safety or memory-safety. To your coworkers I would reply: would you like the compiler to handle error-prone memory deallocations? Or do you want to keep doing it manually and wait till runtime to find potential mistakes?


I don't believe those clever things are necessary for safety or performance. I think many of them are incidental, caused by a lack of taste or just a disregard for the value of simplicity. Rust deserves credit for its good ideas, but these aren't those, and I believe there will be other high-performance (non-GC) languages that are more accessible to non-computer-scientists [1].

> To your coworkers I would reply: would you like the compiler to handle error-prone memory deallocations? Or do you want to keep doing it manually and wait till runtime to find potential mistakes?

They don't really care about memory deallocations - the program will finish soon anyways, and the operating system will cleanup the mess. Sorry, they've already excused you from the office and have gotten back to getting their work done.

Btw, modern C++ programmers don't worry about memory deallocations either. You should find a better bogeyman.

[1] http://benchmarksgame.alioth.debian.org/u64q/compare.php?lan... (yes, most people disregard benchmarks, but you need someway to discuss performance)


> I don't believe those clever things are necessary for safety or performance. I think many of them are incidental and caused by a lack of taste or just a disregard for the value of simplicity.

Well, I just re-read your list of 'clever' features, and can't really see how any of them is incidental, or in fact how some of them are worse than the exact same features in Swift, which you mentioned.

> ... I believe there will be other high performance (non-GC) languages that are more accessible to non Computer Scientists....

Not sure what to make of this comparison, given that Rust beat Swift in the majority of the benchmark tasks. Also, you have to look at the quality of the compilers themselves. Rust is universally acknowledged to be a high-quality compiler, while Swift (especially together with Xcode) is often bemoaned as buggy and crashy.

> They don't really care about memory deallocations ... the operating system will cleanup the mess.

Well, I have to say they are an extremely lucky bunch. Most systems programmers don't have the luxury of writing script-sized programs which use the OS as their garbage collector.

> Btw, modern C++ programmers don't worry about memory deallocations either. You should find a better bogeyman.

I was specifically replying to your Fortran example, but for the sake of argument, to C++ programmers I'd ask, 'Would you like to do high-performance concurrency with statically guaranteed no data races?'


> a multitude of string types

There are two string types in Rust, `String` (growable and heap-allocated) and `&str` (a reference to string data). Anything else is just a shim for FFI.


> There are two string types in Rust, `String` [...] Anything else is just a shim for FFI.

I guess I don't have to worry about the non-Scotsman strings then... You've heard the criticisms about Rust's strings before, and I'm unlikely to tell you anything you don't know.


I think it's especially hasty to criticize Rust's string types in the context of C++, given the standardization of string_view in C++ as an analogue of Rust's &str :P


To me, Rust's &str seems a lot more like const char* (with a size tacked on for bounds checking). But you're the expert, so if I did agree they were the same, then C++ adopting it in the STL is practically proof it's a mistake in Rust.

You never addressed my other "too clever" items in Rust. Does that mean, other than strings, we agree?


> Does that mean, other than strings, we agree?

Not necessarily. :P Features exist, and I'm not about to dictate where others draw the cleverness line.


No, I haven't done any skydiving. After writing my comment, I suspected my analogy might not hold up if I knew more about skydiving. I guess just imagine an abstract form of skydiving where you just need to have fun in the sky and then open your chute at the right altitude, and you'd like to wait as long as you can before opening it.

I still think the point is valid, though, even if the analogy isn't.


Fair enough. I'm sorry about calling you out on the metaphor. Instead I'll call you out on the point itself :-)

> Having safety features that you know you can rely on allows you to take risks that you normally wouldn't in order to accomplish some really awesome things.

Unless you rush to publish a public facing version of your code, I can't see why you'd be afraid to take risks in any language. What's so scary about a buffer overflow on your home workstation running data from a source that's never even seen your program? It will just segfault, which is no worse than a Rust panic. If I could exploit your new code, it means I've already gotten so far into your workstation or server that I could just run my own code. Where does the fear come from?


I think it's more like: since you know the compiler won't let you write a buffer overflow, use-after-free, data race, etc., you no longer have to waste time worrying about whether your code might contain such problems, which frees up more mental bandwidth for other concerns. But unlike other languages, you still have confidence that your code will compile to equally performant machine code (e.g. No GC overhead).

The scary thing isn't causing a segfault on your local machine. The scary thing is writing code that could segfault but doesn't do in testing until after you've deployed it publicly. If your compiler rejects code that can segfault, this is no longer a concern. (Or replace segfault with a buffer overflow that leaks your private keys or something equally bad.)

I guess the analogy would be that you can have more fun cavorting across the sky if you knew with 100% confidence that your parachute automatically would deploy itself at the appropriate time (and not a moment sooner).


> I think it's more like: since you know the compiler won't let you write a buffer overflow, use-after-free, data race, etc., you no longer have to waste time worrying about whether your code might contain such problems, which frees up more mental bandwidth for other concerns.

I've spent a lot of time figuring out how to do completely mundane things in Rust. At this point, buffer overflows and use-after-frees are not my biggest concerns in C++.

> The scary thing isn't causing a segfault on your local machine. The scary thing is writing code that could segfault but doesn't do in testing until after you've deployed it publicly. If your compiler rejects code that can segfault, this is no longer a concern.

If your testing didn't catch the problem (which I can fully understand), a panic at runtime is not much different than a segfault.

> (Or replace segfault with a buffer overflow that leaks your private keys or something equally bad.)

I firmly believe the OpenSSL team would've used unsafe blocks in Rust to disable the performance overhead of bounds checking. That whole exploit was caused by sloppy optimization, and Rust is not immune to that.


> I've spent a lot of time figuring out how to do completely mundane things in Rust. At this point, buffer overflows and use-after-frees are not my biggest concerns in C++.

I could visit a country with a completely different set of laws regarding driving and a different road-marking system. After a few days of driving, I might also feel like I've spent a lot of time trying to figure out how to navigate the rules of the road rather than actually getting to my destination, compared to driving in my native land. I would also be unable to accurately ascertain whether one system was better than the other, because of inadequate experience with the new system. It would be a mistake to assume that I could become proficient enough in such a complex system, in such a short period of time, to judge between them.

To put it another way, I don't feel like avoiding bicyclists is my biggest problem when driving, but having a dedicated bike lane at all times would probably be a good idea anyways. Sure, maybe you've never hit a cyclist, and never will. That doesn't mean it doesn't happen enough that we shouldn't do something about it, because it does.

> If your testing didn't catch the problem (which I can fully understand), a panic at runtime is not much different than a segfault.

No, a segfault at runtime is something that is possibly exploitable. A panic is not.

> I firmly believe the OpenSSL team would've used unsafe blocks in Rust to disable the performance overhead of bounds checking.

Even if they did, that would still reduce the portion of the code that needs to be audited to those blocks. Effort could be made to reduce the size and scope of those blocks. There is something to be said for having the ability to categorize and enforce different safety levels in your codebase, when the alternative is no categorization or enforcement.


> I could visit a country with a completely different set of laws regarding driving [...]

Arguments by metaphor aren't my thing. It's very likely I would become more proficient at Rust if I programmed in it more. It's also very likely the poster above would worry less about memory errors if s/he programmed in C or C++ more. Yes, Rust is safer in some ways, but I still can't understand where all the fear of other languages comes from.

> having a dedicated bike lane at all times would probably be a good idea anyways.

I used to live in a city with a lot of dedicated bike lanes. I commuted to work on a particularly long stretch that was very popular for cycling. The majority of the cyclists refused to ride in the lane. It turns out that cars naturally blow the dust and small pebbles out of the main road way, but bikes don't do that in the bike lane. Cars also smooth out the pavement in their tire tracks. The result was a road that's 5 foot narrower for cars (speed limit 45 mph) with bicyclists in it (not moving 45 mph), a generally unused bike lane, lots of uncomfortable passing, and a lot of indignation from cyclists who claimed an equal right of way despite having a separate lane designated for them.

> Sure, maybe you've never hit a cyclist, and never will. That doesn't mean it doesn't happen enough that we shouldn't do something about it, because it does.

The city I live in now has many bike paths, completely separate from major roads. It's also a different climate, so there are fewer pebbles, and they have street sweepers clean the road after snow season to remove the sand. There really doesn't seem to be much interaction between the cyclists and the cars. So should I choose a programming language with bike lanes on major roads or separate paths through the parkways? :-)

> No, a segfault at runtime is something that is possibly exploitable. A Panic is not.

Anything is possible, but it's very unlikely. I will write a program and intentionally put a buffer overflow in it. Can you send me some data that will exploit it?

Here's a metaphor that also isn't one: I'm not afraid of terrorists despite some high profile events in the last 20 years. I certainly wouldn't optimize my life around avoiding terrorist attacks because the empirical evidence shows me the probability is very low.


> It's very likely that I would become more proficient at Rust if I programmed in it more. It's also very likely that the poster above would worry less about memory errors if s/he programmed in C or C++ more.

> The city I live in now has many bike paths, completely separate from major roads.

Which wasn't the point of that at all. It was to point out that your assessment of how much time is wasted working around problems in each case is irrelevant given your vastly different experience levels. There are plenty of people here with quite a bit of C and C++ experience who have weighed in about this, not just the person above, whom you assess as not having much experience in C or C++.

A bike path is a dedicated bike lane, just not necessarily parallel to the road. You're taking the metaphor too literally for it to be useful. A metaphor is as useful as you allow it to be. They can be extremely useful in pointing out somewhat parallel situations where people may find their beliefs differ. When that is so, it allows the people involved to examine what is different about the situations that leads to a different opinion, if anything. Sometimes we fall prey to our cognitive biases, and a metaphor can be a shortcut out of that bias, if it exists and you allow it to be that shortcut. Driving it into irrelevancy by focusing on minutiae is a useful rhetorical trick, but it doesn't actually advance the conversation, and at the extreme end, if done purposefully, is not acting in good faith.

> Anything is possible, but it's very unlikely. I will write a program and intentionally put a buffer overflow in it. Can you send me some data that will exploit it?

Depending on the segfault? I could. It would take me a lot of work, because it's been nearly 15 years since I paid much attention to that, but I have done it before.

> Here's a metaphor that also isn't one: I'm not afraid of terrorists despite some high profile events in the last 20 years. I certainly wouldn't optimize my life around avoiding terrorist attacks because the empirical evidence shows me the probability is very low.

No, you don't optimize your life around them, but you might also support checking of identities on international flights to prevent access to your nation from known terrorists.

Here's the thing. It's not about you. At any point in time, some percentage of C and C++ programmers are neophytes who may not be as proficient as you at avoiding the pitfalls possible in those languages. Divide the average amount of time it takes someone to become proficient in C or C++ by the average career length of a programmer of those languages, and you'll have a rough estimate of the percentage of programmers who are inadequate for the job they've been assigned, and whose problems we might conceivably have to deal with. I think reducing this has such a large impact that it is of vast benefit to society at large (given the botnets we are currently seeing), and would total billions of dollars.


"not acting in good faith"

An accurate diagnosis, I think. You'll never get anywhere with people like that ... or where you get is not anywhere you want to be. In this case, you have someone arguing against Rust because a) his coworkers don't bother to free memory because their programs will finish soon and b) because he doesn't care whether toy programs that he writes for his home computer are subject to buffer overflow exploits.

And on top of that was missing the point of your analogies that, if not willful, was certainly convenient. To use another one: some people are like quicksand.


> > The city I live in now has many bike paths, completely separate from major roads.

> Which wasn't the point of that at all. It was to point out that you[r] assessment of how much time is wasted working around problems in each case is irrelevant given your vastly different experience levels.

That was almost my exact point, and it's odd you're repeating it back to me. I guess I could've laid it out more plainly.

> [Metaphors] Driving it into irrelevancy through focusing on minutiae is a useful rhetorical trick. [...] at the extreme end if done purposefully is not acting in good faith

Using a metaphor is a rhetorical trick. If you want to explain something to a non-technical audience, maybe analogies "get the hay down to the horses" so they can have at least a limited understanding. However, we both seem to understand programming languages so talking about roads obfuscates the discussion, leaving me to wonder whether there really is a parallel between the two topics. I know more about programming languages than I do about bike paths.

> Depending on the segfault? I could. It would take me a lot of work, because it's been nearly 15 years since I paid much attention to that, but I have done it before.

Even if I offer to run malicious data, it sounds to me like a low probability event - probably lower than my being in an airplane crash or shot by a cop. It's not something I should fear today. Over the last 25 years, I've had lots of segfaults, but I think I've done the most damage by accidentally overwriting files. I'm a little afraid of that.

> No, you don't optimize your life around them, but you might also support checking of identities on international flights to prevent access to your nation from known terrorists.

No, I definitely would not. It's very easy to get into this country, and an organized (dangerous) group would have no more difficulty than the drug dealers do smuggling cocaine. There is no benefit to harassing millions of citizens if you can't actually stop the problem.

> Here's the thing. It's not about you.

Are you suggesting the only people allowed to share their experiences in a thread like this are new programmers and the people pushing their language? I was new once, and I survived lots and lots of segfaults. Don't you think neophytes should hear that? They're definitely getting a large dose of doom and gloom about the bad old days.

> Some percentage of C and C++ programmers are neophytes. [...] I think that reducing this has such a large impact, that this is of vast benefit to society at large (given the botnets we are currently seeing), and would total billions of dollars.

In one of your other comments, you indicated you haven't tried Rust yet. You should - you sound interested. It definitely has its nice parts. However, I don't think you will find the safety features to be a big productivity gain, and you will have to use unsafe code to accomplish tasks from a freshman level computer science book. Think about that - you can't cleanly use the safe subset of Rust to teach computer science to beginners... (you could do it with a lot of compromises)


> Using a metaphor is a rhetorical trick.

Rhetorical tricks can be used to deepen the conversation, or to dismiss points out of hand. The first is useful to the discussion, the second is useful for winning, but at the detriment to the discussion.

> However, we both seem to understand programming languages so talking about roads obfuscates the discussion

I provided an example where it may provide value even if two people are experts in the area being discussed. Metaphors can help explain someone's underlying reasoning and motivation in a way that is hard to express technically. People talk past each other enough in discussions by slightly misinterpreting what is trying to be expressed that I find metaphors a valuable tool. I find many disagreements in text are rooted in people assuming a comment is countering a point of theirs or of someone they agree with, and interpreting it in that light, when often both sides are saying very close to the same thing. Thus I believe expressing a point in multiple ways, even if it's through metaphor, has merit.

> Even if I offer to run malicious data, it sounds to me like a low probability event - probably lower than my being in an airplane crash or shot by a cop.

First, airplane crashes are extremely rare. Second, being shot by a cop is rare too, depending on your vocation and behavior. Third, remote code execution exploits are not rare, given the relatively small amount of public facing software compared to all airplane flights and police interactions.[1] Were you to author or contribute to any non-trivially sized C or C++ project that was publicly available, I would put better than even money on there being an exploit findable in it. There's a vast difference between how much software is written and how much is public facing, but that doesn't mean things that were originally private don't sometimes make their way public years later, for example internal libraries that a company open sources or even just includes in another project that ends up being public facing.

> No, I definitely would not. It's very easy to get into this country, and an organized (dangerous) group would have no more difficulty than the drug dealers do smuggling cocaine. There is no benefit to harassing millions of citizens if you can't actually stop the problem.

So, again, the fact that you can render a metaphor in more detail to make it irrelevant in context doesn't mean that's appropriate. So, in more generic terms: "do you support keeping known detrimental people out of a defined area to facilitate the usefulness of that area"? If you can do so, and it's not too cumbersome on those that are not detrimental, then depending on the problems caused by the people in question, at some point it becomes worth it. There are parallels that can be drawn here, if you're willing to entertain the thought. It appears you aren't.

> Are you suggesting the only people allowed to share their experiences in a thread like this are new programmers and the people pushing their language?

No, I'm expressing that a single person's ability to avoid negative behavior has little bearing in an argument regarding community norms and herd behavior, which is what I'm getting at. Whether you are a perfect programmer and never make a single mistake in any language you use doesn't matter when discussing the merits of enforced safety in general, as in this discussion regarding C and C++. What does matter is whether other programmers in general do, and what percentage of them, which you've also made a point of expressing. I think that is worth discussing, because I think we either disagree on the proportion of those programmers that can code with adequate safety, or on some other facet of them that results in their yielding far more problematic code every year than you think they are producing.

> In one of your other comments, you indicated you haven't tried Rust yet.

I've tried it. I haven't done more than dabble though, while playing with futures-rs. I understand the borrow checker is cumbersome at my level of understanding, and I fought with it. I don't think I have sufficient experience to make a personal assessment of the language based on my level of experience with it, and especially not of how it feels to write in comparison to C or C++, because I strive to avoid using those languages.

> However, I don't think you will find the safety features to be a big productivity gain, and you will have to use unsafe code to accomplish tasks from a freshman level computer science book.

I believe having the ability to define safe and unsafe portions of code is in itself laudable and useful. Allowing me to categorize possibly problematic portions of code is a benefit. In any case, I could essentially write the entire program in an unsafe block and have a C/C++ alike with a different syntax. I'm not sure how "unsafe" can be presented as a downside, when it's strictly a way to enforce separation of a feature that C and C++ don't have.

> Think about that - you can't cleanly use the safe subset of Rust to teach computer science to beginners... (you could do it with a lot of compromises)

What, you can't use that explicit separation of what is known safe and known unsafe to point out computational problems and ways they can be solved? I find that hard to believe. Unless you think unsafe is Rust but "lesser, not really". It isn't. It's part of the language. It exists as a concession that sometimes things are needed that can't be proven safe by the compiler, but you may be able to prove to yourself it is.

1: https://www.exploit-db.com/remote/
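On the safe/unsafe split above, a minimal sketch (mine, not from the thread) of how that separation works in practice: a safe function can wrap a small unsafe block, so the possibly problematic portion stays categorized and auditable while callers see only a safe API.

```rust
// A toy sketch: a safe wrapper around an unsafe block. Callers of
// `first_element` get a safe API; the raw-pointer dereference is
// confined to (and auditable at) the `unsafe` block.
fn first_element(v: &[i32]) -> Option<i32> {
    if v.is_empty() {
        return None;
    }
    // Sound because we just checked the slice is non-empty.
    let x = unsafe { *v.as_ptr() };
    Some(x)
}

fn main() {
    assert_eq!(first_element(&[10, 20, 30]), Some(10));
    assert_eq!(first_element(&[]), None);
    println!("ok");
}
```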


You seem like a forthright person, but with or without metaphors, we're still talking past each other.

My point about remote exploits, airplane crashes, and cops is not about me. Yes, public facing software needs to be careful, but (fun metaphor) that's like saying prostitutes should use condoms. Web servers, browsers, firewalls, and the like are built specifically to communicate with untrusted entities. That's some of the most promiscuous software out there, and yes it gets exploited. But most people don't need to use condoms with their wives, and nobody is going to exploit software a newbie wrote and runs on his home computer. Safety should not be the fundamental criterion for a newbie programmer to choose a language and learn how to write fibonacci or hello world. When they're ready to write nginx, then they should be careful.

My point about the questionable productivity gain and safety was a reply to your estimate of the billions of dollars lost. If you're not more productive, and you aren't really safe, then you aren't going to save those billions.

> What, you can't use that explicit separation of what is known safe and known unsafe to point out computational problems and ways they can be solved? I find that hard to believe.

I didn't say anything like that. We're talking past each other.

> Unless you think unsafe is Rust but "lesser, not really". It isn't. It's part of the language.

(Metaphor time again) I've got a really safe bicycle. When the safety is on, children can't get hurt while riding it. If you care about the safety of the world's children, they should use my new safer bicycle. Oh, but you can't pedal it on paths I don't provide unless you disable the safety. Is my bike really that safe?

> 1: https://www.exploit-db.com/remote/

I have no idea how many people compiled and ran a program today. It's probably millions. Bayes's theorem might be a useful way to normalize that long list you linked. I don't see a single program from a home programmer on that list.


"I didn't say anything like that."

No one said that you said anything like that. Of course you didn't. But what you said necessarily implied that.

"We're talking past each other."

No, you willfully ignored and misrepresented all his points.

"Oh, but you can't pedal it on paths I don't provide unless you disable the safety."

That's a grossly dishonest misrepresentation of the situation with Rust.


> Unless you think unsafe is Rust but "lesser, not really". ... It's part of the language ... sometimes things are needed that can't be proven safe by the compiler, but you may be able to prove to yourself it is.

This. Unsafe is to the borrow checker as 'Any' is to the typechecker.


Perhaps a better analogy would be the way that much of modern medicine is enabled by access to antibiotics. Without antibiotics, the risk of post-operation death by infection would be so high as to rule out many of the procedures that we now consider safe and routine.


I would prefer a surgeon who washed his hands over one who didn't but gave me antibiotics. I've had stitches a few times, but only one real surgery. I never got antibiotics for any of those. Maybe we could skip the analogies? I don't think they help the discussion.


They help in a discussion with someone intellectually honest.


> I think the safety aspect of Rust appeals to a lot of beginning programmers.

Is that a bad thing? All programmers start as beginners, and if C is too painful to begin with then they'll learn via an easier language, and then comfortably spend their whole careers using those easier languages. If we want to expand the field of systems programmers organically, then we need to make tools that don't punish beginning programmers.

> They can feel safer looking down their nose at us dangerous C or C++ programmers.

What makes you feel like anyone's looking down their noses at you? Every language in history has been made to address the perceived flaws of some prior language. Safety is a crucial selling point for a huge chunk of people, and C and C++ have failed to appeal to this market. Just because safety isn't a priority for you doesn't mean that the people for whom it is a priority are suddenly pretentious.


> > I think the safety aspect of Rust appeals to a lot of beginning programmers.

> Is that a bad thing?

The appeal to beginners is fine, maybe even a good thing, but the condescending comments from beginners is a lot like listening to a teenager who thinks they know everything.

> What makes you feel like anyone's looking down their noses at you?

There's no shortage of obnoxious comments from beginning Rust users here and on Reddit. If you can't see them, it might be because you're aligned with that point of view.

A recent one implied the whole world is going to end because of Heartbleed-like exploits. Don't they realize that despite the occasional high profile exploits, the world is generally running just fine? Don't they realize that the OpenSSL developers would've probably used pools of dirty memory to avoid allocation costs and unsafe blocks to avoid bounds checking had they developed that code in Rust? They got bit by sloppy optimization, and Rust isn't immune to that. I really wish people weren't so afraid of everything that achieving safety is their primary goal.

> Just because safety isn't a priority for you doesn't mean that the people for whom it is a priority are suddenly pretentious.

It's not pretentious if you make your own decision for your own project. It's not even pretentious to spread the good word and say how much you like Rust. It is very pretentious and condescending when you say something like in Graydon's article: """When someone says they "don't have safety problems" in C++, I am astonished: a statement that must be made in ignorance, if not outright negligence."""

Are you going to stand by that sentence? You probably should, because the newbies will love you for it, and it might help increase adoption of your language. It really shouldn't matter if you alienate a few of us old-timers who really don't have safety problems in C++.

To be clear, I like Rust. I've been following it for years, and I'm disappointed that it's not an adequate replacement for C++ (which I really don't like).


"the condescending comments from beginners"

You like to make stuff up.

"If you can't see them, it might be because you're aligned with that point of view."

Or it might not. It might be that you're just being abusive and dishonest.


> There's no shortage of obnoxious comments from beginning Rust users here and on Reddit. If you can't see them, it might be because you're aligned with that point of view.

Can you give me an example of a comment in this thread that you find to be from a pretentious beginner? Alternatively, if you're calling the author of this article a beginner, I can assure you that he isn't.


The guy's a troll.


> To be clear, I like Rust. I've been following it for years, and I'm disappointed that it's not an adequate replacement for C++

Just out of curiosity, what is it about Rust that means it's an inadequate replacement for C++?


There are many things you could dismiss as style issues, but here is one relating to performance. Rust does not (yet) have integer generics. If I use Eigen (the C++ library), I can declare a matrix of size 6x9 and have the allocation live (cheaply) on the stack. I do this kind of thing frequently (not always 6x9), and in Rust I would pay for heap-allocated matrices. The cost in performance can be huge. Maybe this will get fixed in the near future.
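For what it's worth, the usual workaround today looks something like this (a hypothetical sketch, and exactly the boilerplate integer generics would remove): one array-backed type per fixed size, which stays on the stack.

```rust
// Hypothetical sketch of the workaround: a fixed 3x3 matrix backed by a
// plain array, so it lives on the stack with no heap allocation. Without
// integer generics, each size (3x3, 6x9, ...) needs its own type, or a
// macro to generate them.
#[derive(Clone, Copy)]
struct Mat3 {
    data: [[f64; 3]; 3],
}

impl Mat3 {
    fn zeros() -> Mat3 {
        Mat3 { data: [[0.0; 3]; 3] }
    }

    fn identity() -> Mat3 {
        let mut m = Mat3::zeros();
        for i in 0..3 {
            m.data[i][i] = 1.0;
        }
        m
    }
}

fn main() {
    let m = Mat3::identity();
    let trace: f64 = (0..3).map(|i| m.data[i][i]).sum();
    assert_eq!(trace, 3.0);
    println!("trace = {}", trace);
}
```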


Humans are prone to error (fine), therefore you are prone to error (condescension, not fine). Post-aristotelian logic?

I'm not completely serious; it's more complex than this.


"Is that a bad thing?"

Regardless, it's a complete misrepresentation of Rust, which is all that zero has to offer.


I'll probably be writing a slightly longer response post to this later, but for now... EDIT: here it is: http://words.steveklabnik.com/fire-mario-not-fire-flowers

I think the core of it is this:

> Safety in the systems space is Rust's raison d'être. Especially safe concurrency (or as Aaron put it, fearless concurrency). I do not know how else to put it.

But you just did! That is, I think "fearless concurrency" is a better pitch for Rust than "memory safety." The former is "Hey, you know that thing that's really hard for you? Rust makes it easy." The latter is, as Dave[1] says, "eat your vegetables."

I'm not advocating that Rust lose its focus on safety from an implementation perspective. What I am saying is that the abstract notion of "safety" isn't compelling to a lot of people. So, if we want to make the industry more safe by bringing Rust to them, we have to find a way to make Rust compelling to those people.

1: https://thefeedbackloop.xyz/safety-is-rusts-fireflower/


I would argue that if the Rust project would have just one mission statement, it wouldn't be "create a safe systems programming language". It would be "move towards a world where safe systems programming is the norm".

What's the difference? Both statements share the premise that Rust is – and ought to be – a safe systems programming language. However, the latter captures not only the REAL goal, but also the nuances and tensions: while safety is indispensable, we must do something else too, for the programming community to accept the safe tools we are trying to promote. That means ergonomics, that means performance, that means ease of use, that means wide availability – and it might also mean advocacy of visions of a better world, which is what this blog post of Graydon's does.


I really, really like this. Thank you. Well put.


It's important to get the nuance of this statement right. Consider, for instance, the PSF's mission statement:

> The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers.

"Facilitate the growth" does not imply that dominance of a certain practice (whether that is use of Python, or adoption of its core principles) is a goal. So you have an incentive for the PSF to say "our international community is growing at a sufficient rate" and focus instead on "advancing the language," which may or may not be aligned with adoption. In such a framework, it becomes easy to justify backwards-incompatible major releases that, regardless of whether their opinion is justified, many users consider to be user-hostile. Framing Rust towards a larger mission that implies user adoption of core principles as a fundamental goal seems much cleaner in that regard, and could conceivably avoid similar pitfalls.


As someone watching from the sidelines... maybe it's the cynic in me, but I think you're giving too much credit to the "but I don't have safety problems" folk. Judging by how defensive they get, it's just a post-hoc rationalization to why they won't invest time learning it. That's fine but, IMHO, not a marketing problem and just good old resistance to change. There's an endless stream of excuses to choose from, no matter how you pitch Rust to them.

I'm all up for a marketing shift though. I agree Rust is much more than safety and I think graydon2 missed your point there. The fireflower simile is excellent. Plus I've seen lots of people confused about Rust, including here on HN. Those are the people to whom marketing failed and should be targeted better.


Or we were severely disappointed when we realized Rust doesn't really give you that safety.


No, that's not it.


> I think "fearless concurrency" is a better pitch...

I would go a step further, "fearless programming". Though I would hesitate on 'easy'.

Rust gives you fearlessness in all the things, but it does mean learning a new style and discovering new solutions to old problems. To fully understand 'Send' vs. 'Sync', for example, means really grokking the Rust type system. Once you get the type system, fully utilizing it with the expressive generics becomes unlocked, and at that point you've transcended from Rust dabbler to fully fearless Rust user.

Once this world of development is unlocked to you it is mind-blowing, but it is a journey to get there, and not everyone will have the heart to make it. It comes in stages, is wonderfully rewarding, and will make you a better programmer in your other favorite languages, but I think we should be careful with statements like 'Rust makes it easy'.

It does make hard things easy, but only after you've fully embraced Rust. This feels more accurate: "Hey, you know that thing that's really hard for you? Rust makes it fun."
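For readers who haven't met 'Send' and 'Sync' yet, a minimal sketch of what those bounds buy you: Arc<Mutex<T>> satisfies them, so the compiler lets clones cross threads; swap the Arc for an Rc (which is not Send) and the same code is rejected at compile time rather than racing at runtime.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Sketch: Arc<Mutex<i32>> is Send + Sync, so the compiler accepts moving
// clones of it into other threads. Replacing Arc with Rc (which is not
// Send) turns this into a compile error rather than a data race.
fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();
    for _ in 0..4 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            *counter.lock().unwrap() += 1;
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
    println!("count = {}", *counter.lock().unwrap());
}
```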


Making software free of data races and memory safe (assuming you don't use any unsafe code...) is still a long way from being free of serious defects.

Rust is cool enough that it doesn't need to be promoted with excess hype.


I think the problem is that Graydon is better at technology than marketing. Getting language adoption is largely a marketing problem combining what those in control of the language push and how those in the community pull. Rust's success comes from doing that dynamic correctly.

If it was just safe and not C/C++, it would be another Ada, Modula-2, D, etc. It's important to market all the key benefits of it in a way that lets potential users know it will help them solve problems faster and with less trouble down the road.


I was surprised to see Ada in the list of unsafe languages, since it always was sold to me as being designed for safety. A bit of searching leads me to believe that Ada is better about memory even though it mostly uses types for safety, and better enforcement of bounds on array access should solve overflow issues regardless. Am I missing something?


Ada still requires manual heap management, although it can be mostly automated.

So you might occasionally see the unsafe package being used to deallocate memory, even though there are better ways to do it, e.g. controlled types.

The other point is that Rust prevents data races via the type system, while you can deadlock Ada tasks if the monitors aren't properly implemented.


> while you can deadlock Ada tasks if the monitors aren't properly implemented.

It's not clear to me if you're suggesting otherwise, but you can definitely deadlock Rust as well (although it's true that Rust statically prevents data races).
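A toy sketch of the distinction: the classic two-lock ordering hazard compiles fine in Rust. Here both threads take the locks in the same order, so it runs; if one of them locked `b` before `a` instead, the program could deadlock, and the compiler would accept it just the same.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Toy sketch: Rust statically prevents data races, but not deadlocks.
// Both threads below acquire `a` then `b`, so this runs fine; locking in
// opposite orders in the two threads could deadlock, and that version
// would compile without complaint too.
fn main() {
    let a = Arc::new(Mutex::new(1));
    let b = Arc::new(Mutex::new(2));

    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let t = thread::spawn(move || {
        let x = a2.lock().unwrap(); // lock order: a, then b
        let y = b2.lock().unwrap();
        *x + *y
    });

    let sum_main = {
        let x = a.lock().unwrap(); // same order: a, then b
        let y = b.lock().unwrap();
        *x + *y
    };

    assert_eq!(t.join().unwrap(), 3);
    assert_eq!(sum_main, 3);
    println!("ok");
}
```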


Ah, that was my understanding.


The other thing he semi got wrong is on safe concurrency. Ada has Ravenscar and Eiffel has SCOOP. Ravenscar didn't need a GC since it was for real-time while Eiffel has one. Before them, there was Concurrent Pascal. The author would be right if he said Rust had much better approach to safe concurrency in terms of expressiveness and performance.

Ada side is producing a new model for parallel and concurrent programming called ParaSail:

http://www.embedded.com/design/programming-languages-and-too...


Any idea on what the status of ParaSail is? Seems to have been pretty quiet lately :(


Have no idea. Contacting Taft et al about that and some other things... especially adding Rust's dynamic or concurrency safety to Ada... is on my backlog for now.


I think it's important to continue to do research in this area. I use Rust now because it is a great language with a strong community, tooling, and momentum, in spite of its flaws and blind spots, but I see it more as a stepping stone to even better, safer, more expressive systems languages. Rust has challenged our preconceptions about what is possible - I think we may be able to push it even further. My preference is to try to move more towards Idris and the lambda calculus, but it would be interesting to see what an Ada-spin on it would look like.


Well there is this famous Ada failure:

http://www.math.umn.edu/~arnold/disasters/ariane.html

Although that was not a failure of language safety, but an overflow issue. The software was designed successfully for another rocket, and was reused for a rocket that didn't match the original specification.


It's a specifications issues, not a programming issues.


I disagree. The programmer is the expert in types, so it is the programmer's duty to ensure that the possible values stored in a variable of a given type are compatible with the type selected. Particularly when it comes to these sorts of critical applications.

Programmers blindly following specs put together by people who have no claim to expertise in these matters, without questioning the assumptions behind the spec, is the cause of all manner of disasters. And "but that's what the spec said to do" is all too common an excuse when the problem is with subtle runtime behavior issues that fall squarely in the programmer's domain of responsibility.


The software component in question was implemented according to its specification, and never failed in the environment for which it was developed, the Ariane 4.

The decision to re-use the component as-is in the Ariane 5 without sufficiently investigating the consequences of the higher horizontal velocities that it is subject to compared to the Ariane 4 cannot so obviously be blamed on the programmer that implemented it years before in a different context.


Thanks for the extra context, and alternate interpretation. You seem like you might know this story better than the writer of the referenced article, but you and the author seem to be making contradictory causal claims. I hold to my conclusion if the author's story is taken as authoritative. If yours is more authoritative, then it sounds like your conclusion is better.

I kind of took the story as an allegory when writing my comment. The article is quite vague about the details of the situation. And for all I know, it IS programmers that write specs for this European Space Agency unmanned rocket project. But the way the story is told aligns with a more universal experience of programmers blaming specs for the failings of programs, even when they should have recognized that the program was misspecified before implementing it. I ran with that interpretation because it was illustrative of something important, but it is not particularly surprising to me that the details are being questioned. The article was never a rock solid account.


I believe you have three choices with Ada:

1) No manual memory management, everything is static and you have memory safety

2) Garbage collection, you have memory safety

3) Manual memory management, you lose memory safety

Rust provides memory safety in the case of manual memory management.


GC was dropped from the Ada2005 standard, because no Ada83 compiler ever implemented it.

Ada provides more ways to automate memory management, though.

Controlled types are Ada's version of RAII, used for arenas and smart pointers.

Also in Ada you can dynamically allocate everywhere, so a common pattern is to use a subroutine parameter to do stack allocation. If it fails, by throwing an exception, recall the subroutine with a smaller size.


Rust is about letting the compiler slap you for your mistakes in the privacy of your own Xterm, instead of letting Jenkins do it 10 minutes later, in front of all your co-workers.


Those slaps represent a bunch of tests that were being reinvented for each program that have now been factored out and up into the compiler.


Maybe it will change in the future. Currently the slaps seem so hard that developers are still smarting and not producing code for production.


We've been in production with Rust almost six months already. Couldn't be happier with the language. Works like a charm with our consumers.


Can I ask which company you're with? Are you already listed on https://www.rust-lang.org/friends.html ?


Appears to be 360dialog.


Rust/C++/Clojure/Scala in services and Python as a general glue language for ops.


I should add our company there soon...


Rust is being used in a number of places in production for a wide variety of things: https://www.rust-lang.org/en-US/friends.html


I know and I would also try Rust if I can make some small but useful things at work. But I mostly deal with various combinations of XML, SOAP, HTTP, LDAP etc. Rust does not have anything over Java, which I use currently, in my usecases.


It's perfectly reasonable to say "Rust isn't appropriate for my use case". Your comment higher up was more along the lines of "Rust isn't appropriate for anyone" which is far less reasonable.


If you're going to put words in his mouth, you should make them much stronger words. It's not a valid argument either way, but it'll seem more dramatic. (He didn't say either of your quotes...)


I downvoted you initially, but changed to an upvote to hopefully ungrey your comment.

The use of quotation marks on the Internet (especially on Internet discussion forums) has become non-standard, and I can see how it could be confusing. I think on HN that we tend to use italics or email-style

> block-quoting

to indicate direct quotations of posts or user comments.

Quotation marks on forums like HN tend to be used either to mark dialogue (things spoken out loud) or to mark paraphrased or "hypothetical" thoughts. This is different from the use of quotation marks in formal English writing, as described by Wikipedia [1]. Here, the quotation marks are used to separate the "paraphrased thought" from the rest of the sentence.

I'm actually finding it hard to describe exactly how quotation marks are used this way on the Internet; it's something I've just developed a "feel" for.

There's more discussion of this phenomenon here. http://metatalk.metafilter.com/23184/Should-we-keep-quotatio...

[1]: https://en.wikipedia.org/wiki/Quotation_marks_in_English


Sorry, next time I'll say something like "If you're going to misrepresent his intention" so as not to confuse a quoted sentence. And I won't use the word "say", because clearly nobody says anything in a text forum. /s

I find it very obnoxious when people exaggerate what someone else said so as to make it easier to contradict. I gather you don't have any problem with that? Yet you do have a problem with people calling it out as bad behavior? Are you sure you know why you're policing anything?


He didn't misrepresent anything. You're the one doing all the misrepresenting, exaggerating, and being obnoxious.


I have to deal with a pre-REST, pre-SOAP XML API, which I would love, love, to be able to handle in Rust. But until Serde-XML is able to deserialize more of that stuff and preferably handle XSDs, I'm stuck too.


Maybe you could call out to a C XML library in the interim?


It takes time to get used to writing Rust, dealing with the borrow checker, and fulfilling the proper trait bounds (Sync, Send, Sized, etc). I don't run into nearly as many compiler complaints now that I've written my fair share of Rust code. I feel like I'm quite productive in Rust, actually.


What exactly is being figuratively slapped? And what exactly do you propose the compiler should do when the code is obviously wrong?


It's the combination of safe and performant that attracts me.

If you look at Rust from C then the point is safety, but if you look at it from the other direction, e.g from F# then what attracts you is that you will get the same safety guarantees (and perhaps a few more) but without the GC and heap overhead.


> e.g from F# then what attracts you is that you will get the same safety guarantees (and perhaps a few more)

As a big rust fan, I wouldn't go that far. You can offload a lot more work to the type system in a language like F# or Haskell. Rust is very safe from e.g. a memory perspective (excepting unsafe operations), but there are additional levels of assurance you can get by aggressively forcing the type system to catch logic errors for you that you can't really do in Rust.

As for performance, I agree, although a more accurate description would be that it's much easier to get C-level performance with Rust code while you have to put in some more effort to get it in any high-level functional language.
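To make the "offload work to the type system" point concrete, here is the baseline version Rust does support (a sketch using the newtype pattern, my example rather than the commenter's); languages like F# (units of measure) or Haskell can push the same idea considerably further.

```rust
// Sketch of the baseline "make the type system catch logic errors"
// technique available in Rust: newtype wrappers. Mixing up meters and
// feet becomes a compile error instead of a runtime bug.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Meters(f64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Feet(f64);

impl From<Feet> for Meters {
    fn from(f: Feet) -> Meters {
        Meters(f.0 * 0.3048) // explicit, checked unit conversion
    }
}

fn add_meters(a: Meters, b: Meters) -> Meters {
    Meters(a.0 + b.0)
}

fn main() {
    let total = add_meters(Meters(1.0), Meters::from(Feet(10.0)));
    // add_meters(Meters(1.0), Feet(10.0)) would not compile.
    assert!((total.0 - 4.048).abs() < 1e-9);
    println!("{:?}", total);
}
```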


Bingo. I've written high perf F# for a DB indexing and searching. The entire time I was wishing for allocation-free inlined closures and stack allocation. And for a few places, I'd really like an easy way to do asm or get really top notch codegen (integer decoding). Rust seems a lot like a fast ML. Not quite as concise as I'd like but worth it for the perf without being ugly mentally.

And the perf will surpass C in some situations due to abstraction. One popular open source platform spends 30% CPU time on malloc and strcpy because tracking ownership was so difficult and it wasn't obvious it'd be a hotspot. In Rust that would be a non issue from beginning.


You may be interested in MLton, which is an ML compiler that achieves high levels of optimisation by aggressively specialising your code: generic functions get specialised to the type and higher order functions get inlined to eliminate closure allocation.


In case you missed that there's a big disillusioned C++ crowd out there.

Just hear the pain: https://news.ycombinator.com/item?id=13276351

And some of them are watching you with great interest.


And there's a tired security crowd watching Rust with great hope; C++ and C have created innumerable security holes at the expense of "convenience". Cryptographic libraries, codec libraries, image conversion libraries, OS kernels, sandboxes, virtual machines, browsers, (the list is endless) have all suffered glaring security holes from the lack of memory hygiene afforded by C and C++.

Any time your code takes in untrusted input, it should not be written in an unsafe language.


Exactly. Which is why I've been so critical, in Rust discussions, of the excessive use of "unsafe". The reply is usually something equivalent to "it's not unsafe the way I do it". Sometimes the claimed performance gain isn't there. I had a link yesterday to a forum post where someone was complaining that using an unsafe vector access function didn't speed up their program. Optimizer 1, programmer 0.
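The pattern being described is easy to reproduce (a sketch, not the forum post in question): `get_unchecked` returns the same values as safe indexing, and in a simple loop the optimizer can typically eliminate the bounds checks anyway, so the unsafe version often buys nothing.

```rust
// Sketch of the pattern: summing a vector with safe indexing vs.
// `get_unchecked`. Both produce the same result, and in a loop like
// this the optimizer can typically eliminate the bounds checks, so the
// unsafe variant often yields no measurable speedup.
fn sum_safe(v: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..v.len() {
        total += v[i]; // bounds-checked, usually optimized away here
    }
    total
}

fn sum_unchecked(v: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..v.len() {
        // Sound in practice because i < v.len(), but the compiler no
        // longer verifies it for us.
        total += unsafe { *v.get_unchecked(i) };
    }
    total
}

fn main() {
    let v: Vec<u64> = (0..1000).collect();
    assert_eq!(sum_safe(&v), sum_unchecked(&v));
    assert_eq!(sum_safe(&v), 499500);
    println!("ok");
}
```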

(Early in my career, I spent four years doing maintenance programming for a mainframe OS. Every time a machine crashed, taking a few hundred users off line for several minutes, I got a crash dump, which I had to analyze and fix. Most of the errors were pointer problems in assembly code. When Pascal came out, I thought we were past that. Then came C. I had hope for SafeMesa, but nobody outside PARC used it. I had hope for Modula I/II/III, but DEC went under. I had hope for Ada, but it was considered a complex language back then. Rust finally offers a way out of this hole. Don't fuck up this chance.)


I am still skeptical that "excessive use of unsafe" is actually a thing happening in Rust. Almost all the unsafe I see is for doing FFI (either for interfacing with a library or OS primitives). There's a bunch of it for implementing data structures and stuff, and extremely little unsafe being used "for performance". Off the top of my head nom and regex do this in a few places, and that's about it. Grepping through my cargo cache dir seems to support my assertion; most of the crates there are FFI (the vast majority is FFI) or abstractions like parking_lot/crossbeam/petgraph.

I agree that we should avoid unsafe as much as possible and be sure that unsafe blocks are justifiable (with stringent criteria on justification). I just don't think this is currently a problem in the community.

It's good to be wary though :)


You keep making that claim without backup. Two days ago I posted links to extensive use of "unsafe" in matrix libraries. (Some of that code was clearly transliterated from C. Raw pointers all over the place.) That's entirely for performance; all that code could be safe, at some performance penalty.

I'd suggest using only safe code for whatever matrix/math library gets some traction, and then beating on the optimizer people to optimize out more checks.


I just gave you backup; I grepped my whole .cargo cache dir (both the one used by servo and my global one). You have also made your claim without backup -- you have repeatedly claimed that this is an endemic problem in Rust, with only individual crates (most of them obscure ones) to back it up, and I only usually make my claim in response to claims like yours -- the burden of proof is on you. Anyway, I do provide some more concrete data below, so this isn't something we should argue about.

Matrices fall under the abstraction umbrella IMO. This is precisely what unsafe code is for. However, I totally agree that we should be fixing this in the optimizer, with some caveats. I'm surprised it doesn't get optimized already, for stack-allocated matrices. I'm wary of adding overly specific optimizations, because an optimization is as unsafe as an unsafe block anyway, it just exists at a different point of the pipeline. If there's a general optimization that can make it work I'm all for it (for known-size matrices there should be, I think), but if you have a specific optimization for the use case imo it's just better to use unsafe code.

The raw pointers thing is a problem, but bad crates exist. They don't get used.

I recently did start auditing my cargo cache dir to look for bad usages of unsafe, especially looking for unchecked indexing, since your recent comments -- I wanted to be sure. This is what I have so far: https://gist.github.com/Manishearth/6a9367a7d8772e095629e821...

That's a list of only the crates containing unsafe code in my global cargo cache (this contains most, but not all, of the crates used by servo -- my servo builds use a separate cargo cache for obsolete reasons, but most of those deps make it into the global cache too whenever I work on a servo dep out of tree)

I've removed dupe crates from the list. I have around 600 total crates in my cache dir, these are just the ones containing unsafe code.

Around 70 of these crates use unsafe for FFI. Around 30 are abstractions like crossbeam and rayon and graphs.

I was surprised at the number of crates using unchecked indexing and unchecked utf8. I suspected it would be less than 10, but it's more like 20. Still, not too bad. It's usually one or two instances of this per crate. That's quite manageable IMO. Though you may want to be stricter about this and consider those numbers to be problematic, which I understand.
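(For readers unfamiliar with the pattern being counted here, unchecked indexing looks roughly like this; illustrative only, not taken from any of the audited crates:)

```rust
// Safe indexing: an out-of-range access panics at run time.
fn get_safe(v: &[u8], i: usize) -> u8 {
    v[i]
}

// Unchecked indexing: skips the bounds check for speed. The caller
// must guarantee `i < v.len()`; violating that is undefined behavior.
fn get_fast(v: &[u8], i: usize) -> u8 {
    unsafe { *v.get_unchecked(i) }
}
```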

I bet you're right that many of these crates can have the unchecked indexing or other unsafe code removed (or, the perf penalty is not important anyway). I probably should look into this at some point. Thanks for bringing this to my attention!


I looked at a few.

"itoa" is clearly premature optimization. That uses an old hack appropriate to machines where integer divide was really expensive, like an Arduino-class CPU. It's unlikely to help much on anything with a modern divide unit.

"httparse", "idna", "serde_json", and "inflate" should be made 100% safe - they all take external input, are used in web-facing programs, and are classic attack vectors.

Not much use of number-crunching libraries; that reflects what you do.

I'll look at some more later. How to deal effectively with incoming UTF-8, especially bad UTF-8, may need some thinking.


I maintain two of the crates you called out so here is a bit more detail on the use cases:

"itoa" is code that is copied directly from the Rust core library. Every character of unsafe code is identical to what literally everybody who uses Rust is already running (including people using no_std). Anybody who has printed an integer in Rust has run the same unsafe code. It is some of the most widely used code in Rust. If I had rewritten any of it, even using entirely safe code, it would be astronomically more likely to be wrong than copying the existing code from Rust. The readme contains a link to the exact commit and block of code from which it is copied.

As for premature optimization, nope it was driven by a very standard (across many languages) set of benchmarks: https://github.com/serde-rs/json-benchmark

"serde_json" uses an unsafe assumption that a slice of bytes is valid UTF-8 in two places. This is either for performance or for maintainability, depending on how you look at it. Performance is the more obvious reason but in fact we could get all the same speed just by duplicating most of the code in the crate. We support deserializing JSON from bytes or from a UTF-8 string, and we support serializing JSON to bytes or to a UTF-8 string. Currently these both go through the same code path (dealing with bytes) with an unchecked conversion in two important spots to handle the UTF-8 string case. One of those cases takes advantage of the assumption that if the user gave us a &str, they are guaranteeing it is valid UTF-8. The other case is taking advantage of the knowledge that JSON output generated by us is valid UTF-8 (which is checked along the way as it is produced).
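(The unchecked conversion being described is `str::from_utf8_unchecked`. A simplified sketch of the checked vs. unchecked paths -- not serde_json's actual code:)

```rust
// Checked path: re-validates the bytes, at a cost linear in the input;
// panics here for brevity, a real library would return an error.
fn to_str_checked(bytes: &[u8]) -> &str {
    std::str::from_utf8(bytes).expect("invalid UTF-8")
}

// Unchecked path: the caller guarantees validity, e.g. because the
// bytes came from a &str, or from a serializer that only emits UTF-8.
fn to_str_unchecked(bytes: &[u8]) -> &str {
    unsafe { std::str::from_utf8_unchecked(bytes) }
}
```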

Here again, both of those uses are driven by the benchmarks in the repo above and account for a substantial performance improvement over a checked conversion.


"serde_json" uses an unsafe assumption that a slice of bytes is valid UTF-8 in two places. This is either for performance or for maintainability, depending on how you look at it. Performance is the more obvious reason but in fact we could get all the same speed just by duplicating most of the code in the crate.

Could that be done safely with a generic, instantiated for both types?


Yes, that is what we already do. The two unsafe UTF-8 casts are the two critical spots at opposite edges of the generic abstraction where the instantiation corresponding to UTF-8 string needs to take advantage of the knowledge that something is guaranteed to be valid UTF-8.

What we have is as close as possible to what you suggested.

As I mentioned, we could get rid of the unsafe code in other ways without sacrificing performance. Ultimately it is up to me as a maintainer of serde_json to judge the likelihood and severity of certain types of bugs and make tradeoffs appropriately. There are security-critical bugs we could implement using only safe code, for example if you give us JSON that says {"user": "Animats"} and we deserialize it as {"user": "admin"}. My judgment is that using 100% safe code would increase the likelihood of other types of bugs (not related to UTF-8ness) and the current tradeoff is what makes the most sense for the library.

From another point of view, performance and safety are synonyms in this case, not opposites. If we use 0.1% unsafe code and perform faster than the fastest 100% unsafe C/C++ library (which is what the benchmarks show for many use cases) then people will be inclined to use our 0.1% unsafe library. If we give up unsafe but sacrifice performance, people will be inclined to use the 100% unsafe C/C++ alternatives.


Yeah, this was basically my conclusion too.

I'm somewhat okay with the parsing ones using unsafe if we can be very sure that the unsafe code actually has a performance impact, and be very careful about it. Some of them already do this, but not all.


There's also the tired sysadmin crowd who are tired of rebooting thousands of hosts for kernel, shell, libc, etc. patches. And tired of patching web, mail, dns, etc. servers. I'm sure there are really smart C and/or C++ developers out there who never make mistakes, but I've spent a large part of my career patching/upgrading really smart people's code.

For me, safety is the killer feature in Rust. It's also exciting because it brings systems level programming to a new generation of programmers without all the risk.


> Any time your code takes in untrusted input, it should not be written in an unsafe language.

So basically just about all programs, all of the time?

https://www.owasp.org/index.php/Don't_trust_user_input


I agree, but people seem to feel that their code should somehow be exempt from such advice, and so sacrifice safety for performance. This leads to today's sorry state of affairs.


The problem is that safety doesn't sell. If you're getting a new IoT heat lamp you look at the price, not the firmware's code. To your surprise, the first hacker who comes along toasts your cat.


Rust may ultimately be the better solution for many or most cases, but right now SaferCPlusPlus[1] may be the more expedient solution for existing C/C++ code bases.

> Any time your code takes in untrusted input, it should not be written in an unsafe language.

Not just that, but my theory is that untrusted input should only be stored in data types specifically designed for untrusted input [2], and should undergo safety/sanity checks during conversion to more high-performance types. For example, a general rule might be that untrusted integer inputs may only be converted to (high-performance) native integers if their value is less than the square root of the max integer value.
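(A hedged sketch of that rule, in Rust rather than SaferCPlusPlus; the function name and the exact bound are illustrative. The sqrt-of-max limit ensures that the product of any two validated values still fits the native type.)

```rust
// Hypothetical sketch of the proposed rule: an untrusted i64 is only
// admitted as a plain i32 if its magnitude is below sqrt(i32::MAX)
// (~46340), so a product of two validated values cannot overflow.
fn admit_untrusted(v: i64) -> Option<i32> {
    let limit = (i32::MAX as f64).sqrt() as i64;
    if v.abs() < limit {
        Some(v as i32)
    } else {
        None // reject rather than silently truncate
    }
}
```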

[1] shameless plug: https://github.com/duneroadrunner/SaferCPlusPlus

[2] https://github.com/duneroadrunner/SaferCPlusPlus#quarantined...


I agree with the premise of the article.

However, I feel that Steve Klabnik is trying to dispel the myth that Rust is nothing "but" safety, to shape how other Rust developers talk about Rust, not to deny that Rust's central purpose is being a safe language.

This is because there is a lot of miscommunication about Rust. A lot of people who aren't immediately sold on the language walk away thinking it's slow (it's not), it's complicated (not really), and not production ready (it actually is). And that's because Rust developers don't know how to talk about Rust. I am guilty, for one.

Since Steve is such a huge part of RustLang development, it's his duty to direct the conscious effort to promote the language.

No reason to get into a debate over click-baity titles. :)


The issue with safety is that nothing is really safe. Once you have some level of safety in your programming language, you realize that there are still a lot of other sources of hazard (hardware errors, programming logic errors etc.)

So I guess, it would be better to say that Rust is about decreasing unsafetyness or whatever the correct word for that is.

edit: since I see posts about Go, this is evidently another approach toward decreasing unsafetyness by providing fewer and easier to understand primitives so that the programming logic is harder to write wrong. It might come at a moderate cost for some applications.


> The issue with safety is that nothing is really safe.

There is a trade-off between safety and expressiveness. Clearly you can always shoot yourself in the foot if your language is expressive enough (like any Turing-complete language).

But I think that is beside the point here. This is about eliminating whole classes of errors.

A good type system (e.g Rust's, Haskell's..) can eliminate all type errors from your programs.

A good memory model can eliminate all unsafe memory problems.

There are also languages that can eliminate all data races from your programs.

All these advances in PL theory make it easier and safer to deal with hard problems like concurrency, memory management etc. and thus allow us to focus on what our programs can actually do.


> A good type system (e.g Rust's, Haskell's..) can eliminate all type errors from your programs.

It will eliminate errors related to the use of a given programming language. It will not necessarily avoid systemic errors. The programming language is only one part of the problem. Safety is a wider issue than just the use of a programming language.

Especially since the systems we use are often dynamic with changing requirements.


> It will eliminate errors related to the use of a given programming language. It will not necessarily avoid systemic errors.

Like I said: It will eliminate a specific class of errors, namely all type errors. Your program will literally not compile if there are any type errors.

> The programming language is only one part of the problem. Safety is a wider issue than just the use of a programming language.

Sure, I don't disagree with that statement. But it's important to recognize that eliminating whole classes of errors is extremely valuable and allows us to focus on the important things.


Every type system eliminates all its own type errors by definition. Even the trivial system with one type eliminates all its own type errors (vacuously, since there are zero of them).

There is no universal set of errors called type errors. What counts as a type error depends on your type system. A good type system allows more errors to be encoded as type errors so you can catch them at compile time, but it doesn't mean anything to say that a language like Rust or Haskell eliminates all type errors. There are certainly type systems which could catch more errors.


Sure, not all type systems are created equal. And there are indeed type systems that can catch more errors than Haskell's (although that usually comes at the price of losing type inference).

But I read OP's point as "Well, you can never catch all programming errors with PL_feature_X, so why even bother." And my point is simply that a lot of PL features make formerly hard things easy and thus allow you to go faster and focus on more interesting things.


You completely misunderstood what he was trying to tell you.


There was nothing interesting in what he was trying to say.

Yes, no programming language perfectly eliminates all classes of unsafety. But that's no reason to let the perfect be the enemy of the good! "The issue with safety is..." no issue at all. Being safe in a bunch of problem domains (Rust) is still strictly better than being safe in very few if any of them (C).


So you did it on purpose. That just makes you a bad actor in the conversation.


I am not whoever you imagine you're responding to (rkrzr, I guess)


No, that's you.


More to the point, Rust and other statically typed languages* eliminate type errors at compile-time.

In e.g. Python, the following code:

    foo = Bar()
    foo.baz()
will compile without complaint but, supposing that baz() is not a member of class Bar, will cause errors when the code is actually run. In statically typed languages this will be caught by the compiler and treated as an error; type errors are simply not allowed in compiled programs.

This distinction is significant, as Python and other dynamically-typed languages require comprehensive test suites for any non-trivial software written in them, shifting the burden of ensuring type safety to the programmer. Testing systems for statically typed languages don't need to concern themselves with type safety. Dynamic typing also carries performance penalties at run-time (checking type safety for e.g. every member access).

* Some statically-typed languages (e.g. C++) allow for very specific subversions of type safety at run-time, but it's usually clear to the programmer when they are doing something dangerous.
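(A rough Rust analogue of the Python snippet above; the equivalent mistake never makes it past the compiler. `Bar` and `qux` are illustrative names.)

```rust
struct Bar;

impl Bar {
    fn qux(&self) -> &'static str {
        "ok"
    }
}

fn main() {
    let foo = Bar;
    println!("{}", foo.qux());
    // foo.baz(); // compile error: no method named `baz` found for `Bar`
}
```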


"but it's usually clear to the programmer when they are doing something dangerous"

Um, no. Every pointer is fundamentally unsafe, and a lot of C++ code is written with pointers through and through.


"Every type system eliminates all its own type errors by definition."

Nonsense. C/C++ will happily produce a warning, if you specified -Wall, that you violated a type constraint and then go ahead and compile your program. To suggest that such violations are part of C's type system is to pedantically and willfully miss the point.


> A good type system (e.g Rust's, Haskell's..) can eliminate all type errors from your programs.

Do you consider silently trimming a value during a mandatory explicit type conversion a type error?


No, because if you explicitly converted to another type, that's what you wanted.


Sometimes you want it to be lossy, but not most of the time, and yet there is no choice. I had a bug caused by that, that's why I remember it. Silent explicit type conversions are essentially unsafe.


What do you mean by "silent explicit type conversion"? If you said "silent type conversion" I'd read that as "implicit type conversion". But you said "explicit", which means you've got it very clearly in your code that you're doing a type conversion (to a smaller type), so what's silent about that?


Silent in terms of compiler not complaining.


Why would the compiler complain? Doing a narrowing type conversion is a perfectly legitimate thing to do. So when you ask the compiler to do it, it should.


It forces you to explicitly say that you know you're forcing a value into a narrower type; I think the fact that this might mean loss of information is understood, by definition. What would you like it to do?


I would like it to tell me if I accidentally converted to a narrower type. This is a problem, because I don't necessarily see the type I'm converting from, due to type inference, or I'm simply too far from the context where that type is declared and have to make assumptions to keep going. These assumptions of course fail sometimes and cause bugs. Same problem with precision, by the way. I'm not sure how exactly compilers should fix this; the easiest fix seems to be simply having different operators to explicitly allow lossy conversions, when necessary. But the bigger deal would be to treat numbers as sets of possible values, with solvers or whatever to warn about mistakes where you use unhandled values in the code somewhere.


Rust actually has this, this works:

    fn main() {
        let x = 5i32;
        let y: i64 = x.into();
        println!("{}", y);
    }
this doesn't work:

    fn main() {
        let x = 5i32;
        let y: i16 = x.into();
        println!("{}", y);
    }
you have to write:

    fn main() {
        let x = 5i32;
        let y = x as i16;
        println!("{}", y);
    }
where `as` has the potential of being lossy!
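Concretely, `as` silently truncates when the value doesn't fit, which is exactly the hazard being discussed:

```rust
fn main() {
    let x: i32 = 300;
    let y = x as u8; // lossy: u8 holds 0..=255, so the high bits are dropped
    println!("{}", y); // prints 44 (300 mod 256)
}
```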


I agree that this is where we should be headed; it seems to me that Liquid Haskell, which was submitted to HN recently[1], could actually do what you need, since it uses an SMT solver to check for preconditions.

The casting function could specify the valid input values, and force the programmer to handle the rest of the cases when the input type is wider.

[1] https://news.ycombinator.com/item?id=13125328


> Silent explicit type conversions are essentially unsafe.

This is a contradiction, something cannot be both silent and explicit.


Sounds like an implicit conversion.


In Rust, lossy conversions only occur if you explicitly write `var as type`, and even that syntax is limited to certain types, e.g. you can't coerce an integer to a function. In order to do something crazy like that, you'd need to call the unsafe `mem::transmute` function. The language cannot be much safer in this regard short of disallowing any sort of type conversion.


"Clearly you can always shoot yourself in the foot if your language is expressive enough (like any Turing-complete language)."

This is a common misunderstanding of being Turing complete. A program running in an interpreter isn't unsafe just because the programming language and the interpreter are Turing complete. Being Turing complete doesn't mean being able to gain root access, overwrite the boot block, scribble on the hard drive, etc.


> A good type system (e.g Rust's, Haskell's..) can eliminate all type errors from your programs.

It depends what you call a "type error". Is calling `car` on a `nil` instead of a `cons` a type error?


In Common Lisp, 'nil is of type 'null, which is a subtype of 'list, which is a union of the types 'null and 'cons, so it wouldn't be an error. Other lisps might choose to do it differently.


It is in Rust, although I'm sure you can come up with something where the type system won't save you.


True, they messed up in their PR a bit with bold claims about safety. It definitely would be better to be careful with the words they use.

Like this "safe concurrency" claim sounds really fearless to me, even though I know they mean some guarantees towards thread safety and all that, not actual safe concurrency.


The docs are very clear about what safety means, but agreed that the subtleties can get lost in advertising. https://doc.rust-lang.org/book/unsafe.html#what-does-safe-me...


Yes, for instance, it's easy to create concurrent programs that are semantically wrong (in other words inadequate for use) albeit correct in terms of "types" because the coder made an erroneous assumption about determinism somewhere. The type systems that we see nowadays do not help with that.


> Rust is about decreasing unsafetyness or whatever the correct word for that is.

I think the word you're looking for is "safety". Safety is inherently relative and mostly about risk management. There's no such thing as 100% safe by definition.


"unsafety" rather ? ;) decreasing non-safety is not the same as increasing safety. One starts with the assumption that things are safe. The other does not.


Yeah, I'm a bit worried that Rust is raising the floor, but maybe lowering/hardening the ceiling when it comes to code safety. I mean, if you consider static (compile-time) versus dynamic (run-time) safety, Rust leans heavily toward the former, and presumably gains a performance benefit because of it. But Rust acknowledges that it is not practical to achieve memory safety completely statically and so provides dynamically checked data types as well (vectors, RefCell, etc.).

As you consider higher (application) level notions of safety, it generally becomes less practical to achieve that safety statically (at compile-time), so you'd want your language or your framework or whatever to facilitate the implementation and performance optimization of dynamic (run-time) safety. At the moment I'm thinking about automatic injection of run-time asserts (of application level invariants) at appropriate places in the code. (At the start and maybe at the end of public member functions for example.)

If you subscribe to this idea, then it sort of follows that Rust's borrow checker may be "in the wrong place". That is, rather than forcing you to write code that is memory safe in a particular statically verifiable way, Rust could have instead enforced memory safety by injecting run-time checks into the code and optimizing them out when it recognizes code that appeases the borrow checker. (Of course the optimizer could report what run-time checks it was not able to optimize out, if you wanted to self-impose static verification.)

(Statically optimized) dynamic safety is more scalable than statically verified safety. As a "systems language", Rust may be less concerned with higher/application level safety. But I think this might be a little short-sighted. The definition of "system" is expanding, and the proportion of "higher level" safety concerns along with it.
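(For what it's worth, Rust's bounds checks already work roughly the way described: the check is injected at run time and the optimizer elides it when the index is provably in range, while iterators avoid it entirely. A small sketch:)

```rust
// Indexing carries an implicit run-time bounds check; in a loop like
// this, LLVM can usually prove `i < v.len()` and remove the check.
fn sum_indexed(v: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..v.len() {
        total += v[i];
    }
    total
}

// Iterator style has no index at all, so there is no check to elide.
fn sum_iter(v: &[u64]) -> u64 {
    v.iter().sum()
}
```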


> If you subscribe to this idea, then it sort of follows that Rust's borrow checker may be "in the wrong place". That is, rather than forcing you to write code that is memory safe in a particular statically verifiable way, Rust could have instead enforced memory safety by injecting run-time checks into the code and optimizing them out when it recognizes code that appeases the borrow checker.

That kind of lack of transparency about what in the hell your code is doing at runtime is really inappropriate in a systems language.

It's an interesting idea, and it would be neat to play with in a language that wanted to restrict itself to more business-logic level safety concerns, but it would absolutely come at the cost of not being appropriate for systems-level tasks.


Hmm. Is it less transparent than vectors which use implicit run-time bounds-checks? Don't RefCells use implicit run-time checks? (Btw I don't know Rust very well, so feel free to correct me.) And what about the question mark operator for dealing with exceptions/errors? Isn't there a lot going on under the hood there?

But yeah, I can understand the sentiment of wanting to minimize that kind of thing in a lot of cases. Perhaps, like C and C++, Rust might consider bifurcating into a "high transparency" language, and a "high productivity" superset of the language. In that case, would all of the existing Rust language make it into the "high transparency" subset?

Like I said, the problem is that no one's defining what a "system" is. Haven't they written a browser rendering engine in Rust? Is that a "system"? Is there any part of the browser that does not qualify as a system?


RefCell does use run-time checks; that's its entire reason for existing. They're "implicit" in the sense that they're inside of the functions, but that's the job of calling the function, so.

Question mark is roughly six lines of code, it's a match statement on Result, which has two cases.


So do you agree that Rust should remain a "high transparency" language? Do you have an opinion on a "high productivity"/"application level safety supporting" superset of the language? Rust seems to be creeping out of the system space and into the application space. Will/should Rust go out of its way to support application level programming? (By, among other things, facilitating enforcement of application level invariants?)

edit: Or is that just looking too far ahead?


To me, this is a library concern. Rust the language should remain high transparency, but that doesn't mean that programming in it should force you to always deal with every single thing if you don't want to. Look at the recent addition of ?, for example: you can get very succinct code if you don't want to deal with the particulars, but you still have access if you want to. I think good Rust libraries will end up like that.


Exactly. There is always the danger of self-hypnosis, by repeating 'memory safety means safety, full stop' too often.


In the context of Rust, "safety" usually means "memory and data-race safety".


Yes! Rust adds a way to manage it, in a two-tier system. There is `unsafe` marked code blocks and code without. The trusted code base has to be in the part marked `unsafe`.

It's simple (only the two tiers), but it is another tool for abstraction and managing complexity.


When the compiler doesn't let you write race conditions or unintended variable mutation, it's a huge thing, not just "decreasing unsafetyness". Although I hope Rust will also get rid of array bounds errors.


If you're a C++ programmer, Rust is mostly about memory safety.

If you're a Java programmer, Rust is mostly about tighter resource usage.

If you're a Python programmer, Rust is mostly about type safety and speed.


I'm a polyglot programmer and for me Rust is mostly about the awesome abstractions and the great community.


> If you're a Javascript programmer, Rust is mostly about the awesome abstractions and the great community.

;)


If you are a C++ programmer, Rust is also a lot about developer ergonomics. The syntax is nicer, the build system is built in, there's less UB to think about, package management comes built in, and finally getting rid of those damn header files is such a relief.


I do not mean to pick on C++: the same problems plague C, Ada, Alef, Pascal, Mesa, PL/I, Algol, Forth, Fortran ... show me a language with manual memory management and threading, and I will show you an engineering tragedy waiting to happen.

I think if programming is to make progress as a field, then we need to develop a methodology for figuring out how to quantify the cost-benefit trade-offs around "engineering tragedies waiting to happen." The fact that we have all of these endless debates that resemble arguments about religion shows that we are missing some key processes and pieces of knowledge as a field. Instead of developing those, we still get enamored of nifty ideas. That's because we can't gather data and have productive discussions around costs.

There are significant emergent costs encountered when "programming in the large." A lot of these seem to be anti-synergistic with powerful language features and "nifty ideas." How do we quantify this? There are significant institutional risks encountered when maintaining applications over time spans longer than several years. There are hard to quantify costs associated with frequent short delays and lags in tools. There are difficult to quantify costs associated with the fragility of development environment setups. In my experience most of the cost of software development is embodied in these myriad "nickel and dime" packets, and that much of the religious-war arguing about programming languages is actually about those costs.

(For the record, I think Rust has a bunch of nifty ideas. I think they're going down the right track.)


> A lot of these seem to be anti-synergistic with powerful language features and "nifty ideas."

I think this is a pretty big myth that only applies to some of these language features.


> I think this is a pretty big myth that only applies to some of these language features.

If you admit that it applies to some language features, then it's not a myth by definition.

In Smalltalk, #doesNotUnderstand: handlers and proxies and unfortunate "clever" use of message sends synthesized within custom primitives could result in outsized costs. (It's where method_missing comes from in Ruby.) It's not that you couldn't do powerful and useful things with those facilities. It's that large projects that were around for years tended to accumulate "clever" hacks from bright young developers with a little too much hubris. Often, those costs would be incurred years after the code was written.

Yes, it only applies to some language features. But it clearly does apply to some of them. I don't think it's easy to come by quantified costs for these. Doesn't this strike you as a problem for our field?


The original Rust author makes great points about safety. I think this new marketing thrust emerges from the Rust 2017 Roadmap, which puts Rust usage in industry as one of its major goals. Currently Rust is about Go's age but nowhere close in usage. As the roadmap says, "Production use measures our design success; it's the ultimate reality check." I agree with that.


> Currently Rust is about Go's age but nowhere close in usage.

Rust 1.0 was released in 2015, so it's merely one and a half years old, while Go 1.0 was released in March 2012.

If you count the inception period of Rust (pre-1.0), you should also count Ken Thompson's and Rob Pike's work on Plan 9, which doesn't make any more sense …

Fun fact: Go's first commit is 44 years old [1] ;)

[1] https://github.com/golang/go/commit/7d7c6a97f815e9279d08cfae...


It is not my intention to show Rust in a bad light. I roughly mean to say that both languages have had about 6-7 years of engineering effort put into them by now, but usage differs by an order of magnitude or so.

I agree that they had very different priorities in beginning and it changes with time. My goal was to merely point out core rust people in Mozilla and elsewhere now recognize that industry usage is an area of high importance in coming months/years in contrast to purely technical concerns of past.


I think you're not looking at this correctly. Swift also had a very fast-paced release cycle, like Go.

Rust took a different path: until the 1.0 release the developers basically said, use at your own risk; we reserve the right to change anything and everything and break it all. This freed them from trying to keep the language backward compatible.

After the 1.0 release, there have been nearly no breaking changes introduced to the language, and they have signaled that they want to keep this stability going into the future. This is a big difference from Go which decided to go for an earlier public release, and now is much more constrained on how it can change (if they don't want to break all the stuff built on it out there).

So it's not fair to count the full 6-7 year development cycle, as the pre-1.0 years are better thought of as a research period, one that laid the groundwork for the safety-in-everything approach that is the basis of Rust now.


I said 6-7 years of engineering effort, not development cycle; the two don't always line up. I am not blaming them for taking long to get things right. If the authors think they need more time, then of course they need more time. Right now they really want broader industry usage, and this can't be any clearer when they say:

"Production use measures our design success; it's the ultimate reality check. Rust takes a unique stance on a number of tradeoffs, which we believe to position it well for writing fast and reliable software. The real test of those beliefs is people using Rust to build large, production systems, on which they're betting time and money."


Well, Go is Limbo with some Oberon-2 touches.


I thought it was supposed to be Oberon-2 with some Limbo, C, etc touches. That's part of how it becomes my slam dunk against C in another discussion. ;)


If you read the Inferno programming guide, you will see how much the languages resemble themselves.

Major differences are that Limbo uses a VM based runtime with dynamic loading and Abstract Data Types.

But your approach is also good, still Oberon had some issues that were eventually improved in Oberon-2 that Go lacks.

On the other hand Oberon-07 is even more bare bones than Go.


I only glanced at Limbo. I'll have to take a more detailed look at it. Inferno was definitely interesting. It was even ported to Android to replace the Java runtime by people at Sandia.


Most languages resemble themselves very strongly.


You lost me there, regarding Go vs Limbo.


Dumb joke. "resemble each other" is a more unambiguous way to say what you meant.


It might be Go’s age overall from initial inception, but a typical point at which people start paying attention is the 1.0 release. For Go this was March 2012; for Rust it was May 2015, over three years later.


To be honest, Rust took so long to stop making breaking changes and stabilize that I sort of tuned out on it -- it never seemed to be at a point where it made sense to start using it.

Has Rust actually settled down on some stability?


A little over 18 months ago, 1.0 was released. We've had very strong compatibility guarantees since then.


While that is true, in Go, almost no libraries are written to use non-stable features. This is not the case in Rust.


There are really only two popular Rust libraries that use unstable features:

1. serde, the best serialization/deserialization library. This works on stable now using the `serde_codegen` crate and a custom `build.rs` script. This will Just Work on stable with no extra setup once Macros 1.1 lands, theoretically in about 5 weeks. But I'm using it on stable now in a half-dozen projects, thanks to a `build.rs` script.

2. Diesel, the high-level ORM library for Rust. This works on stable using a `build.rs` script, and 90% of it will work on stable without the `build.rs` script once Macros 1.1 ships.

There are a few other experimental libraries like Rocket (which looks very slick) that only work on nightly Rust. But I don't think any of them are particularly popular yet.

Personally, I maintain something like two dozen Rust crates and applications, and only two use nightly Rust. Both need Macros 1.1, which should be on stable in about 5 weeks.


serde works on stable without serde_codegen or custom build scripts, too. It works like a regular library in fact, just with fewer features (no code generation). A custom data structure might not be able to use the default code generation for its impls anyway.


This is true of diesel as well: http://diesel.rs/guides/getting-started/


Right. Go and Rust have completely different development models, so that wouldn't make any sense.

To recap, in Rust, to make additions to the language:

1. Small additions mean make a PR.

2. Big additions mean make an RFC, then a PR if accepted.

3. These PRs go behind a feature flag that lets us track the feature, and only allows it on nightly.

4. People who desire the new feature try it out. (This is what you refer to.)

5. If any issues are found, they're fixed, or, in the worst case, the feature is removed.

6. The feature is made available to stable users, the feature flag is removed, and backwards incompatible changes can no longer happen.

What would be un-healthy is if everyone had to rely on nightly for everything. At the moment, most people use stable Rust. And of the people that use nightly, the largest single feature will be stable as of the next stable release for Rust. But some people are always going to be using nightly features, and that's a good thing: it means that by the time it hits stable, it's already undergone some degree of vetting.


Any idea when custom allocator will become stable? That's the only thing holding us to nightly.


I literally had a conversation about this yesterday. We need someone to champion the work. If that's you, we should get in touch.


email sent


Excellent! It might take me a day or two; I have some stuff to look up in order to give you a proper response.


> Most people use stable Rust.

However, many popular or important libraries like Serde or Rocket require the use of nightly. I recall an article on the front page a very short while ago noting how Rust has effectively diverged into two languages, stable and nightly.


That article was unanimously called FUD by the Rust user community. Nightly is for innovation and experimentation, not for real use:

- Rocket is an amazing example of innovation, but it's currently just an experiment.

- Serde has worked on stable for a long time, but they experimented with a more ergonomic version on nightly. They iterated on it with the Rust developers to create a new, well-thought-out stable API which will land in the next version of Rust (Macros 1.1 in February).

Rocket is likely to follow Serde's path during 2017 and will eventually work on stable in 2018. Building a great language takes time ;)


Serde does not require nightly, though it is nicer to use on nightly. That's the "will be stable as of the next stable release for Rust" I alluded to above.

Rocket just came out; I think it's an extremely interesting library, but https://crates.io/crates/rocket shows that it's been downloaded 618 times. It hasn't exactly taken the world by storm yet. I think it shows great potential though! But it's not a good case of showing that the Rust ecosystem largely depends on nightly.

The article you refer to contained a number of factual errors.


Over the last year of intermittently dossing around with Rust, I encountered the need to use nightly regularly, with various crates refusing to compile (and usually I stopped my experimentation there, as my barrier to entry was anything harder than pacman -S rust cargo). Yet I did not know until just now that Serde is moving off its dependency on nightly, and it is encouraging that this is a trend with similar libraries; I stand corrected.


Serde creator here. The core library 'serde' never actually required nightly. It was just the helper library 'serde_derive', which can automatically implement the serde traits, that needed nightly. dtolnay has been doing a wonderful job getting us off nightly. We can't wait to finally get rid of using it.


If you happen to remember which libraries those were, I'd be interested in hearing about them!

> my barrier to entry was anything harder than pacman -S rust cargo

Today, that also wouldn't be the case: "rustup update nightly" and "rustup run nightly cargo build" or whatever, with extra stuff to automatically switch per directory.


We've been using Rust on production since the summer and we only use the stable compiler.

These "popular or important libraries" are nice showcases of what you might be able to do with Rust in the future. But relying on them right now and using them in production is not really a good idea.


That article posted half a week ago [1] claiming Rust developers and libraries rely on nightly was FUD. It's terrible that this is being repeated, because it is simply not true.

[1] https://news.ycombinator.com/item?id=13251729


Er, there are no unstable features in Go... the language is deliberately frozen.


What constitutes a strong compatibility guarantee?

How many breaking changes happened in the past 18 months?



> Currently Rust is about Go's age but nowhere close in usage.

However, Go has been suitable for production projects for several years longer than Rust.

Rust sits in a very useful niche not served by other languages, and in steady state will probably be more popular than Go.

Go has a very well designed ecosystem. I like it, use it, and am very impressed with everything about it I have seen. However, I don't see use cases that Go serves vastly better than other available alternatives.


Go has one very significant advantage over Rust: Simplicity. Reading someone else's Go code is such a breath of fresh air compared to reading someone else's C++ code (or god forbid, Haskell). Having a standard formatting further enhances this. I feel like this aspect of Go is not given enough weight. It's a huge benefit to large software projects and large software companies. I also suspect that this may be why the Go team has been so conservative re: generics: If they aren't implemented carefully, they could unacceptably complicate the language.

Rust is a powerful language, but with that power comes the ability to write "fancy" code -- esoteric, unreadable, and unmaintainable.


In my experience working with new and experienced Go developers that simplicity is pretty superficial.

Take a look at SliceTricks[0]. These are all pretty simple operations in other languages that the Go authors think users should be forced to write manually so they understand the costs. Reading these in code reviews is definitely not a breath of fresh air.

I also see Go users develop bad habits that will burn them in other languages, like returning a reference to a "stack" allocated variable. In the same vein, if you care about performance you have to internalize the escape analysis rules, which most new users won't know about.
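For instance, this is idiomatic Go (a tiny sketch with a made-up newCounter function), but the identical shape is a dangling-pointer bug in C:

```go
package main

import "fmt"

// newCounter is a hypothetical example: it appears to return a
// pointer to a stack variable. Go's escape analysis notices the
// escape and heap-allocates p, so this is safe -- but the habit
// it trains produces undefined behavior in C.
func newCounter() *int {
	p := 0
	return &p
}

func main() {
	c := newCounter()
	*c++
	fmt.Println(*c) // 1
}
```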

There are tons of footguns[1] in Golang as well. I constantly see people get burned by issues at runtime. I truly believe that the Golang benefits are short-sighted. You get some upfront development gains that you end up paying for with less reliable software. I personally would rather pay that cost upfront where possible, with the compiler telling me when something is wrong.

[0] https://github.com/golang/go/wiki/SliceTricks [1] http://devs.cloudimmunity.com/gotchas-and-common-mistakes-in...


Go is just the current incarnation of the so-called New Jersey approach.

> I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing.


> re: generics: If they aren't implemented carefully, they could unacceptably complicate the language.

Can you explain why?

For example, simple generics, which could be implemented with little more than a regex-style pre-processor, don't seem too complicated? Am I missing something?

For example (in Java), an ArrayList <String> could be turned into a StringArrayList by a basic internal pre-processor.
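Sketching the same idea in Go terms (with a hypothetical StringStack type, purely illustrative): such a pre-processor would stamp out one concrete copy of a container per element type, which is exactly the code Go users currently write by hand:

```go
package main

import "fmt"

// StringStack is what a hypothetical pre-processor might emit for
// Stack<string>: the generic parameter T is textually replaced by
// string, yielding a fully monomorphic, statically checked type.
type StringStack struct{ items []string }

func (s *StringStack) Push(v string) { s.items = append(s.items, v) }

func (s *StringStack) Pop() string {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	var s StringStack
	s.Push("a")
	s.Push("b")
	fmt.Println(s.Pop()) // b
}
```

Every instantiation is a distinct type under this scheme, which is where error messages, separate compilation, and dynamic linking start needing real design work beyond the textual substitution.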


How do you handle error messages? Can you make functions, methods or their parameters generic? Can you dynamically link generics?


> Currently Rust is about Go's age but nowhere close in usage.

Citation? I see a lot of people talking about both, but not very many public projects in either. Rust at least has a "killer app" on the way in the form of Servo, whereas I haven't heard of any user-facing programs in Go.


FWIW, Go and Rust are 16 and 43, respectively on the TIOBE index:

http://www.tiobe.com/tiobe-index/

A Github search turned up 2,658 Go repositories with more than 100 stars:

https://github.com/search?l=go&q=stars%3A%3E100&type=Reposit...

compared to 348 Rust repositories:

https://github.com/search?l=rust&q=stars%3A%3E100&type=Repos...

Notably, Docker has more stars than Go itself.

Edit: you may also be interested in IEEE Spectrum's interactive list of the top programming languages:

http://spectrum.ieee.org/static/interactive-the-top-programm...

With the default parameters, Go and Rust are 10 and 26, respectively.


It's also worth noting that you're a bit less likely to see rust users because the kind of software that wants rust instead of golang tends to be the kind that gets written by a large company with deep pockets and a preference for closed source repositories.


If only Go was useful in building closed source products. Google must not have gotten the memo about Rust yet.


Contrary to popular legend, Google is not particularly strong on Go use, and it's not "THE official language".

It wasn't officially commissioned or officially adopted by Google to solve Google's coding problems as some believe.

It was merely initiated by a small team in Google, as their proposal for solving Google-scale coding problems. And has never been mandatory for new Google projects etc.

Go is ONE of the allowed languages, from what I know, but tons of stuff is written in Java, C++ and Python with no intentions of switching.

All those years, only a few use examples for Go, basically trivial with respect to Google's needs, have come out of Google-land (a proxy/balancer for MySQL used in YouTube, Google Downloads caching, etc).


Why would they replace perfectly working code with something in Go? Java is 15 years older than Go, so obviously a lot more code would be in it. A more interesting case is Dart: despite having 'official' Google support, its industry-wide usage is rather tepid compared to Go.


Some Googler claimed that Google currently has single-digit MLOC of Go, and that it's growing. But they have way more C++ and Java code.


Go: https://github.com/golang/go/wiki/GoUsers

Rust: https://www.rust-lang.org/en-US/friends.html

Without counting precisely, Go usage looks roughly 10 times that of Rust.


Docker and Kubernetes are relatively popular, and both are written in Go.


Go is the language in the LXC, Docker, containerisation space.


Which is something I never understood. Since you are mainly wrapping OS API, why not take a higher level langage ?


For all the benefits of higher-level languages. Remember that the designers of C had to agree on every feature that went into Go: a language they co-designed for better programming experience & results. Wirth-style languages also compile ultra-fast for quick iterations.


My question is really why not choose something really higher level ? Go is like stuck in the middle. If I need very low level, I'd choose rust. If I need high level, I'd choose Python. Go is kinda filling a weird niche between the two and I rarely find a case where I feel like it's the best choice.


What language you have in mind?


Anything that is robust and dynamic, with a big ecosystem. Python, Ruby, what have you. You don't need raw speed since the OS API does most of the work, and the ease of programming would make it more productive. In the end, they added Python anyway for stuff like "compose", so I'm missing the point of using Go for this. The Go code is not even network bound, and the ability to go multicore easily is not a big advantage here since you spawn a new process anyway, so really, why?


Go has much lower memory usage, high throughput, std ssl/tls/http libraries, high performance GC and produce fully static binaries. I do not think Python/Ruby can provide required performance for containers, cluster scheduling, orchestration etc. Even Java with much higher perf compared to Python/Ruby is not suitable because of high memory usage.


> lower memory usage, high throughput [...] high performance GC

Irrelevant, as most resources will be consumed by the underlying OS API.

> std ssl/tls/http libraries

So do Python and Ruby. But even if they didn't, you could provide a package anyway.

> produce fully static binaries

You can do that with Python and Nuitka. But there is no need for it, since Docker is provided as an msi/deb/whatever that takes care of distribution.

> high memory usage.

On your xGb RAM server, the memory usage of your container is the least of your problem. Your DB will dwarf it, your app will dwarf it. Anything you put in your containers will take 100 times more memory.


OCaml is my first thought in that space.


OCaml is a good choice, and Anil Madhavapeddy used it for his unikernel ideas. But OCaml doesn't have strong backing any more. Jane Street alone isn't enough.


InfluxDB and Prometheus are written in Go. Similarly, etcd.


Docker is go, and afaik has always been.

https://github.com/docker/docker


Rust code is also going into Firefox, slowly for now, but it will speed up over 2017.


Part of the issue stopping me from jumping in is that it feels like the language is still changing in ways large enough to make it difficult to learn. That may not be true anymore, but it seems like it would take a lot of work to keep up with the current 'best practices'.


It's true that idioms are still developing; there are tools like clippy to help you learn them, though.


> As the roadmap says "Production use measures our design success; it's the ultimate reality check." I agree with that.

I don't. It's a measure of the overall success. Design is but a small part of that. Community, outreach, corporate back up… play a huge part in the success of a language.

Go is a wonderful example: doesn't even provide parametric polymorphism (generics), and they got away with that! Feels like Google backup matters more than the core language here. Either that, or someone please explain why omitting generics today is not a big mistake. Feels like dynamic scope all over again.


Think about who is picking up which language and how many programmers with what background there are. Go offers solutions for certain problems that many programmers have, i.e. performance and ease of deployment are the biggest ones for people coming from scripting languages, and there are a lot of them. Having easy-to-pick-up syntax doesn't hurt either. But generics are not as valuable for them at the beginning. And Go's syntax has like a dozen critical problems either way; might as well add generics in 2.0 together with all the fixes.


Google backing really applies more to Dart than to Go. Dart has many more Google contributors vs non-Google ones. The Dart Dev Summit seems to be totally sponsored by Google and even had free entry, whereas GopherCon is independent of Google.

Though Dart has generics, it hasn't seen much usage in industry.


> Go is a wonderful example: doesn't even provide parametric polymorphism (generics), and they got away with that!

Only because one can opt out of the type system and use the catch-all "interface{}". Actually, even the std lib does that ... a lot.


> As the roadmap says "Production use measures our design success; it's the ultimate reality check." I agree with that.

C (and later C++) became popular because Unix was successful. Safe systems programming and safe browsers are nice to have but not completely safe if the underlying OS is unsafe (Windows in particular). Rust's "killer app" would be a safe OS. The first attempt (Redox) is already there.


That's what I'm getting from the safe features emphasis and the fiercely C/C++ competitive slant argued by the evangelists in this thread.

I was looking through the pointer safety portion of the safety brochure: https://doc.rust-lang.org/book/unsafe.html#what-does-safe-me... and wondering how to approach unwrapping an ethernet packet on the wire in the easy unsigned pointer-specific way it can be done in C and came across a pcap implementation in Rust (sample source): https://github.com/ebfull/pcap/blob/master/src/raw.rs Are you serious? Rust is not for me, thanks.

