
I'm a lowly ancient Java programmer, and I think Rust is about far, far more than safety.

In my opinion Rust is about doing things right. It may have been about safety at first but I think it is more than that given the work of the community.

Yes, I know there is the right tool for the right job and it is impossible to cover every use case, but IMO Rust is striving for iPhone-like usability.

I have never seen a more disciplined and balanced community approach to creating a PL. Everything seems to be carefully thought out and iterated on. There is a lot to be said for this (although, ironically, I suppose one could call that safe)!

A PL is more than the language. It is the works around it, the community, and the mindshare.

If Rust were only concerned with safety, I don't think so much work would be done on making it consumable for all, with continuous improvements to compiler error messages, easier syntax, and improved documentation.

Rust is one of the first languages in a long time that makes you think different.

If it is just safety... safety is one overloaded word.




I think this comment and the OP are both correct. The point of Rust is safety, in the sense that memory safety should be the default for all programming languages, and has been the default for all non-systems languages since the 90s. The only holdouts have been because people used to claim that memory safety wasn't possible without making a language unusably slow, which Rust has disproven. In all my years of teaching Rust, I can't count how many times I've told people that you can have a memory-safe systems language without garbage collection and have them look me in the eye and say, "what, no, that's supposed to be impossible", and I still get a kick out of it every time.
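To make that concrete, here's a minimal sketch of safety without a GC (my own toy example, not anything from a spec): ownership determines at compile time when memory is freed, and misuse is a compile error rather than a runtime crash.

    fn main() {
        let s = String::from("hello");
        let t = s; // ownership moves to `t`; no copy, no reference counting

        // println!("{}", s); // error[E0382]: use of moved value: `s`

        println!("{}", t);
    } // `t` goes out of scope; the heap memory is freed here, deterministically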

The importance of Rust is that it's raised the baseline for low-level languages in the modern age. If any future systems languages emerge that don't feature memory safety, it will have to be a deliberate choice that must be defended rather than just an implicit assumption of how the world works.


I really want to love Rust, and while I understand the Rust borrow checker in theory, actually using it in practice has been a major headache. I tried Rust on a simple terminal-based project, and after a week of feeling that I was getting nowhere I switched to Go and had a proof of concept in several hours.

With that said, can you recommend a good source to really understand best practices and patterns for ownership and borrowing? I feel that's the biggest hurdle to using Rust (at least in my case).


Alas, I think a week is too short to give Rust. Go will get you more pay-off in the short term, but as you internalise the rules of Rust you'll really start to reap the benefits. Unfortunately being an experienced user, I'm not aware of a single online source for learning this stuff - I mainly teach people directly in person or on IRC.


Agreed. It took me about two months to really understand Rust, and I was coming from a background of languages like C++ and Scala. It pays off, though, for what I want to do with computers.

The rise of "it demos well so we should use it" is in a lot of ways troubling. The inflection point for productivity doesn't need to be "five minutes in" to be worthwhile if you're doing something that is, itself, worthwhile.


> Unfortunately being an experienced user, I'm not aware of a single online source for learning this stuff - I mainly teach people directly in person or on IRC.

How about http://rust-lang.github.io/book/ ?


Mind sharing the source code? I can whip out a Rust version in the same amount of time. I have been using Rust for two years. I never worry about the borrow checker as it never gives me problems.


If you know Swift/ObjC pretty well, with its ARC reference-counting memory management, do you pick up the borrow checker faster?


AFAIK Swift doesn't really have references or anything to enforce unique ownership. If you've ever used a language with pointers before, including the simple pointers that Go has, then that's a good start. If you further understand how escape analysis works in Go (or the various escaping annotations in Swift), then imagine a language where every variable must never escape, and where this is enforced by the compiler.
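To make the "must never escape" point concrete, here is a minimal sketch that the compiler deliberately rejects:

    fn main() {
        let r;
        {
            let x = 5;
            r = &x; // error[E0597]: `x` does not live long enough
        }
        // The borrow held in `r` would escape the scope of `x`:
        println!("{}", r);
    }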

Mostly I think that the fear of the borrow checker has become more meme than truth at this point. The difficulty with Rust is that it combines things found in different languages, so no matter who you are you probably have to learn something: a strong type system, unique ownership, and pointers. If you're coming from languages like Python and JavaScript, you may know none of this! But that's what we're here for (that's where I came from!), and we like to help. :)


Seeing that people keep testifying to the fact that the borrow checker gives them problems, I wouldn't say it's just a meme. It's easy and tempting to imagine problem points disappearing over time, but realistically, some portion of people will always run into it, because it's unique and fundamentally a bit complex. It's better to spread the message that running into borrowing problems is a common, real, yet temporary hangup that can be overcome.


I would say that this is a common message, to a degree. What I commonly see people expressing about Rust is that it took them a while to internalize the borrow checker, possibly a few weeks, and where before that point it was painful to work with to some degree, afterwards it was a boon as it helped them more quickly spot problems and forced them to consider the problem more closely before committing to code that might need to be scrapped.

As someone who has yet to do more than do minimal dabbling in the language, this is a very positive message. It expresses that there is some work to learning this so don't be put off if you experience it, it's normal, and that the work required to learn it pays off in the end.

That's probably a more appropriate message than that it's not hard. I don't think it's appropriate to express to people that learning pointers in C and C++ is "easy". It's not "hard", but it's not necessarily easy for some people. It requires a specific mental model, and depending on how they learned to program, it may be more or less easy for them to wrap their head around. Afterwards, it's easy and makes sense. I assume the borrow checker follows a similar learning hurdle. That doesn't mean we should forget what it's like before we've learned it though (and at this point, there's probably a lot of people in the midst of learning rust that haven't quite fully internalized the borrow checker).


I'm inclined to agree. It took me a week or two of daily usage to finally "get used" to the borrow checker. It's a bit of a paradigm shift, to be sure, but it's not insurmountable. In fact, I think learning Rust has made even my C code better, because now I'm in the habit of thinking thoroughly about ownership and the like, which is something that I did to an extent before (because you have to to write robust software without a GC), but it was never explicit and I never would've been able to articulate the rules like I can now.

That said, I still occasionally have problems where I feel like I'm doing something "dirty" or "hacky" just to satisfy the borrow checker. It's easy to program yourself into a corner and then find yourself calling `clone()` (the situation, I've been told, has gotten much better in recent releases with improvements to the borrow checker, but alas I haven't had a chance to play much with Rust in nearly a year).
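A minimal sketch of that kind of corner (a toy example of my own, not from a real codebase):

    fn main() {
        let mut names = vec![String::from("a"), String::from("b")];

        // Holding a borrow of an element while mutating the vector is rejected:
        // let first = &names[0];
        // names.push(first.clone()); // error[E0502]: cannot borrow `names` as
        //                            // mutable because it is also borrowed as immutable

        // The "dirty" fix: clone eagerly so no borrow is outstanding.
        let first = names[0].clone();
        names.push(first);
    }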

Another thing that I still find difficult is dealing with multiple lifetimes in structs, to the point that I usually just say "to hell with it" and wrap everything in an `Rc<T>`. And sometimes there's simply no safe way (afaict) to do some mutation that I want to do without risking a panic at runtime (typically involving a mutable borrow and a recursive call), which leads to a deep re-thinking of some algorithm I'm trying to implement. That's not Rust's fault, though—it's a real, theoretical problem that arises in the face of mutation. In time, I'm sure there will be well-understood patterns for handling such cases.
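For what it's worth, here is a minimal sketch (my own) of that escape hatch and the runtime-panic risk: `RefCell` defers the borrow check to runtime, so overlapping mutable borrows panic instead of failing to compile.

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        let shared = Rc::new(RefCell::new(vec![1, 2, 3]));

        let first = shared.borrow_mut(); // runtime-tracked mutable borrow
        // let second = shared.borrow_mut(); // would panic: already mutably borrowed
        drop(first); // end the first borrow explicitly

        shared.borrow_mut().push(4); // fine: the borrows never overlap
    }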


I absolutely agree.

If you look at Rust from a systems programmer perspective and compare it with the systems languages OP lists then, yes, safety is THE most radical feature.

But Rust can compete on so many more levels: web services and user-facing applications, for example. Languages competing in that space usually bring memory safety, so it's kind of a non-issue. Safety enables Rust to be a viable choice for these tasks, but it needs more than that to be on par with the other languages. And Rust's got plenty of things going for it, so there's nothing wrong with no longer playing the safety card (since that is expected anyway) and instead painting Rust as a language that is actually fun to work with.


> But Rust can compete on so many more levels

How? There are languages with more expressive, higher-level type systems (Haskell/OCaml presumably). There are languages with much more mature libraries, ecosystems, and tooling (C#/Java). There are languages with both (F#/Scala).

What is it that makes Rust a good applications programming language? You said it yourself: GC doesn't really matter that much in this space, GC-based languages are just more elegant, not to mention the tooling is way more mature. The runtime also doesn't matter that much, and with the recent changes to .NET it can be avoided anyway.

This Rust fanboyism is turning into the new Node hype: "use Node for everything, Node is web-scale fast because it uses event-loop IO instead of thread-based IO" becomes "use Rust for everything because the type safety is literally the best and it's the only language with a strong type system out there". I get it, it's new and shiny, and I like it too. It has a strong argument to make in systems programming - the designers are doing a good job of making trade-offs that let you retain low-level control while still having memory safety - but those trade-offs are just that: they come at the expense of higher-level features. Higher-level languages don't need to make them because they don't pretend to be systems programming languages.


Yeah, on any given dimension, even safety, you'll find languages that are stronger than Rust. It excels IMO because of the balance it finds between type system, safety, functional vs imperative code, etc. It puts all these together in one package that I feel like I can use for actual work. I don't know of any other solid contenders here, except maybe Swift.

I have to agree that the "Rust all the things!" game is getting old.


When I write in Rust, I have feeling "that's how programming should work". I don't know how to express it in more scientific way. Errors handling, pattern matching, Result type - it's all how it should work. I know, some other languages have similar features, but Rust also has race-conditions protection (very important thing), good package manager out of the box, testing tool (cargo test), very smart compiler and great performance - just all the best.


Lots of good programming languages out there today honestly. Rust is great. I recommend everyone learns themselves a language for each need.

I've got Clojure/Clojurescript, Rust, Python, HTML5 JavaScript, F#/C# and Java.

Clojure/Clojurescript is used for long running processes and web development. So backend/frontend stuff. I also use it for fun experiments, the REPL is great at that.

Rust is used when performance matters, when I want the simplicity of running a native executable with no heavy dependency, and when I need to write low level components.

Python is my go-to scripting language for quick scripts, hacks, and messing around. It also serves my scientific computing needs: data mining, visualization, etc. Also, it's the best language I've found for doing coding interviews or practice. It's so easy to whiteboard Python.

HTML5 JavaScript is used for most web development, for customizing my editor Atom.io, and other things. With Metro apps and other OSes moving towards an HTML5/JavaScript stack, it's also quite good for some user apps.

F# is good for simply knowing an ML language and when I want more functional flair in the .Net world.

C# and Java are used to pay the bills, as they are easiest to find jobs for.

There would be valid alternatives for most of those categories, but I highly recommend everyone invest in knowing one language for each one.


I get the opposite feeling. When I write in Rust, I have the feeling "that's how programming shouldn't work". The syntax is awkward and you are expected to spend weeks to get anything done. Ultimately, it seems to come down to Rust fans who have a sunk cost fallacy - I spent weeks learning this, so they think it is worth it.


When I was learning C back in 1990, it took me a long time to get comfortable with it, and that's after years of programming in assembly language (8-bit, 16-bit, 32-bit). It took me some time to stop writing assembly in C and start writing C in C, if that makes sense.

I haven't switched to Rust yet (I still deal with C/C++ at work on "legacy" code, and I'm having too much fun with Lua for my own stuff) but I don't expect to pick it up "quickly" and I'm sure I'll be trying to write C in Rust for some time. It comes with the territory.


Just because some people have a different opinion than you doesn't mean it is due to fallacious reasoning. Ultimately, it seems to come down to Rust haters who have a sunk cost fallacy - I spent years writing bad code in bad languages, so I don't want to give that up to do things better.


Please be more patient :) Flamewars will definitely not help us. I'm nobody to criticize the way you communicate, but for the sake of the Rust community's reputation, please let's avoid "Rust vs. X" wars.


Please be more patient :) Replying to posts you didn't read will definitely not help us.


A few weeks is not enough time to get it when you don't like the ideas of the language initially. I tried to learn one dynamic language (I will not name it, to avoid the fans' fury) and realized very quickly that it's not my language. The only difference: I never thought about a "sunk cost fallacy" - there are languages I can use and there are others, and this diversity is necessary for healthy evolution.


I spent weeks learning other languages, just as I have Rust, but I still prefer to write code in Rust.


> This Rust fanboyism is turning into the new Node hype: "use Node for everything, Node is web-scale fast because it uses event-loop IO instead of thread-based IO"

If what you are suggesting is true, isn't the biggest problem by far the fanboyism posing as knowledge? Wouldn't complaining about Rust be like living in the age of alchemy, and complaining about someone's particular potion? Isn't the epistemological squishiness of the entire field the biggest problem by far?


No, that's cache invalidation. And naming things.


IMO the biggest plus for Rust is actually Cargo. Building, versioning, and sharing modular code in C/C++ is essentially copy/paste, and compared to that Cargo is light-years ahead.

I actually wish Rust would embrace its systems niche even more, move the stdlib to crates, and make no_std the default mode. Personally, I see no reason to market Rust for web apps or GUI stuff; it can't/won't compete with Rails/Qt there for years to come, if ever.


I couldn't agree more with this. Cargo (and the general desire to think and work hard on ease-of-use) is a huge part of what makes Rust a pleasure. It alone would be enough for me to steer someone with experience in neither to Rust over C++ for lots and lots of use cases.


I haven't looked at rust, and this sentiment is exactly what has kept me away. I watched perl, java, python, ruby, node and c++ (boost) fall into the trap of "we know better than the end-user/developers/sysadmins/os vendor, so let's reinvent dpkg poorly".

Why should cargo be any different? It is solving a problem I don't have (Debian, Ubuntu, OpenBSD, and freaking illumos all have acceptable package management), and creating a massive new problem (there is a whole thread below this one talking about rust dll hell between nightly and stable, and the thread links to other HN articles!). From my perspective all this work is wasted just because some developers somewhere use an OS that doesn't support apt or ports.

Sorry this is so ranty, but I really want to know if anyone has had luck using rust with their native package manager.


TL;DR: I think language-centric package managers do a better job at versioning packages per-project. Here's an anecdote to explain what I mean.

-----

Let's say I want to build a piece of software that depends on some software library written in C at version 1.0.1. It's distributed through my system package manager, so I sudo apt-get install libfoo.

~~ some time later ~~

Now let's say I want to build a different piece of software that also depends on foo, but at version 1.2.4. I notice that libfoo is already installed on my system, but the build fails. After a quick sudo apt-get install --only-upgrade libfoo, this piece of software now builds.

~~ Even later ~~

When I revisit the first project to rebuild it, the build fails, because this project hasn't been updated to use the newer version yet.

I'm fairly inexperienced with system package managers, but this is the wall I always hit. How should I proceed?


I'm arguing that you should just have one package manager, so in my world the only way for this to happen is if both the packages you're installing are tarballs that no one has bothered to port to your system. If the language-specific package manager did not exist, then there would be a better chance the package would exist for your OS already.

Anyway, Debian/Ubuntu has multiple fallbacks for this situation:

a. ppa's

b. parallel versions for libraries that break API compatibility (libfoo-1.0...deb, libfoo-1.2...deb that can coexist).

c. install non-current libfoo to ~/lib, and point one package at it (not really debian-specific)

d. debootstrap (install a chroot as a last resort -- this is better than "versioning packages per-project" from an upgrade / security update point of view, but worse from a usability perspective -- you need to manage chroots / dockers / etc).

I suspect the per-project versioning system is doing b or d under the hood. b is clearly preferable, but hard to get right, so you get stuff like python virtual environments, which do d, and are a reliability nightmare (I have 10 machines. The network is down. All but one of my scripts run on all but one of the machines...)

A long time ago, I decided that I don't have time for either of the following two things:

- libraries that frequently break API compatibility

- application developers that choose to use libraries with unstable APIs that also choose not to keep their stuff up to date.

This has saved me immeasurable time, as long as I stick to languages with strong system package manager support.

Usually, when I hit issues like the one you describe, it is in my own software, so I just break the dependency on libfoo the third time it happens.

When I absolutely have to deal with one package that conflicts with current (== shipped by os vendor), I usually do the ~/lib thing. autotools support something like ./configure --with-foo=~/lib. So does cmake, and every hand-written makefile I've seen.

[edit: whitespace]


> talking about rust dll hell

No, what that thread is talking about is that somebody wrote a library to exercise unstable features in the nightly branch of the Rust compiler, and that inspired somebody else to write a sky-is-falling blogpost claiming that nightly Rust was out of control and presented a dozen incorrect facts in support of that claim, so now we have to bother refuting the idea that nightly Rust is somehow a threat to the language.

As for the package manager criticism, the overlooked point is that OS package managers serve a different audience than language package managers. The former are optimized for end-users, and the latter are optimized for developers. The idea that they can be unified successfully is yet unproven, and making a programming language is already a hard enough task that attempting to solve that problem is just a distraction.


From the thread, I got the impression that it is not trivial to backport packages from the nightly tree to the stable tree--people are talking about when packages will land in stable, but I'd expect that to all be automated by test infra, and too trivial for developers to work around to warrant a forum thread.

Anyway, it sounds like I stepped on a FUD landmine. Sorry.

It sounds like you work in this space. From my perspective, debian successfully unified the developer and end-user centric package manager in the '90s, and it supports many languages, some of which don't seem to have popular language-specific package managers.

What's missing? Is it just cross-platform support? I can't imagine anything I'd want beyond apt-get build-dep and apt-get source.


> Anyway, it sounds like I stepped on a FUD landmine.

That's the problem with FUD, it gets everywhere and takes forever to clean up. :)

> I got the impression that it is not trivial to backport packages from the nightly tree to the stable tree

Let's be clear: stable is a strict subset of nightly. And I mean strict. All stable code runs on nightly, and if it didn't, that would mean that we broke backwards compatibility somehow. And even if you're on the nightly compiler, you have to be very explicit if you want to use unstable features (they're all opt-in).

Furthermore, there's no ironclad reason that any given library must be on nightly, in that boring old every-language-is-Turing-complete way; people use unstable features because they either make their code faster or because they make their APIs nicer to use. You can "backport" them by removing those optimizations or changing your API, and though that seems harsh, note that people tend to clamor for stable solutions to their problems, so if you don't want to do it then somebody else will fork your library and do it and steal your users.

There are strong incentives to being on stable: since stable code works on both nightly and stable releases, stable libraries have strictly larger markets and therefore mindshare/userbase; and since stable code doesn't break, library maintainers have much less work to do. At the same time, the Rust developers actively monitor the community to find the places where relatively large numbers of people are biting the bullet and accepting a nightly lib for speed or ergonomics, and the Rust developers then actively prioritize those unstable features (hence why deriving will be stable in the February release, which will get Serde and Diesel exclusively on stable, which together represent the clear plurality of reasons-to-be-on-nightly in the wild).

> What's missing?

I've already typed enough, but yes, cross-platform support is a colossal reason for developers favoring language-specific package managers. Another is rapid iteration: it's way, way easier to push a new version of a lib to Rubygems.org than it is to upstream it into Debian. Another is recency: if you want to use the most recent version of a given package rather than whatever Debian's got in stock, then you have to throw away a lot of the niceties of the system package manager anyway. But these are all things users don't want; they don't want to be bleeding-edge, they don't want code that hasn't seen any vetting, and they really don't care if the code they're running isn't portable to other operating systems.


> From my perspective, debian successfully unified the developer and end-user centric package manager in the '90s

I think a more accurate assessment would be that both Red Hat and Debian extended their package support through repositories to enough packages that developers often opt for the easy solution and use distribution packages instead of language-package-manager-provided ones, because it's easy and there are some additional benefits if you are mainly targeting the same platform (and to some degree, distribution, if that applies) that you are developing on.

Unfortunately, you then have to deal with the fact that some modules or libraries invariably get used by parts of the distribution itself, making their upgrade problematic (APIs change, behavior changes, etc.). This is especially painful when using or targeting a platform or distribution that provides long-term support, where you could conceivably have to deal with 5+ year old libraries and modules that are in use. This sometimes necessitates multiple versions of packages for a module or library, but that's a pain for package maintainers, so they tend to only do that for very popular items.

For a real, concrete example of how bad this can get, consider Perl. Perl 5.10 was included in RHEL/CentOS 5, released in early 2007. CentOS 5 doesn't go end-of-life until March 2017 (10 years, and that's prior to extended support). Perl is used by some distribution tools, so upgrading it for the system in general is problematic and needs to be handled specially if all provided packages are expected to work (a lot of things include small Perl scripts, since just about every distro includes Perl). This creates a situation where new Perl language features can't be used on these systems, because the older Perl doesn't support them. That means module authors don't use the new features if they hope to have their module usable on these production systems. Authoring modules is a pain because you have to program as if your language hasn't changed in the last decade if you want to actually reach all your users. Some subset of module authors decide they don't care; they'll just modernize and ignore those older systems. The package maintainers notice that newer versions of these modules don't work on the older systems, so core package refreshes (and third-party repositories that package the modules) don't include the updates. Possibly not the security fixes either, if it's a third-party repository and they don't have the resources to backport a fix. If the module you need isn't super popular, you might be SOL with a prepackaged solution.

You know the solution enterprise clients take for this? They either create their own local package repo, package their own modules, and add that to their system package manager, or they deploy every application with all dependencies included so it's guaranteed to be self-sufficient. The former makes rolling out changes and system management easier, but the latter provides a more stable application and developer experience. Neither is perfect.

Being bundled with the system is good for exposure, but can be fairly detrimental for trying to keep your user base up to date. It's much less of a problem for a compiled language, but it still shows up, to a lesser degree, in library API changes.

Which is all just a really long-winded way of saying the problem was never really solved, and definitely not in the '90s. What happened is that the problem was largely reduced by the increasing irrelevancy of Perl (which, I believe, was greatly accelerated by this). Besides Python, none of the other dynamic languages (which of course are more susceptible to this) have ever reached the ubiquity Perl had in core distributions. Python learned somewhat from Perl in this regard (while suffering from it at the same time), but it also has its own situation (2->3) which largely overshadows this, so it's mostly unnoticed.

I'm of the opinion that the problem can't really be solved without very close interaction between the project and the distribution, such as .Net and Microsoft. But that comes to the detriment of other distributions, and still isn't the easiest to pull off. In the end, we'll always have a pull between what's easiest for the sysadmins/user and what's easiest for the "I want to deploy this thing elsewhere" developers.


Cargo isn't competing with, nor replacing, distribution package managers. Cargo is a build tool, not a package manager. You're free to package Rust software the same way you do non-Rust software for specific distributions. They are entirely different, unrelated things with no overlap. Cargo solves a lot of problems that we've been facing for a long time. We have the Internet now, so let's use it to speed up development.


apt & ports don't follow you into other OSes. Language-specific package managers do, without requiring entanglement with the OS or admin permissions. All you need is sockets and a filesystem.

I think the language-specific ones will win for developer-oriented library management for platform-agnostic language environments.


Apt and/or ports follow you to every OS kernel I can think of (Linux, MacOS, Windows, *BSD, the Solarises), though the packaging doesn't always (for instance, the OpenBSD guys have a high bar, but that's a feature).

My theory is that each language community thinks it will save them time to have one package manager to rule them all instead of just packaging everything up for 4-5 different targets.

The bad thing about this is that it transfers the burden of dealing with yet another package manager to the (hopefully) tens or hundreds of thousands of developers that consume the libraries, so now we've wasted developer-centuries reading docs and learning the new package manager.

Next, the whole platform agnostic thing falls apart the second someone tries to write a GUI or interface with low-level OS APIs (like async I/O), and the package manager ends up growing a bunch of OS-specific warts/bugs so you end up losing on both ends.

Finally, most package manager developers don't seem to realize they need to handle dependencies and anti-dependencies (which leads you to NP-complete land fast), or that they're building mission-critical infrastructure that needs to have bulletproof security. This gets back to that "reinvent dpkg poorly" comment I made above.

In my own work I do my best to minimize dependencies. When that doesn't work out, I just pick a target LTS release of an OS, and either use that on bare metal or in a VM.

Also, I wait for languages to be baked enough to have reasonable OS-level package manager support. (I'm typing this on a devuan box, so maybe I'm an outlier.)


Funny, I've found Cargo to be one of the major negatives of Rust.

Is there anyone out there saying "builds only when connected to the internet so it can blindly download unauthenticated software ... SIGN ME UP!"


> In my opinion Rust is about doing things right.

On the other hand, there is a quite dark cloud on the horizon with the stable vs. nightly split. You can't run infrastructure on nightly builds or add nightly builds to distributions.


There are very few libraries that are nightly-only in Rust. Clippy is a big one, but clippy is a tool, not a library, so it's no big deal (we're working on making it not require a nightly compiler).

Rocket is a recent one. I talked with the owner of Rocket and one of their goals was to help push the boundaries of Rust by playing with the nightly features. With that sort of meta-goal using nightly is sort of a prerequisite. Meh.

You can use almost all of the code generation libs on stable via a build script. Tiny bit more annoying, but if it's a dependency nobody cares. A common pattern is to use nightly for local development (so you get clippy and nicer autocompletion) and make the library still work on stable via syntex so when used as a dependency it just works.
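For reference, the build-script pattern looked roughly like this at the time; this is a sketch based on the serde_codegen/syntex approach, and the crate layout and file names here are illustrative assumptions:

    // build.rs
    extern crate serde_codegen;

    use std::env;
    use std::path::Path;

    fn main() {
        let out_dir = env::var_os("OUT_DIR").unwrap();

        // Expand the #[derive(Serialize, Deserialize)] attributes in
        // src/types.rs.in into plain Rust that stable rustc accepts.
        let src = Path::new("src/types.rs.in");
        let dst = Path::new(&out_dir).join("types.rs");
        serde_codegen::expand(&src, &dst).unwrap();
    }

The crate then pulls in the generated code with include!(concat!(env!("OUT_DIR"), "/types.rs")).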

The most used part of the code generation stuff will stabilize in 1.15 so it's mostly not even a problem.


I'm sorry your comment has gotten the response it has.

The looming dark cloud of stable vs. nightly only looks like a dark cloud to those outside the Rust community.

The article that made its way up Hacker News a while ago (https://news.ycombinator.com/item?id=13251729) got pretty much no traction whatsoever in the Rust community.


I have found the split has only gotten better with time. It used to be that most package maintainers assumed you were using nightly/beta. The last holdouts I see are Diesel and Serde, which have instructions for using nightly. Even then, they realize no one wants to ship code on nightly, so they provide directions for building using stable Rust. Once the procedural macros stuff is stabilized, they can stop.

I have been extremely pleased with the rust community and the rust maintainers.

And no I was not paid by them to say this... :)


From my impression, they'll be working with the stable compiler in a few weeks.


That article was full of inaccuracies and hyperbole. Most of the points there are plain wrong.

http://xion.io/post/programming/rust-nightly-vs-stable.html#...


I think the Rust community didn't take the discussion very far, because it felt like the comparison to Python 2 vs 3 was out of proportion. The single most important difference is that stable Rust code is always compatible with nightly, and comparisons that gloss over that difference feel frustrating. (Other folks have raised more detailed objections too, like the macro features that are about to land in stable.)


I don't see a split here. I know lots of cool ideas have been posted to Hacker News lately that showcase what will be possible in future versions of Rust. But I seriously hope people keep these libraries as showcases, not production-quality tools. I'd say this nightly/stable split is a bit out of proportion.

In 2017 we'll get Tokio and lots of Tokio-ready libraries. Some of them already work and compile with stable Rust. And maybe at the end of the year we can take a proper look at what we can do with Rocket, or Diesel...


Tokio runs entirely on stable, though 1.0 won't happen until some language changes land.

Diesel does work on stable today, but its nightly features will be on stable in five weeks with 1.15.


> Tokio runs entirely on stable, though 1.0 won't happen until some language changes land.

Are you talking about impl Trait or some other language changes?


impl Trait is what I'm thinking of, yeah.
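For those following along, a minimal sketch of what `impl Trait` buys you; at the time of this thread it required a nightly compiler and an explicit opt-in (the feature-gate name below is the one from this era, to the best of my recollection):

    // Nightly-only at the time: unstable features must be opted into explicitly.
    #![feature(conservative_impl_trait)]

    // The caller sees "some Iterator of u64" without naming the concrete type.
    fn evens() -> impl Iterator<Item = u64> {
        (0..).filter(|n| n % 2 == 0)
    }

    fn main() {
        let first: Vec<u64> = evens().take(3).collect();
        println!("{:?}", first); // [0, 2, 4]
    }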


There's little evidence for a "dark cloud" on the horizon because of Rust nightlies, besides people complaining of dark clouds. I'd suggest providing evidence of problems with nightlies and filing bugs about things that need to be stabilized.


As I noted in https://news.ycombinator.com/item?id=13277477, there are really only two major Rust libraries that are easier to use on nightly Rust (serde and diesel), and both can already be used on stable using a `build.rs` file (which takes about 10 minutes to set up; see the docs for the projects). You'll be able to get rid of `build.rs` in about 5 weeks when Macros 1.1 lands.

That said, if you want to play with nightly Rust, it's pretty trivial. Rustup https://www.rustup.rs/ makes it easy to install multiple Rust toolchains and switch between them, in a fashion similar to rvm and rbenv.


How so? Most big projects have both stable releases and nightly builds that contain unfinished features. Why is it particularly troublesome for Rust?


I am not sure it's really a split when you can build stable on nightly. Nightly is great for experimentation.


Although others don't see the 'dark cloud', as a long time user of Rust it has definitely put me off pushing for it too hard at work. That said, Diesel and Serde are now the only big libs that require nightly, but this is due to them relying on syntax extensions which are going to be stabilized in the next release. So if I were to start a new microservice using those libs it would be ready to go on stable in a few weeks time, which is super exciting. The only other thing would be async IO. Tokio is making big strides in that direction - not sure what the time-frame is on being able to use it on a server nicely though.


> add nightly builds to distributions

rustup run nightly cargo build --release

You were saying? In any case, there's no such thing as a stable vs. nightly split. There are pretty much zero libraries that require nightly, and the few that have a nightly option for optional features will no longer require it after the macros update lands.



