Rust 1.24 (rust-lang.org)
503 points by steveklabnik on Feb 15, 2018 | 208 comments



Incremental compilation! And it's only going to get incremental-er from here, as the compiler learns how to cache more and more sorts of interim artifacts to avoid redundant work. Though I think the wording in the OP is a bit off:

> Historically, the compiler has compiled your entire project, no matter how little you’ve changed the code.

This isn't quite correct. When you change the code in a given library, it has historically recompiled that entire library, and all libraries that depend on that library. But the compiler has never recompiled any libraries that were deeper in the dependency chain: if you have crates foo and bar, and bar imports foo, and you change the code in bar, foo was not recompiled.

Looking towards the future, I'm most excited about the prospect of using SIMD on stable Rust, which looks to be coming this year: https://github.com/rust-lang/rfcs/pull/2325

EDIT: Also worth mentioning is that Rayon (an official Rust library providing easy data parallelism) just reached 1.0 today, which is an important milestone in Rust's march towards ecosystem stability: https://github.com/rayon-rs/rayon


> Incremental compilation!

Is there some algorithm in the Rust compiler with inherently bad big-O behavior? There were compilers in the 90s which ran a million lines per second (on much slower computers). Incremental compilation feels like working around the problem rather than solving it, and I have to imagine the complexity and maintenance cost is much worse.

I'm sure some of the LLVM optimization passes are expensive, but those are used in clang++ too, and it's not terribly slow.


> There were compilers in the 90s which ran a million lines per second (on much slower computers).

Those compilers performed nowhere near the level of optimization that modern compilers do.

> I'm sure some of the LLVM optimization passes are expensive, but those are used in clang++ too, and it's not terribly slow.

clang++ is sure slow if it has to rebuild an entire codebase from scratch.

The main reason that C++ compilers get away with not having partial compilation is that it forces programmers to do the division into code units manually through separate .h and .cpp files. Rust doesn't have this, except at the crate level, so it needs partial compilation instead. It turns out that partial compilation is better in the first place, because compilation units tend to be fairly coarse-grained: .cpp files frequently grow very large.


> Those compilers performed nowhere near the level of optimization that modern compilers do.

I think this gets at Wirth's law a bit. Yes, compilers perform lots of optimizations now, but in Rust's case it is at least partly because they need to in order to claw back the performance that Rust gives up with its abstractions (as seen in debug builds).

I do much prefer Rust's safe abstractions to writing more direct code and hoping I get it right. I hope (and do believe) that the MIR effort will somewhat rein in the burden shunted over to LLVM, as it is super painful for some of us.


> I think this gets at Wirth's law a bit.

I think Proebsting's law is also relevant. Compilers did an impressive job of optimization in the 90s, and while yes there have been advancements in the last 18 years, it's foolish to over-estimate the progress.

Also, considering how much hardware changes have improved performance (speculative execution, register renaming, etc...), it wouldn't surprise me if many compiler optimizations have become less important with time. For instance, peephole instruction scheduling has to matter less than it once did.


Have you guys spent any time with the incremental compilation and linking of Visual C++ or C++ Builder?

It really does make a difference versus traditional UNIX compilers.


clang++ is slow, but it certainly feels faster compared to rustc (and I typically use the write-everything-in-one-include C++ style). I'm curious: has there been any outreach to the clang community about rustc performance issues?


Not going to budge an inch, eh? It must be as good as it will ever get.


Is it they who didn't "budge an inch", or you, who despite a reasoned explanation insist on the original accusation?

Not to mention the underlying insinuation that the Rust compiler developers are dumb for not being able to reach the 90s compilers' level of speed -- since you don't seem to accept that this is a byproduct of the advanced optimizations and safety guarantees they provide.


My criticisms are meant to point out areas for actual improvement.

Try to be scientific. If you've got a conjecture, test it and see if it holds water. One excuse was that C++ is only faster because people build C++ projects with separate compilation units. Well, you can compile a project in C++ with a single compilation unit, and it's not horribly slow. Now you need a new conjecture.

The type system is comparable to Standard ML from the 70s. The lifetime analysis is a solid improvement over mainstream languages, but I have a tough time believing it needs to be an order of magnitude slower than const correctness. The optimizations are on par with C++ and Fortran (mostly because they're leveraging the guts from an existing C++ compiler). SML, C++, and Fortran do not compile as slowly as Rust. There's probably another reason.

I don't think the Rust compiler developers are dumb, and I never said that. I think several of them are brilliant but young (and occasionally very arrogant).

Rust is so close to being a great replacement for C++. I read every one of these threads about new releases hoping they'll eventually fix the problems. Compilation speed and memory use are lesser issues, but they're still important, and things like incremental compilation dodge the issues instead of fixing them.


Borrow checking is a lot more involved than const correctness. It's not even in the same league. Const correctness just adds a few more types, whereas borrow checking involves generating sets of constraints and solving them with a constraint solver, something C++ doesn't really have to do at all.

The type system of Rust goes significantly beyond that of Standard ML, because of traits/typeclasses among many other features. Also, Standard ML is usually not implemented with monomorphization, which affects compile time.

> Compilation speed and memory use are lesser issues, but they're still important, and things like incremental compilation dodge the issues instead of fixing them.

Incremental compilation is a way of addressing the compile time and memory usage issues. It's just not the way you seem to want them to be fixed. Nobody did incremental compilation for fun; it was done because without it we didn't believe we could reach our compiler performance goals.

Again, you're asking for a magic bullet, and I'm saying it's not likely that there is going to be one. If one existed, it probably would have been found already. There's lots of hard work that has been done, and lots of hard work ahead.

If I had to guess what the most important issue is, it's that idiomatic Rust generates a lot more LLVM IR than idiomatic C++ does, because it leans on abstractions for safety and developer ergonomics. Think about how many more closures Rust code uses, what for loops expand to, what pointer arithmetic compiles to (ptr::offset), etc...
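
To make the for-loop point concrete, here's roughly the desugaring involved (a sketch, not the exact compiler output):

    fn main() {
        let v = vec![1, 2, 3];
        // Roughly what `for x in v { println!("{}", x) }` expands to
        // before LLVM ever sees it:
        let mut iter = IntoIterator::into_iter(v);
        loop {
            match iter.next() {
                Some(x) => {
                    println!("{}", x); // loop body
                }
                None => break,
            }
        }
    }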


I'm not going to dive into your points about constraint solvers, type classes, or "monomorphization". Unless those things take more than a few percent of the compile time, they're just more red herrings.

> Incremental compilation is a way of addressing the compile time and memory usage issues. It's just not the way you seem to want them to be fixed. Nobody did incremental compilation for fun.

I'm well aware that my opinion doesn't matter, but I doubt incremental compilation will make any real strides towards making things as good as they could be. It sounds like the real problem is here:

> If I had to guess what the most important issue is, it's that idiomatic Rust generates a lot more LLVM IR than idiomatic C++ does

I suspect every single feature in idiomatic Rust has a one-to-one translation to some bit of C++ that isn't too horrible. You can do lambdas, iterators, generators, or whatever else it takes to do Rust style for loops and closures in C++, and it won't choke the compiler. So while I don't doubt rustc is generating a lot more IR, I doubt it needs to.

Maybe the MIR optimization pass will move in that direction.


I don't know what this means.


It means things don't usually get better until you admit they're not as good as they can be.


rustc could definitely be faster.

I'm simply pushing back on the idea that there's some obvious low-hanging fruit that clang did that rustc didn't. Compiler performance has more or less been the #1 goal for well over a year now. The Rust compiler team (which hasn't included me for years, by the way) is fantastic. We'd all like for there to be some magic bullet that makes the compiler faster, but at this point I don't believe it exists. It's hard work from here on out.


> I'm simply pushing back on the idea that there's some obvious low-hanging fruit that clang did that rustc didn't. [...] magic bullet

I didn't suggest low hanging fruit or a magic bullet. If anything, incremental compilation is being promoted as that kind of thing, and I just wanted to call it into question.


Nobody thinks incremental compilation is a magic bullet! The article specifically says "This is still not the end story for compiler performance generally, nor incremental compilation specifically."


I hope you see the irony in accusing them of that in a post announcing that the compiler just got better.


It's not ironic. I see this kind of thing all the time. Rather than clean up and speed up an implementation, engineers will make it multi-threaded or send it off to a GPU, generally achieving limited improvements and greater complexity. In this case, rather than make the compiler faster, they hack it up with incremental compilation. It's still slow, but they apply the slow code to less data and call it a win.


Nobody who reads the rustc commit history would agree that this is what is going on. The number of straightforward changes that are constantly made in the service of small performance improvements far outweighs the work on incremental compilation.


I’m pretty sure no one in their right mind claims that rustc is “as good as can be.”

It’s called incremental progress.


The Rust compiler provides strong guarantees that no other compiler provides, like memory safety. This has a performance impact.


The memory safety checking (or more generally, the type checking) isn't what makes the compiler slow. That part is pretty fast because it's intentionally modular.

It's the cross-module optimizations that make the compiler slow. Those optimizations are in service of C++-inspired ideal of being able to use layers of abstraction to make your code cleaner, but having the compiler flatten them down so your program still runs fast.


It's not a big-O thing. The optimization passes aren't necessarily huge either; right now, 50% of the time is in LLVM compiling IR -> machine code. The static checks, optimization passes, and everything else are minuscule overall. You can see this with -Z time-passes.

We have some hunches, but nothing super conclusive yet; basically, right now it's all about how much IR we generate.

Don't forget the differences in compilation model; C/C++'s compilation units are much smaller, so incremental isn't as big of a deal.
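
For reference, inspecting this on your own crate looks something like this (`-Z` flags require a nightly toolchain):

    $ cargo +nightly rustc -- -Z time-passes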


Is there a way to generate IR which is easier on LLVM? The Jai compiler compiles 60kloc with an LLVM backend in seconds. Could performance like this be seen in rustc?


Possibly; we'll see. MIR optimization passes may help.

It's hard to say what exactly Jai does since it's not public yet.


Apparently that's:

https://inductive.no/jai/


> Don't forget the differences in compilation model; C/C++'s compilation units are much smaller

Not everyone compiles C++ the same way. I prefer header-only libraries and a single compilation unit. Counting lines of code with "gcc -E" to get the headers, I just compiled 20,000 lines of C++ from scratch in 0.6 seconds with gcc and 0.4 seconds with clang. I don't have a comparable project in Rust.


How much code did the preprocessor strip out?


No idea, maybe a lot from the system headers. However, it was 20,000 lines of code that survived the preprocessor, 11,000 of which were inline or template from the project itself.


- A large fraction of compiler time is spent on code generation; how many of those lines did the compiler actually generate code for? (i.e. not unused functions or uninstantiated templates)

(And in the case of uninstantiated templates, a C++ compiler doesn't even need to type check them…)

- Did you have optimizations enabled?


> how many of those lines did the compiler actually generate code for?

Not many. How many lines would a comparable Rust program have to instantiate? (should be the same)

All I was trying to do was make an apples to apples comparison by discrediting the implication that the clang++ and g++ compilers are only faster because people use separate compilation units.

I think it's fair to compare Rust and C++ this way, and if Rust is slower to compile, then it's fair to ask why.

> Did you have optimizations enabled?

Yes, -O3. Also -Wall -Wextra and -fwrapv if you're interested.


Those languages were Wirthian, and were designed for fast compilation.


Oberon certainly compiles quickly, and I believe Delphi (Pascal-ish) was fast too, but I was thinking of Microsoft's Java compiler before they got spanked.


Java without generics used to compile pretty fast.

When you add generics, lifetimes, and type inference, the amount of work the compiler does grows significantly. It allows the compiler to check much more interesting invariants, leading to the "if it compiles, it runs correctly" effect.


Basic generics in Rust don't have that much overhead; it sounds like you're conflating generics with templating. Generics aren't Turing complete, unlike C++'s templates.

Other languages with powerful generics and type inference like modern C# clearly manage just fine, too.


Yes, but things like `fn do_something<T>(t: T) -> impl Future where T: Serializable` definitely take a while. And C# has the added benefit of being JIT'ed: it doesn't have to flatten all the generic code during compilation; it can optimise later.


C# could always be AOT compiled to dynamically linked native code via NGEN.

It is also the only deployment option on iDevices and Windows Store since Windows 8.

Also if you prefer other native examples, Ada, Eiffel, Sather, D come to mind.


> Generics aren’t Turing complete, unlike C++’s templates.

They are actually, but only anecdotally (like PowerPoint is also Turing complete).


Below, steveklabnik claims these things are minuscule. Are you really sure you know what is actually slow?


In my experience (with Rust programs that take 20mins+) it is all about the monomorphisation of generics. The great majority of the time is spent in LLVM, because Rust hands over many GB of LLVM IR for optimization, having done not very much (edit: in the way of code transformation) other than direct instantiation of the generic parameters.

Rust does have the opportunity to do more, and I suspect they are hoping to do more with MIR, but at the moment they rely on LLVM to optimize out all of the zero-cost abstractions that they introduce.
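
As a tiny illustration of what that monomorphization means (my example, not the parent's code): one generic function becomes a separate copy, and separate LLVM IR, per concrete type it's used with.

    // One generic function...
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
        let mut max = items[0];
        for &item in items {
            if item > max {
                max = item;
            }
        }
        max
    }

    fn main() {
        // ...instantiated twice: rustc emits both largest::<i32>
        // and largest::<f64>, and LLVM optimizes each separately.
        println!("{}", largest(&[1, 5, 3]));
        println!("{}", largest(&[1.0, 0.5]));
    }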


If you're referring to the comment starting with "It's not a big-O thing," I think that comment says the opposite. "Generics, lifetimes, and type inference" aren't "static checks and optimization passes" (within rustc)—he says the bulk of the work is LLVM dealing with the quantity of IR rustc generates, which is exactly what you'd expect from having heavy source-level abstractions like generics and type-heavy patterns like libstd's approach to iterators that all need to be compiled out.


Edit: so, in the bit below about MIR vs non-MIR borrow checking, I went and asked niko. And he told me that -Z time-passes is pretty much misleading now. Gah. I'm leaving the comment below because I put a lot of work into it, but apparently it may not be correct.

https://github.com/rust-lang-nursery/rust-forge/blob/master/... is how you're supposed to do this these days. It's a ton of work that I don't have time to do right now, but if you look at the example HTML output linked there, for a hello world, my broader point still stands, which is "translation 78.1%". That is, taking the MIR and turning it into machine code, first through LLVM IR, is the vast, vast majority of the compile time. Not the safety checks, not some un-optimized algorithm in them.

-------------------------------------

To make this concrete, here's a project I'm working on, with -Z time-passes: https://gist.github.com/steveklabnik/c2646b209debf1f66355343...

Some annotated bits of the larger passes:

  time: 0.646; rss: 91MB expansion
This is for expanding out macros, over half a second of time.

  time: 0.338; rss: 169MB coherence checking
I'm sorta surprised this takes even a third of a second; it's never come up when I've looked at these results before. Coherence is the "you can only implement a trait for a type if you've defined at least one of them in your crate" rule.

  time: 0.596; rss: 205MB item-bodies checking
It's early so maybe I'm slightly wrong, but I believe this pass includes the type inference stuff, because that's only done inside of function bodies. 6/10ths of a second isn't nothing, but...

  time: 0.588; rss: 219MB       borrow checking
  time: 2.566; rss: 220MB MIR borrow checking
This is actually something I'm quite surprised by. Right now, we borrow-check twice, since we're still working on the MIR-based borrow checker. I'm not sure that MIR borrow checking is supposed to be this much slower; I've pinged the compiler devs about it. Regardless, before this was introduced, we'd have shaved 2.5 seconds off of the compile time, and after the old one is removed, even if the borrowcheck doesn't get faster, we'd be shaving off half a second.

Then we get into a ton of LLVM passes. As you can see, most of them take basically no time. Ones that stick out are mostly codegen passes, which is what I was referring to with my comments above. But then we have:

  time: 5.625; rss: 413MB translate to LLVM IR
Even just turning MIR into LLVM IR takes five seconds. This completely dominates the half and third second times from before. In all:

  time: 7.418; rss: 415MB LLVM passes
So, all the other passes take two seconds out of seven, combined. Ultimately, none of this is Rust's checks as a language, this is burning away all of the abstractions into lean, mean code. Finally,

  time: 9.437; rss: 414MB translation
this is the "turn LLVM IR into binary" step. This also dominates total execution time.

Note that all of this is for an initial, clean build, so a lot of that stuff is setting up incremental builds. I deleted src, git checked it out, and then built again, and got this: https://gist.github.com/steveklabnik/1ed07751c563810b515db3f... way, way, way less work, and a faster build time overall: just five seconds.

So, anyway, yeah.


> Ultimately, none of this is Rust's checks as a language, this is burning away all of the abstractions into lean, mean code.

Right - my understanding is that Rust generates very large MIR and also LLVM IR because the "zero-cost abstractions" aren't yet zero-cost, and compiling them into zero-cost machine code is inherently expensive.

So it's not the safety checks per se, but it's things like "there are three wrapper objects here with various generic parameters where C/C++ would have started with just a pointer from the beginning". Those three wrapper objects couldn't have existed without the safety checks, and rustc very quickly identified that everything is in order, but then it monomorphized the generics into lots of tiny functions and LLVM has to optimize all of that into machine code that resembles what the (unsafe) pointer approach would have generated. (Which explains why it's not time taken in rustc proper, but also why we don't see LLVM being equally slow when compiling C/C++.)

Is that interpretation correct?


Yes.

I might split out C++ from C here though; template-heavy C++ is known to be pretty slow to compile too, for what's basically similar reasons: you end up generating a bunch of code that needs to be crunched down.

There's also some possibly deeper reasons around heavy use of generics: if your code touches something that is generic, and you change the type, it's going to need to recompile that function too. And then anything generic it touches. Etc. Trait objects can help compile times for this reason, but they aren't considered idiomatic, so most Rust code tends to stress the compiler in this way.
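
A small sketch of the generics-vs-trait-objects tradeoff being described (illustrative, in pre-2018-edition `&Trait` syntax):

    use std::fmt::Display;

    // Generic: a fresh copy is monomorphized for every T callers use,
    // and changing a T's definition forces recompiling this function too.
    fn print_generic<T: Display>(value: T) {
        println!("{}", value);
    }

    // Trait object: compiled exactly once; callers dispatch through a
    // vtable at runtime, which can help compile times.
    fn print_dyn(value: &Display) {
        println!("{}", value);
    }

    fn main() {
        print_generic(42);
        print_dyn(&42);
    }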


Most Java compilers are quick because they compile down bytecode, leaving virtually all optimization to the JIT.

AOT Java compilers certainly exist, but I don't recall Microsoft having one -- I fully admit my recollection of their Java dev environment is quite foggy at this point, though.

Regardless, the Java AOT compilers I've tried didn't seem particularly fast in relation to their level of optimization.


Have there been any thoughts around making the compiler something akin to a stateful server with an embedded datastore for the caching rather than using the filesystem?


Not directly, but there's been some work on changing the compiler's interfaces to ones that would work with that model, in my understanding. It's not out of the realm of possibility.


What would be the benefit of this approach over using the filesystem?


Any time you put data onto the file system there is overhead (e.g. Memcache, Redis, most databases).

Simple things like parsing the bytes into the binary representation go away. You can have more granularity of the cache because you don't need to worry about creating too many files. Not only from a performance perspective but also from a code perspective.

More nuanced things like cache locality can be improved with an embedded data store rather than creating files and hoping the OS understands they will always be used together.

More advanced things like retaining inverted indexes becomes possible and search performance is dramatically improved.


The shortest path to something like that (assuming it's a good idea) would probably not be writing an in-memory incremental compile state server, but storing intermediate artifacts as very cheap-to-serialize files (FlatBuffers, or just mmap and a cast), and then storing the files on /dev/shm or some other memory filesystem. That would give you most of the benefits you discussed (unless you clear out the cache/restart the machine) without requiring a huge rewrite of how state is stored.

Most of the assumptions I'm making here are unverified, though, so take it with a large grain of salt.


I'd imagine it would be a benefit where, let's say, every project has `byteorder` in its dependency tree somewhere. If you had a local, stateful, build server that processes every build on your machine, you could re-use the object file from the last time you compiled that library regardless of if it was for this project or not. That's probably not a big gain for just `byteorder`, but if you multiply that by many libs or take into account some of the much bigger ones it could make quite a difference in build time. Even "first" build time for new projects you're building.

IIRC, some group at Mozilla already has something that works like ccache (or rewrote ccache to work) with compiling Rust. (Which I think is what steveklabnik is alluding to in his neighboring comment.) Although I guess that might only work within a single code base, not sure about that.


You're thinking of https://github.com/mozilla/sccache which is not quite the same thing.


Man, I bet that would be blazing fast!


I think this is something that C# and VB.NET do.


It would recompile foo if you changed something in bar that caused a generic function in foo to monomorphize differently, AFAIK.


I'm not sure what this is referring to. Generic types don't get monomorphized when exported in libraries, they get compiled to intermediate-language metadata that is then used by downstream consumers to monomorphize as needed. Furthermore I'm pretty sure Rust doesn't allow cyclic dependencies between crates, so if foo somehow depended on bar then bar couldn't depend on foo.


Good to see rustfmt arrive.

Sad that it is configurable, though. The biggest benefit of its predecessors such as gofmt is that they are not configurable, leading to a much more uniform formatting style and avoiding endless discussions about whitespace layout.


I don't agree. There is a default configuration, and that configuration is very good (I've been following the process of nailing it down, and IMO it's been done very tastefully), and is the formatting used by official Rust codebases, lending it the weight of authority. I have no reason to configure it to do anything differently. But I also have no reason to forbid anyone else from formatting code as they please. What's the possible harm? If you're on a team, and you want code uniformity, you require the default configuration. If you're worried about pulling some random code and having it use {tabs|spaces} instead of {spaces|tabs} (which, btw, is, without hyperbole, the dumbest argument in the entire human history of programming) then you set your editor to run rustfmt on files before opening them.

The gofmt argument isn't the final word here. There's plenty of Go users who don't like gofmt's style, so they just don't use gofmt. It's easy to imagine other users who don't like gofmt's style, and don't like being pressured to use it, so they just don't use Go at all. How is that any better than just having a configurable tool with sane defaults? It's one thing for a tool to be opinionated; it's another thing entirely to be dictatorial and stubbornly inflexible.


>But I also have no reason to forbid anyone else from formatting code as they please. What's the possible harm?

Obviously the proliferation of code formatting styles. It's not as if we didn't have "official styles" and formatting tools for other languages before.

The greatness of gofmt, and what people love it for, is how it killed all other options.


Not sure if it was actually that; many don't even seem aware of indent, whereas I used to work in places that executed it as a CVS pre-commit hook.


"And guess what?" Richard angrily replies, "That's never going to happen now. Because there's no way I'm going to be with someone who uses spaces over tabs."

Man, there's something cathartic in seeing your industry and self-identified group's ridiculousness aired for all the world to see.


If rustfmt were not configurable, it would get a lot less use. Rust has been around for quite some time now, and there's a lot of code out there. We can't force anybody to run rustfmt, and if they don't like the results, they won't use it. It's not a choice between different code styles and a single code style; it's a choice between different code styles with a tool to keep things tidy and different code styles with a tool nobody uses. Things were different for Go for a variety of historical reasons, one important reason being that gofmt existed early on.

To take a concrete example, adopting gofmt's choice to not have a line length limit would have been a nonstarter, because lots of code out there uses long expressions manually wrapped to be readable. A lot of the work in rustfmt has gone into playing nicely with community conventions like these.


Agreed - I am an example. I had vim format my rust code on save, but there was one single bit of formatting which drove me nuts, so I turned the plugin off.


gofmt's choice of not breaking lines is due to the fact that newlines in Go mean an implicit semicolon unless there is a comma or another continuation.

To break lines, the formatter would have to refactor code, which is not desirable.


That would be an enormous problem for us. We need rustfmt to be configurable because it lets us make MISRA-like style guidelines automatically enforceable for our codebase; without that, Rust would likely be summarily rejected here, and we'd be forced to either enforce the guidelines manually or reinvent most of rustfmt ourselves.


Yeah, I'm not psyched about it either, but that ship has sailed.

The defaults are what's considered the "official" style; we expect the vast majority of people to use it. Defaults are powerful.


The only reason I don't use gofmt is because it is unconfigurable. And some of its defaults really stink. Kudos to Rust for getting this right.


I often have to zoom into code a lot more than others. This means that I will often want to format my code differently. I change the settings and I can use defaults before committing the code.

Programming causes me a lot of eye strain as it is, a configurable syntax formatter really does help.

I never see anyone discuss syntax the way you're describing. Not at work, not online. Actually the only place I see it is when discussing what the default is, so I think having a default is far more likely to cause those discussions as people will have more reason to complain.


I've never once seen or experienced any discussion about whitespace layout in Rust. OTOH, I've had several discussions about whitespace layout in Go in less than a year of programming in it due to gofmt not enforcing a maximum line length. Previous to writing Go, I worked on a C++ project that used clang-format (with a custom configuration), and we never had to have any formatting discussions because it also handled maximum line lengths.


Yeah, I guess that's the Rust way. The more options/configurations the better! Simplicity is not on the top list for sure.


We have a strong culture of convention over configuration; these two things are not inherently at odds.

To get the default formatting with rustfmt, you just run it. You can configure it via some file but I've never needed to. I don't even know what the options are.


OK, so what happens if I contribute to an open source project with my default fmt configuration and that project doesn't use the default configuration? Will people yell at me to "fix" the formatting?


That'd depend on the project, but even then, you'd just run `cargo fmt` and be done with it. That project would have a rustfmt.toml in the repo, so it would just make the changes for you.
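
For instance, a project's rustfmt.toml might look like this (the option names are real rustfmt options; the values are just an example):

    # rustfmt.toml at the repo root
    max_width = 100
    hard_tabs = false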


Using default == Simplicity. You don't have to use the options.


tokio-minihttp is #1 in the plaintext benchmark of the TechEmpower web frameworks benchmarks:

https://www.techempower.com/benchmarks/#section=data-r15&hw=...


Ugh. Every time I see tokio or futures-rs mentioned, a part of me dies.

The syntax and ergonomics of Rust futures are insanely painful at the moment and slow down development 100x in some cases (not exaggerating), due to the current lack of async/await combined with strict typing requirements, coupled with some very unreasonable design decisions (namely, each future generated as the result of a computation on a previous future has a different type, even if they resolve to the same types upon evaluation of the future -- basically abusing the type system to store the futures chain). Combined with the very strongly typed nature of Rust, even with experimental language features like `impl Future<xxx>`, the code becomes impossibly hard to reason about.

For example, the result of future.or returning a future bool is not the same type as the result of future.and similarly returning a future bool; and a two-level future evaluating to a future bool is not the same type as a one-level future also evaluating to a future bool.

async/await cannot come soon enough.
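
To illustrate the type explosion (a sketch against the futures 0.1 API of the time):

    extern crate futures; // assumes futures = "0.1"

    use futures::Future;
    use futures::future;

    fn main() {
        // Two futures that both resolve to a bool...
        let a = future::ok::<bool, ()>(true).map(|b| !b);
        let b = future::ok::<bool, ()>(true).and_then(|b| future::ok(!b));
        // ...but `a` is a Map<FutureResult<bool, ()>, ...> while `b` is an
        // AndThen<...>: different concrete types, so they can't be stored
        // in the same Vec or returned from the same function without boxing.
        println!("{:?} {:?}", a.wait(), b.wait());
    }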


I've never quite understood the benefits of async/await over Go-style CSP. It seems to me that it's far better to just write sequential code all the time and then mark all the points where you add parallelism with spawn/go, instead of layering abstractions to approximate the same result with added (in my view unnecessary) keywords. Can anyone provide some insight?


With async/await you can tell from the type signature whether a given function might suspend, and then reading through the body you can see whether each function call is a non-suspending one or a possibly-suspending one. It makes it really obvious what's doing what. If suspension is your only effect then it probably doesn't matter (one unmanaged effect is ok), but when you have other effects that might interact, having all the suspension points be explicit is really useful. https://glyph.twistedmatrix.com/2014/02/unyielding.html gives one example.


Thanks, yeah, that's a useful insight. I guess it's rather a moot point with Go anyway since you have multiple goroutines executing on multiple cores at once.


There are technical benefits and drawbacks to both approaches.

I think the main reasons people prefer one over the other are philosophical: basically, some people prefer managing the executors of concurrent code (goroutines/threads/whatever); others prefer managing the points of dispatch to executors.

You can do both in either paradigm, but those are what I see as the default modes of thinking. Neither is inherently better; it just depends on how you personally think about concurrency, and what concurrent tasks you need to model.


Async/await can be cheaper given other constraints, e.g. Go ends up using a lot of GC infrastructure to keep stacks small/minimize memory use, which makes FFI to C-style languages expensive.

If syntax is all that matters, then sequential code is great, but there's more than that for many tasks.

(In any case, Go is adding layers of abstraction too: it is exposing a sequential interface over the OS's async APIs, whereas async/await is typically exposing them more directly.)


Go has a lot of complexity and slowdown around FFI due to all the internal context switching, locking, and stack management. Even if there are benefits for this use case, such a drawback is not acceptable for a systems language intended for embedding and low-level libraries.


go-style csp is just async/await hidden in the syntax.


It's not, because it's not built on promises at all. Go uses separate stacks for every goroutine.


> some very unreasonable design decisions (namely, each future generated as the result of a computation on a previous future has a different type, even if they resolve to the same types upon evaluation of the future -- basically abusing the type system to store the futures chain)

If you don't need the performance, you can get back to the dynamic-language world by boxing everything (.boxed()). The generics approach lets everything be compiled down to a single state machine, exactly like Iterator.
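
A sketch of what that boxing looks like with futures 0.1 (trait objects erase the combinator types, at the cost of an allocation and dynamic dispatch):

    extern crate futures; // assumes futures = "0.1"

    use futures::Future;
    use futures::future;

    fn main() {
        // The two branches have different concrete types;
        // boxing unifies them under a single trait-object type.
        let f: Box<Future<Item = bool, Error = ()>> = if true {
            Box::new(future::ok(true).map(|b| !b))
        } else {
            Box::new(future::ok(false))
        };
        println!("{:?}", f.wait());
    }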


hyper is #6 and actix is #7

https://github.com/actix/actix-web

Rust is represented well!


I tried Actix a while back and it was very intimidating compared to some other frameworks. I'd like to jump back into it at some point, but right now for my prototype I got lazy and went with Rocket.rs.

Any suggestions of Opensource projects that use Actix?


You should check it again :) actix-web now has a user guide: https://actix.github.io/actix-web/guide/

here is irc bot (wip) https://github.com/DoumanAsh/roseline.rs


Is there a guide just for using the actix framework that explains what are actors, how they are different from other abstractions and the pros and cons?


There is no guide for actix yet, but I am planning to add one.


Ah, I didn't check your username. So it's your project; awesome work and thanks for doing it. :D


Thanks fafhrd!


I wouldn't post Rust benchmarks in here; it's behind Java in every scenario.


It's not behind in every scenario; notably, not in the plaintext one.

One reason why things are behind in some of the other scenarios is because our database driver stuff is synchronous at the moment, and that really hurts on these benchmarks. We'll get there!


Maybe in web, but overall I don't think this is true [0]

[0]: https://benchmarksgame.alioth.debian.org/u64q/compare.php?la...


Which says more about Java than Rust. Despite all the "Java is slow" boohoo modern Java is a blazing fast language for many problems. Rust is young, still much potential.


Stable Rust is only two years old. We just need some more time.


Performance is one dimension of many.


> If you’re a fan of str::find, which is used to find a given char inside of a &str, you’ll be happy to see this pull request: it’s now 10x faster! This is thanks to memchr.

Wait, how is that safe? Aren't Rust strings UTF-8? Even if you search for a ASCII character, couldn't it conflict with the second byte of a different unicode codepoint?


Sure, and the code deals with that case.

https://github.com/rust-lang/rust/blob/5cf55165fae5c8538db5c...

This does mean you have some somewhat pathological cases, but they're relatively rare because most strings come from only a couple unicode blocks, so all bytes other than the first will be from a really small set. A further optimization could be to fall back to the original find code if we realize we're hitting a pathological case.

(Also, FWIW, searching for an ASCII char can never cause this, because of UTF-8's self-synchronizing property -- https://en.wikipedia.org/wiki/UTF-8#Description -- but given that we only search for the last byte, searching for a multibyte codepoint can lead to false positives with a shared last byte. We handle that.)
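
Here's a sketch of that strategy (my illustration, not the actual std code, with a plain byte scan standing in for the real memchr): find the needle's last byte, then verify the full UTF-8 sequence to reject false positives.

    fn find_char(haystack: &str, needle: char) -> Option<usize> {
        let mut buf = [0u8; 4];
        let needle_bytes = needle.encode_utf8(&mut buf).as_bytes();
        let last = *needle_bytes.last().unwrap();
        let bytes = haystack.as_bytes();
        let mut offset = 0;
        // memchr would do this scan with SIMD; `position` stands in for it.
        while let Some(i) = bytes[offset..].iter().position(|&b| b == last) {
            let end = offset + i + 1;
            if end >= needle_bytes.len()
                && &bytes[end - needle_bytes.len()..end] == needle_bytes
            {
                return Some(end - needle_bytes.len());
            }
            offset = end; // false positive: keep scanning past it
        }
        None
    }

    fn main() {
        assert_eq!(find_char("héllo", 'é'), "héllo".find('é'));
    }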


https://github.com/rust-lang/rust/pull/46735

TL;DR, it's not a naive straight `memchr`. `memchr` will generate false positives, then they use knowledge of UTF-8 to weed out the false positives.

The pathological case is worse, but the overwhelming majority of cases are better.


If I'm not wrong, UTF-8 continuation bytes always start with binary 10, whereas ASCII bytes always start with 0 and multi-byte leading bytes start with 110 (2 bytes), 1110 (3 bytes) or 11110 (4 bytes).


That doesn't solve the problem of searching for multibyte character AB and finding multibyte character CB instead :)

Or, searching for multibyte character ABB (like U+A041 YI SYLLABLE PA) and finding the first "B" instead of the second "B".

Sadly there's no memchr for consecutive sequences of characters. memchr2/memchr3 let you search for multiple needles in the haystack, not a bigger needle.


There’s memmem...


memmem is substring search. You could use it for char search, but does memmem know about UTF-8? If, say, memmem uses memchr internally in a skip loop and it happens to look at the first byte in the UTF-8 encoded codepoint, then it is going to perform a lot worse in most cases involving the search of text that is in a language other than English (because it will wind up searching for a very common byte).

Or maybe memmem does something else clever for shorter needles. Dunno. Best to benchmark it. But it's not an obvious win a priori.


I do want to see if a 2/3/4-byte memchr can be written that uses the same bitmasking as regular memchr but extended to more bits (perhaps via SIMD).

That would be pretty neat.


Yeah that's what I was thinking memmem might do. But yeah, I bet you that Hyperscan has a vectorized algorithm for this somewhere. It is just a matter of finding it.


Yeah, I suspect it wouldn't actually be a win, though of course it will depend on the implementation. Just pointing out that "there's no memchr for consecutive sequence of characters" isn't exactly true.


I mean, you can count a simple string search loop as "memchr for consecutive sequences" as well, what I meant was a technique that uses the same bitmasking trick as memchr, applied to multiple bytes (perhaps via SIMD). It's doable.


Correct. This is what makes UTF-8 "self-correcting": you can always tell which code unit of a code point you are looking at.


UTF8 streams are self-synchronizing: an ASCII byte starts with 0, a most-significant byte starts with 11 and a continuation byte starts with 10.

You can memchr for the last byte (either 0xxxxxxx for a single-byte codepoint or 10xxxxxx), then in the latter case walk backwards to the leading byte to see if the entire codepoint matches.


An ASCII char would have its MSB 0 in UTF-8, and any byte in UTF-8 which is not a single ASCII char has its MSB as 1.


You can use strchr() for ascii, or strstr() otherwise. See utf8, searching section in http://www.pixelbeat.org/docs/utf8_programming.html


Can someone sell me on using Rust over Python? I am just generally curious as to the advantages beyond Rust being compiled.


When I use Python I miss Rust's enums, the pattern matching of those enums, and the static types that help refactoring (the compiler spots where one change has knock-on effects in the rest of the project..).

The way Rust enums and structs make it easy to define new types to guide your program is a particular highlight for me.

I don't think Rust is better than Python for every task. Python is a lot simpler if you can get something done with its built in types (dicts, sets and lists). Python's dynamic types are also a great benefit for some tasks, where Rust's dynamic dispatch support is very limiting.

Crazily, handling dependencies and building a project is way easier in pure-Rust than pure-Python since it's standardized.
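
For the curious, a small self-contained example of the enums + pattern matching described above (my illustration):

    enum Shape {
        Circle { radius: f64 },
        Rect { w: f64, h: f64 },
    }

    fn area(shape: &Shape) -> f64 {
        // Exhaustive: adding a Shape variant makes this a compile error
        // until it's handled, which is what makes refactoring safe.
        match *shape {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rect { w, h } => w * h,
        }
    }

    fn main() {
        println!("{}", area(&Shape::Rect { w: 2.0, h: 3.0 }));
    }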


Go is a good middle ground between simplicity and being fast/dynamic.


Walk middle, sooner or later get the squish just like grape.


> Avoiding both these extremes, the Tathagata (the Perfect One) has realized the Middle Path; it gives vision, gives knowledge, and leads to calm, to insight, to enlightenment and to Nibbana.


But I bet Mr. Miyagi could beat up your Tathagata in a coding contest.


> Especially how Rust enums and structs make it easy to define new types to guide your programs are a highlight for me.

In recent versions Python got some nice improvements in this area, with the new way of creating NamedTuples, Data Classes and the typing module. I wrote a stackoverflow answer recently that sums it up, if anyone is interested:

https://stackoverflow.com/a/45426493/1612318


Those artifacts in Python don't really compare with what the parent laments. Rust's sum types are one of my top reasons for preferring the language.


Those are not sum types though.


While there is overlap between problems that might be sanely solved using python and problems that might be sanely solved using rust, the languages target very different problems.

You could write parts of a network server or parts of an OS kernel in python. You could write nearly the entirety of it in rust.

I would say that you need not use rust if python works well for you, but rust is probably better than C for implementing libraries that might be called from python.


Python is great, and getting better.

Just religiously use mypy. (Gradual typing!)

And profiling, and when something is reallly slow, bring out the Cython.

And when something is still not fast enough, well ...

Release the Rustacean!


They're really very different, but...

+ Static typing (extra safety, robust refactoring, code completion etc.)

+ Much much faster and less memory use

+ Compiles to a relatively standalone binary (not as good as Go though)

+ No Python 2/3 nonsense to deal with

- Much more complicated. You have to deal with lifetimes and borrowing and so on. It's really very difficult and we still don't know how to write some types of programs nicely (e.g. GUIs)

- Slow compilation times

- Can get pretty verbose and full of type boilerplate

Honestly if I was coming from Python I think I would switch to Go first. It is still way faster than Python, has a very nice "batteries included" standard library, static typing, very fast compilation and makes nice static binaries. The downside compared to Python is that it isn't very expressive at all - you'll find yourself writing out loops where you might have used a one-line list comprehension or something in Python.


> No Python 2/3 nonsense to deal with

Instead you have the stable/nightly nonsense to deal with, like Clippy and Rocket only working on nightly for example.


> Compiles to a relatively standalone binary (not as good as Go though)

What's the difference?


We use glibc by default, so while Rust statically links all Rust code by default, that's still dynamically linked. You can use MUSL to remove that, where appropriate, but it's not the default.
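
Concretely, producing a fully static Linux binary looks something like this (the binary name is a made-up example):

    $ rustup target add x86_64-unknown-linux-musl
    $ cargo build --release --target x86_64-unknown-linux-musl
    $ ldd target/x86_64-unknown-linux-musl/release/myapp
            not a dynamic executable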


To add an additional clarification here for others (I know Steve knows this :P), by default, at least on Linux, whether Go produces a dynamic executable or not depends on which things you import. For example, a simple hello world program will produce a static executable:

    $ cat main.go
    package main
    
    import "fmt"
    
    func main() {
            fmt.Println("Hello, world!")
    }
    $ go build
    $ ldd scratch
            not a dynamic executable
But anything using the network, like a simple HTTP server, will cause the Go tool to build a dynamic executable:

    $ cat main.go
    package main
    
    import (
            "log"
            "net/http"
    )
    
    func main() {
            // Simple static webserver:
            log.Fatal(http.ListenAndServe(":8080", http.FileServer(http.Dir("/usr/share/doc"))))
    }
    $ go build
    $ ldd scratch
            linux-vdso.so.1 (0x00007fff161f0000)
            libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f796b0e8000)
            libc.so.6 => /usr/lib/libc.so.6 (0x00007f796ad31000)
            /lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f796b306000)

Now of course, you can set flags to force Go to always produce a static executable, but the difference here isn't too much different with Rust. With Rust, you do need to install MUSL, add the target and then rebuild with an extra flag, but it's all very simple.

(To be clear, I am not criticizing Go here! There are good reasons why they link to libc by default when network stuff comes into the picture.)


What's the problem with that? Do you use any of glibc's private API?


The problem is that you have to think about it at all. If you compile on a machine with a new libc, and then try to run it on a machine with an old libc, it won't work. So you end up doing what most people do, even outside of Rust, which is take the oldest CentOS box you can stand and do builds on that.


The Autopackage project had a solution for this problem years ago called apbuild. It would search for the versions of glibc symbols and use the oldest possible ones, which resulted in portable binaries. With some quick googling I found the code here: https://github.com/DeaDBeeF-Player/deadbeef-plugin-builder/t... Probably doesn't work anymore though :/

It's a bit sad that this is still unsolved, probably due to the GNU people's hatred for closed-source distribution.


It's the GNU way:

1. Refuse to do something properly because it will make it too easy... or something.

2. Wait until someone else gets fed up enough and makes a better version.

3. Fade into irrelevance.

See: GCC/Clang, glibc/musl, Bash/??? (someone please make something sane)



Python:

Think about your domain problem, don't worry about CS and perf

Use the REPL to "touch data with your own hands"

Rustlang:

Fast, fast, fast

Everything bundled inside a single binary that you can just ship - no dependency hell for your users


Rust can give you speed and safety in one package. But you sacrifice ease of coding, especially as the IDE ecosystem is not there yet in terms of ease of use.

I've taken a few apps from Python to C#/F# and seen speed increases around 50x. With Rust I could increase the speed from C#/F# by around a factor of 10x for string heavy applications processing large files. So the speed difference might be 500x between a Python and Rust application (depends HEAVILY on the problem you're solving though, a lot of the problems that can be vectorized easily work pretty fast in Python).

I'd say I'm faster at writing F# than Python, because the type system helps me build more easily maintainable code, and the functional style is better for the way I think (I grew up with structured programming, learned OO in uni and thought most of it was bananas and more obfuscation than help, and I'm happy the world is now slowly getting back to something sensible). It takes me a bit longer to write Rust programs, though.

Cargo is the best build system of the 3 languages though, you can easily add crates (imports) and the build system handles everything, even unit testing. I gotta say on the project setup front Rust with Cargo is much better than any other language I know.

And I think once we get good IDEs with robust code completion and error detection, I can become quite fast at Rust as well.


My Rust dream IDE would visually display lifetimes and how they relate to each other. :)


That would be a really cool feature.


RLS works just fine on my Linux box, error detection and completion and formatting. Debugging is so so, not very good. But on my Mac box, it's always crashing.


For me it's a) crashing quite often on Windows and Linux, and b) not really autocompleting anything beyond really simple cases where I don't need it, like "Vec::<String>::" giving me `new`. On most things it's just really awful.

Error detection, at least in VSCode is only displayed after compiling, that might be the plugin though.


I have to make a correction, the Rust (rls) plugin now works much better. I updated my rust distribution.


Just my favorite single reason: I love ownership. I love knowing what a function is going to do with the HashMap/dictionary I pass into it. I love knowing when I'm the only piece of code that can touch something. Sure, there are benefits in terms of not needing a GC too, but I just love how clear it makes everything.


Yeah, I vacillate between judging algebraic data types and ownership as the thing I like most about Rust over other languages (formerly Python, but I do much more JavaScript now), but I think ownership wins most of the time. Strong ownership protects from vast swathes of really annoying bugs that are unpleasantly easy to unknowingly inject and particularly hard to track down. Certainly I believe that Rust’s strong ownership model is the defining feature of the language—most of the language’s features come about as a consequence of it (though this dominance has slowly been diminishing since 1.0).


Performance is quite a bit higher for rust. And it's a much safer language by design (and forces the programmer to be as such).

That said, development time in Python is much faster.

Honestly, it depends on what you're building.


Rust is not safer than Python in Rust's own terminology. It takes more work than in C, but you can write memory bugs in Rust, by design. Python will catch those damn near every time, and if it doesn't you file a bug. Sure, there's less serious classes of bugs that Rust is likely to catch, but that's not really safety except in a definition loosened to near uselessness.


> you can write memory bugs in Rust, by design

Unless you're talking about unsafe rust, that's wrong. And if you ARE talking about unsafe rust, well you can do the same in Python.

I do agree that Python and Rust are both "memory-safe" languages. I'm not sure about the other kind of safety Rust provides, though: thread safety. That's really where Rust shines: you can write concurrent programs and be statically guaranteed to have no data races. I'm not sure how this translates to Python.


You can describe "safe" along different axes and python is not completely immune to memory issues. I guarantee that if you start using ctypes you'll manage it at some point.


I'm curious what parts of Rust you think are safer than Python.

Edit: The only big thing I can think of is the safety of compile-time type checking to ensure you won't end up with some kind of runtime error from mismatched types. Is there something I'm missing?


Immutability by default, exhaustiveness checks (making sure that you match over all variants of an enum or cover all valid values of an integer), return flow validation (did you forget a return somewhere? Is it the right type?), and, as you mentioned, type checking being mandatory: all libraries define their accepted types. Not only that, with types being "cheap", it is customary to wrap base types in explicit newtypes whose misuse won't compile:

    struct Point(u32);
    fn foo(p: Point) { /* ... */ }
    foo(0u32);
        ^^^^ expected `Point`, found u32


When you work with the type system it checks a lot more than you check in Python. In Python you pass around untyped tuples and maps because defining classes won't gain you anything (and is surprisingly cumbersome and unpythonic), whereas in an ML-family language like Rust you use lightweight, fine-grained types to check that every function call is correct and you can refactor fearlessly.

(I'd recommend using a language with garbage collection unless you really need to not though; OCaml is quite Rust-like but means you won't have to worry about the borrow checker)


> I'd recommend using a language with garbage collection unless you really need to not though; OCaml is quite Rust-like but means you won't have to worry about the borrow checker

OTOH, Rust arguably has a better story for tooling (build system, dependency management, etc.), a more full-featured standard library, and better concurrency support. Depending on your priorities, some or all of these might be worth having to learn the borrow checker.


Well, OCaml and Haskell predate Rust by decades. I was delighted to see Sum-Product types in Rust when I played with it. They go a long way toward cleanly modeling the problem domain. Obviously you can simulate them, but it's much easier to get wrong or "cheat" (say, having a bunch of fields, only some of which are valid depending on a tag).


> I was delighted to see Sum-Product types in Rust when I played with it. They go a long way to cleanly model the problem domain.

Umm yeah, they've been a part of every ML-family language since the '70s, OCaml, Haskell and Rust included.


(FullyFunctional seems to understand this, as far as I can tell)


I do, but I wasn't aware that Rust was considered in the ML family (ML as Meta Language, like SML/NJ). I'm somewhat skeptical of that.

What I see from Rust is a lot of the really great things from functional languages (esp. strong typing) and beyond, but applying them to a language aiming as low as C. That's certainly interesting.


Rust also enforces deep immutability by default, which makes resource sharing bugs much less common.


The type checking catches lots of memory errors (irrelevant in a GC language like Python, but not caught by the compiler in C or C++), and it also catches data races when writing concurrent code, which is definitely something that could happen in Python.


> The only big thing I can think of is the safety of compile-time type checking to ensure you won't end up with some kind of runtime error from mismatched types. Is there something I'm missing?

It's this but designs in statically typed languages with more expressive type systems tend to push a lot of the logic into the type system so you can't use APIs incorrectly. This isn't exclusive to Rust but I do have a few examples:

The Glium OpenGL wrapper puts a lot of effort into moving errors to compile time. The overview[1] explains a bunch of them.

[1] https://github.com/glium/glium#why-should-i-use-glium-instea...

The Servo project uses the type system to tie rust code into the spidermonkey garbage collector so it can't be misused[2].

[2] https://research.mozilla.org/2014/08/26/javascript-servos-on...

More abstractly, one of the more unique features of Rust's type system (linear types) is the ability to be sure a reference is destroyed. This makes Rust particularly good at describing state machines that are compile-time checked. Using standard OOP terms, each state is a class and its methods are the valid transitions. You can't take an invalid transition because the method is missing. You can do this in any language, though it tends to be only done in statically typed languages and it's awkward in Python. What makes Rust unique is that calling the method will consume the instance so you can't accidentally make a transition twice. Trying to call the method again is a compile error. A concrete example of this is state_machine_future[3], which generates an asynchronous state machine using macros.

[3] https://github.com/fitzgen/state_machine_future#example
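
A minimal hand-rolled version of that pattern (my sketch, not state_machine_future itself): each state is a type, and transitions consume `self`, so taking one twice is a compile error.

    struct Draft { body: String }
    struct Published { body: String }

    impl Draft {
        // Takes `self` by value: after publishing, the Draft is gone,
        // so you can't accidentally publish twice.
        fn publish(self) -> Published {
            Published { body: self.body }
        }
    }

    fn main() {
        let draft = Draft { body: String::from("hello") };
        let post = draft.publish();
        // draft.publish(); // compile error: use of moved value `draft`
        println!("{}", post.body);
    }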

In a slightly different use of the type system, the Rocket framework has a concept called Request Guards that allow you to map from something in the Request to a parameter in the request handler function. The overview[4] has a simple example on the "Dynamic Params" tab that maps url pieces to a string and an int. This mapping, however, is extensible and you can map to anything. If your handler needs an AdminUser, you can have the request guard pull the user id off the request, connect to the db, retrieve the user, verify the user as an admin, and only enter the handler if all that is successful. This means you can move all the logic and error handling around this into a single place that can be reused just by putting an AdminUser parameter on a handler. I've seen this done with middleware in other frameworks but in Rocket it's on-demand per-handler. As a result, the handlers only have to implement the happy path, which keeps them more compact than I've seen in dynamic language frameworks.

[4] https://rocket.rs/overview/
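Roughly what that looks like, sketched against Rocket's 0.3-era API (crate setup and attributes omitted; AdminUser and the cookie check are invented, and a real guard would do the db lookup described above):

    use rocket::Outcome;
    use rocket::http::Status;
    use rocket::request::{self, FromRequest, Request};

    struct AdminUser { id: u64 }

    impl<'a, 'r> FromRequest<'a, 'r> for AdminUser {
        type Error = ();

        fn from_request(req: &'a Request<'r>) -> request::Outcome<AdminUser, ()> {
            // Hypothetical check: pull an id off a cookie, look the
            // user up, verify the admin flag, and 403 otherwise.
            match req.cookies().get("user_id") {
                Some(_cookie) => Outcome::Success(AdminUser { id: 1 }),
                None => Outcome::Failure((Status::Forbidden, ())),
            }
        }
    }

    #[get("/admin")]
    fn admin_panel(user: AdminUser) -> String {
        // Only the happy path lives here; the guard handled the rest.
        format!("welcome, admin #{}", user.id)
    }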

So the general idea is to use the type system to help you use stuff right. As with anything, it's possible to overdo it and get crazy boilerplate-heavy code where you have to convert/cast all over the place and unanticipated use cases can't be done, but it tends to be helpful, particularly with autocomplete.


That's a big one.


I pretty much can't stand to write python after learning and using Rust. The static type system in Rust is just so great, and easy to use once you get the hang of it.

This especially shines through if you have to use code someone else wrote.


I'm happy there are others seeing the benefits of strong static type systems (strong is important; the C type system, which is rather weak in many instances, is not as helpful).


> Can someone sell me on using rust over python?

I can't. But I can try to sell you Nim [0] as a compiled/faster addition to your Python, from my own experience of using it over the last few months, coming from Python.

It uses significant whitespace and the syntax will surely look familiar to any Pythonista, and the entry barrier is much lower (i.e. the learning curve is much shallower) than Rust's.

Speed improvements I experienced are in the range of 10-100x, while the code doesn't look that much different than Python. I have solved Advent of Code in both Nim and Python in parallel [1] so you can compare the solutions/syntax yourself.

[0] https://nim-lang.org/

[1] https://github.com/narimiran/AdventOfCode2017



That's a bit like asking someone to sell you on using a plasma cutter over a Dremel tool. It can be done but it's not particularly useful or sensible.


Everything you can write in Python you can write in Rust, and everything you can write in Rust you can write in Python.

The main differences are found in tooling, library ecosystems, development speed and runtime overhead.


> Everything you can write in Python you can write in Rust, and everything you can write in Rust you can write in Python.

That really isn't true in any practically meaningful sense. 'The main difference is everything is different' is not a very strong counter-argument.


> "'The main difference is everything is different' is not a very strong counter-argument."

That wasn't my counter argument, and I can't work out what you misinterpreted about what I said to get that impression.

My point was, unlike a Dremel and a plasma cutter, Rust and Python can be used for the same tasks. As they're both Turing-complete, anything you write in one can be written in another. The differences I suggested were to highlight the relative strengths, or in other words how much work you'd need to put in to get the desired result.

To be clear, if you hadn't tried to dismiss the GP who requested information about how Python and Rust compared to each other, I wouldn't have replied.


> As they're both Turing-complete

The moment you trot that out, you lose your 'but your analogy is terrible' privileges by default, however terrible my analogy is.


How so?


Because more or less everything is Turing-complete. Your mom is probably Turing-complete. 'Your mom' is a pithy but lousy argument.


> "Because more or less everything is Turing-complete."

If you understand what it means, then you'd know that it means that all programming languages are comparable, from assembly to Haskell, in the sense they can all be used to do the same job. Therefore, requesting a comparison of the strengths and weaknesses of two programming languages is not a foolish request. The point of such a request is to find out where it makes sense to use a particular language. To give another example, let's say someone asks if it's a good idea to write a web server in assembly or in Go. It's certainly possible in either, but in order to explore what the best choice is then further discussion is required. Comparing a Dremel to a plasma cutter is an attempt to shut down this discussion, which doesn't help in furthering the knowledge of the participants.


> If you understand what it means, then you'd know that it means that all programming languages are comparable

I think this is where we strongly diverge and I resent, a bit, your implication that because I don't buy into this I somehow 'don't understand what it means'. I understand what it means. I just think it's plainly ridiculous.


> "I just think it's plainly ridiculous."

You compared a Dremel to a plasma cutter as a way to explain the differences between Python and Rust, how is that any less ridiculous?


It’s an analogy and like most analogies it’s imperfect. My point was and remains that a really bad argument (“everything is Turing-complete!”) is not much of a critique of that or really any analogy. Because it’s so obviously shallow and bad.


> "My point was and remains that a really bad argument (“everything is Turing-complete!”) is not much of a critique of that or really any analogy. Because it’s so obviously shallow and bad."

You're getting hung up on a minor detail. My main point was not about Turing completeness. My main point was that requesting a comparison of programming languages is a legitimate request. Turing completeness is just one angle by which to see this. As you seem to object to that suggestion, there are plenty of other ways to explain it.

For example, one way to compare languages is to look at the key libraries and frameworks that have been built up around them. So for example, Rocket vs Django, Diesel vs SQLAlchemy, etc... If we're comparing languages, we should compare what the languages make it easy for us to do, and libraries are a big part of that.

Another way to look at this suggestion is that "writing performant code" is something that's easier in some languages than others, and the libraries built using those languages are likely to reflect that. However, performance is just one metric by which to compare languages/libraries/frameworks, which is another reason why these comparisons can help in building an understanding in when a language is likely to be the best one for the job.

Lastly, to make this as clear as possible, I'm not advocating for Python or for Rust, I am only advocating for language comparison as a helpful approach when building familiarity with programming language strengths and weaknesses. Both Python and Rust have niches they excel in, but there's also a large amount of overlap. As an example, game frameworks exist for both Python and Rust, and discussion can help others find what's best for them.


Python doesn't give you the low-level control needed for... low-level stuff. And even if something can be implemented, that doesn't mean it will be fast enough.


If that's your impression, then I'd suggest Python is used for more than you're currently aware of. Here are two examples of Python being used for "low level stuff":

https://micropython.org/

http://www.myhdl.org/


Nothing against Python (I'm using it), but those solutions have a huge performance impact compared to C/Rust on bare metal. Interrupt handling and GPIO operations are orders of magnitude slower in Python. Take a simple board and run basic bit-banged GPIO PWM: you need a "beefy" Cortex-M family chip to get performance similar to what an 8/16-bit uC gives you with C. Nice for hacking stuff, but I wouldn't trust my air conditioner firmware to run for months without a power cycle on an interpreted language :) C/Rust give you static analysis of memory usage, so you can predict the software's memory footprint. With a Python VM, a single bug in the VM can shut down your code or eat the whole heap - maybe now, maybe after six months of running your code. Nice hacking tools, though.


> "those solutions have huge performance impact comparing C/Rust on bare-metal"

I didn't suggest Python is the optimal solution for low level coding, I just suggested it is an option.

> "You need "beefy" Cortex-M family to get similar performance that you can get with 8/16-uC with C."

In the case of MicroPython, people find it usable on platforms like the ESP8266, which is less powerful than a "beefy" Cortex-M. As before, I don't deny that there is a performance overhead compared to C, but it clearly has some traction in the embedded space.


Uh, your examples of low-level Python are full of C.


If you understood what MyHDL is doing, you'd know that doesn't matter.

Furthermore, most Python implementations are built using C, including the canonical one (CPython). It is possible to have performant Python implementations without using C, if that's what you're getting at.


No, I am not simply talking about performance. I see you keep referring to Turing completeness in other replies, even condescendingly implying that your HN peers don't understand it, when you are clearly the one whose understanding needs to mature. A Turing-complete system is one whose rules are powerful enough that you are able to implement "anything" INSIDE of them. That doesn't mean you are able to break out of the system and manipulate the underlying environment, or even infer anything about how it's implemented. Just to take my point to the extreme: Minecraft is Turing-complete. That doesn't mean you can bootstrap your CPU with it. You can, in theory, emulate your CPU on top of it, though.

Actually, you seem to suggest that not even performance is any hindrance to Python. In which case I'm lost for words. You win.


> "Does'nt mean you are able to break out the system and manipulate the underlying environment"

I didn't think I'd have to spell things out so excessively, but... let's extend the description then... Any Turing complete language where you can write files to storage. To give an example of why the storage is relevant, let's look at Java. Java's sandboxed in the JVM right? However, if you think about it more broadly, as long as you can freely write files to disk you can write machine code to disk. In other words, as you can write a compiler in Java, you can break out of the sandbox.

As for Minecraft, I don't know enough about it, but if it's possible to write a compiler in Minecraft then that would have the same escape hatch too. It might not be the tool you'd choose to write a BIOS in, but we're not looking at whether something is optimal, we're looking at whether something is possible.

> "I see you keep refering to Turing completeness in other replies"

> "Actually you seem to suggest that not even performance is of any hindrance to python. In which case I'm lost for words. You win."

Perhaps you overlooked the following quote as it didn't fit with what you decided you wanted to say...

"I didn't suggest Python is the optimal solution for low level coding, I just suggested it is an option."


Python and Rust exist in very different application domains, so I don't think they are very interchangeable with each other. Now if you are writing C++ code...


What do you use Python for?


I would say types. I think types have been a real help to me, coming from mostly dynamic languages. They force you to think about your problem more than just throwing some code together. Another thing I enjoyed is easier concurrency. But apart from that, I feel like it's a matter of choosing the right tool for the right task.


Instead of learning weird new stuff like Rust or Go, I find that C/C++ and Python cover the entire range of problems you will ever need to solve. It takes a lot of time to start being productive in Rust/Go, and the benefits are vague. Instead I'd rather learn to use C++ and Python better.


Learning weird new stuff can have a positive impact even if you don't end up using it as your workhorse, though.

After coding in Rust for several weeks I went back to C++ for my day-to-day work since I'm still more productive with it and the code I produce has no safety/security implications. The knowledge gained in those few weeks of Rust readily translated into C++ skill improvements, even though I already considered myself a solid C++ programmer before. Being forced to think about ownership, borrowing, move semantics, etc. constantly gets you into a mindset that is very helpful also outside of Rust.



You can always try Rust in parallel with Python code with libraries like pyo3.
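For the curious, this is roughly the shape of a pyo3 extension module (adapted from pyo3's hello-world; exact attribute and macro names vary between pyo3 versions, so treat this as a sketch):

    use pyo3::prelude::*;

    /// Formats the sum of two numbers as a string.
    #[pyfunction]
    fn sum_as_string(a: usize, b: usize) -> PyResult<String> {
        Ok((a + b).to_string())
    }

    /// A Python module implemented in Rust.
    #[pymodule]
    fn string_sum(_py: Python, m: &PyModule) -> PyResult<()> {
        m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
        Ok(())
    }

From Python it then looks like any other module: `import string_sum; string_sum.sum_as_string(5, 20)`.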


Satisfying to see rustfmt hitting stable, even if only in preview form.


Agreed!


Incremental compilation turned on by default :-)


Hmm... when I read that, I got excited thinking it would be like Common Lisp's, where the idea is to incrementally compile in changes to a running program, to the effect of being able to see the effects without having to restart it. This just sounds like what Makefiles do, but at a sub-file/AST level.


It's an overloaded term; basically, it means both.

Your analogy to Makefiles is right in a big-picture sense. In the small, it's different, but you've got the big idea correct.


Is rustfmt fast enough to reasonably be put on a save hook?


I've had it as a save hook and never noticed a slow down from it.


I went into one of my projects and did a `cargo fmt`, took 2.2 seconds. Dunno if that's within your tolerances or not. I personally have CI check it, rather than on save.


I know it's fairly pointless performance bashing but I just couldn't stop myself :)

    # In the crystal-lang/crystal repo
    $ find . -name '*.cr' -print0 | wc -l --files0-from=- | tail -n 1
    252745 total
    
    $ time bin/crystal tool format
    Using compiled compiler at `.build/crystal'
    bin/crystal tool format  0.62s user 0.04s system 106% cpu 0.625 total
and only 48ms to format the longest file in crystal (the parser)!

Performance makes little practical difference as long as formatting a single file is fast enough to run on save, but it does highlight the very interesting way the Crystal formatter is implemented. I'm not sure how rustfmt is implemented, but Crystal's formatter parses the file into an AST and then does a single visitor pass over it, in order, with a lexer in tow. It then basically reconstructs the entire file from scratch using data from both the AST and the lexer (the AST visitor pass and the lexer position have to be kept in sync). Surprisingly, this doesn't make the formatter "too strict" by wiping out all existing style information, as one would expect. It's a really cool - if a little messy - tool and, I think, one of my favorite parts of Crystal (it doesn't have any config options either).


Hmm, I assume that cargo fmt runs on all the files. When I wrote a lot of Go I had gofmt on the write hook of my editor; it was fast enough for single files that there was never a problem. 2.2 seconds sounds rough, but if that's for lots of files it's not too much of a worry.


A commit or push hook is nice in my opinion. Seems like a middle ground between on-save and in-CI.


Been using it as a save hook in vscode (with RLS, too)... never noticed any slowdown because of it.

The bug where it _refuses to run_ if lines are > 100 chars is annoying


I've been running the nightly version with "error_on_line_overflow = false" and it's been a fairly smooth ride. It's worth mentioning that I also haven't seen it fail to format a line within 100 characters since switching to the nightly version a month or so back.

https://github.com/rust-lang-nursery/rustfmt/blob/master/Con...
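For reference, the setting goes in a rustfmt.toml at the project root (assuming the nightly-era option name documented at the link above):

    # rustfmt.toml
    error_on_line_overflow = false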


Yeah, I am also using nightly and manually making my lines < 100 characters so they can be formatted. It works, but it's a little annoying. :)


I recently rewrote a few small C programs in Rust (mostly to try the language features), and the experience was really good. I don't think I will ever start a new project in C again.


rustfmt looks very cool. I look forward to using that particular feature.


Is Helix still the best way to bridge Ruby and Rust?



