Go is in a sweet spot where it is often used to compete with both groups: [Rust, C/C++] and [Node, Python, Ruby, etc]. The reason GP said that is probably garbage collection.
I've done a bit of Rust in my job, and there are some basic things going against it:
- steep learning curve (meaning that for the first ~6 months, you or your colleagues are unproductive and write bad Rust, which your company then builds on over time)
- bad error messages (even though that was a focus for the Rust team!)
- frustratingly complex to set up test coverage for
- slow analyzer speed (*super laggy* in CLion, though this might be a JetBrains issue)
- slow compilation times (I heard somewhere that "Go just goes". I've also written some Go in my free time, and compilation is fast. Well IMHO, "Rust will rust" - it's very slow. Generics can make compilation even slower.)
- verbose. I've seen just a few lines of JS get replaced with hundreds or thousands of lines of Rust.
Rust lives in this interesting spot where, on paper, it should be superior to anything... but in practice, it's not a good choice in most cases.
It's very easy to ramp up someone in Go who's had a standard CS education and written C/C++ before. It's also simple enough syntax-wise for someone who knows Python well enough to understand references, etc. Its stylistic restrictions and not being OOP-first also mean that codebases are generally readable. Compilation is also extremely straightforward.
With Rust, I've found even very experienced C++ folks have a long ramp-up period, the development toolchain is slow, and the ecosystem is limited.
Sure, for example there are projects to enable Rust usage with CUDA. But few are inclined to actually bother implementing a new BLAS and GPU accelerated tensor library with Rust.
I do think 10 years from now Rust will start getting more adoption as the ecosystem and tooling improve.
But it's hard to argue with Go, where you'll typically get results that are faster than, or at worst comparable to, Java, minus the OOP design-pattern gobbledygook, and with a simple concurrency model and a simple build process. It's "good enough" for 99% of use cases.
> With Rust, I've found even very experienced C++ folks have a long ramp-up period
I've found C++ folks especially have the hardest time with Rust, because they approach it using C++ idioms and habits, then get frustrated when they can't do things the way they're used to.
I've had better success teaching Java people Rust. They find it much easier to learn than C++, and I can get them writing idiomatic Rust code quickly, while C++ devs are still trying to get their coding habits past the borrow checker.
Go has bad interop with C/C++ and languages that expose the C ABI (including Rust). You can use cgo as a workaround, but it's clunky. So that makes an interesting case for other high-level, novice-friendly languages like Nim, Crystal or Val/Vale/Vala.
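For what it's worth, the Rust half of that boundary is the easy part. A minimal sketch (hypothetical function, nothing library-specific) of exporting a plain C-ABI symbol that Go would still have to call through cgo:

// Rust: expose a C-ABI function. The clunky part is on the Go side,
// where calling even this means pulling in cgo.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}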
Not the fault of the Rust compiler per se, but in my case the errors that mismatched trait impls from async libraries yield can be downright suicide-inducing.
Esteban of course gave you an excellent response already, but just another bit of context here: while it may not be on rustc, the developers want to go beyond the norm. rustc understands if you write async/await syntax the JavaScript way, and directly proposes that you switch to the Rust syntax:
pub async fn foo() -> i32 {
    unimplemented!()
}

pub async fn bar() {
    let f = await foo();
}
gives
error: incorrect use of `await`
 --> src/lib.rs:6:13
  |
6 |     let f = await foo();
  |             ^^^^^^^^^ help: `await` is a postfix operation: `foo().await`
It isn't on rustc to understand this either, but it helps a bunch of people, so the team does it anyway.
Check sibling reply. Sadly I never wrote those down. :(
I'll definitely do so going forward.
The problem with that is impostor syndrome: I legitimately can't tell if I am an idiot and skipped some basic Rust training, or the error messages are truly confusing and unproductive.
But your messages help. I'll just write those down and send them to GitHub's issue list.
I'm not working on Rust anymore, but the previously stated position on this, which as far as I know is still the case, is that if it's confusing, you should file. Worst case scenario is "sorry, we can't fix that", but it's not an imposition to file issues. More is better. Because exactly as Esteban said, information is valuable. Even duplicates are valuable: they indicate that more than one person has run into this, and therefore it's more valuable than an obscure issue only one person sees.
> I legitimately can't tell if I am an idiot and skipped some basic Rust training, or the error messages are truly confusing and unproductive.
It doesn't make a difference. The compiler can't assume any level of proficiency. If a topic requires you to read the docs, it should tell you so. There are some "basic" things it relies on, but anyone with any level of programming experience should be able to read a diagnostic and either understand it outright or have enough clues about what to search for. So "this error is confusing because I didn't read chapter N of The Book" can be simplified to "this error is confusing."
Having examples of these is useful to see what we could get rustc to do. The general case might be impossible to deal with in a generic way, but we can target specific patterns libraries use and emit custom errors for them. The problem with these is that we have to be reactive: if we don't see a problematic pattern ourselves (or it isn't reported to us), we can't do anything about it.
Unfortunately, the last time I tried these code snippets was months ago. I was rushing like mad because it was a startup and I couldn't afford to just stop and write everything down, and... yeah, priceless info was lost.
Just recently I've been making a comeback, rewriting a tokio 0.1 library to the latest version, so I'll likely have a few examples that I can post... where? In GitHub issues?
Even if it is an "it hurts when I do this" without more context, it can be useful to bring the problem to our attention (but the more context you provide, the higher the chance we'll fix the problem).
The verbosity and complexity of `wasm_bindgen`/serialization between JS and Wasm (written in Rust) is primarily what I'm frustrated at here when I see hundreds or thousands of lines of Rust. A concrete example: creating a WebSocket client in JavaScript/TypeScript vs in Rust/Wasm.
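Roughly, and this is a sketch from memory (it assumes web-sys built with its WebSocket, MessageEvent and console features enabled), the Rust/Wasm side looks something like this, versus `new WebSocket(url)` plus an `onmessage` arrow function in JS:

use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;
use web_sys::{MessageEvent, WebSocket};

#[wasm_bindgen]
pub fn connect(url: &str) -> Result<(), JsValue> {
    let ws = WebSocket::new(url)?;

    // Every JS callback needs a heap-allocated Closure that has to be kept
    // alive (here: leaked with forget()) for as long as the socket lives.
    let on_message = Closure::<dyn FnMut(MessageEvent)>::new(move |e: MessageEvent| {
        if let Some(text) = e.data().as_string() {
            web_sys::console::log_1(&text.into());
        }
    });
    ws.set_onmessage(Some(on_message.as_ref().unchecked_ref()));
    on_message.forget();

    Ok(())
}

And that's before any (de)serialization of structured messages, which is where the real line count comes from.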
In general though (outside of Wasm), Rust is less readable.
And with regard to Rust errors, I've found errors related to Tonic and Diesel to be quite annoying/unreadable. The Diesel docs seem to blame Rust for this (can't find the docs for it right now).
Isn't this due to wasm having to access browser things mostly through browser JS interfaces?
e.g. browsers provide JS functions that are intended for JS and are not directly exposed to Wasm. So when your Wasm wants to access DOM objects, DOM functions, etc., it needs to go through a JS shim layer instead of being able to call them directly.
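For instance, a rough sketch (assuming web-sys with its Window, Document and Element features enabled): even a trivial DOM update from Rust goes through wasm-bindgen's generated JS glue rather than touching the DOM directly.

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn set_message(text: &str) {
    // Each of these calls crosses into the JS shim that wasm-bindgen
    // generates; the Wasm module has no direct handle on the DOM.
    let document = web_sys::window().unwrap().document().unwrap();
    if let Some(el) = document.get_element_by_id("message") {
        el.set_inner_html(text);
    }
}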
If the browser devs (or some W3C-type body?) introduced those same functions but made them directly accessible from Wasm, then the JS shims wouldn't be needed.
The JS devs for each of the browsers would probably try to stop it ("security risk!" excuses, etc.), though, as that would potentially cut into "their territory" and allow other languages to compete. :(
Expanding WASM to the DOM is definitely part of the roadmap. The problem at the moment, as I understand it, is that the DOM was designed with Javascript in mind, and figuring out how to translate that into something that works well for lower-level access is difficult, particularly with regard to getting garbage collection to work properly between WASM-land and JS-land. There are some alternative solutions being explored, but none of this has anything to do with security risks.
I hope you're right, and it does end up happening. That would very much enable many languages (LLVM based ones anyway) to become practical alternatives to JS for web dev, whereas now they're more like "can be sorta done with significant effort". ;)
Btw, my impression that it would be blocked by JS people involved in the process came from a conversation some time ago with one of those JS people themselves.
They said (from their point of view) there's no need for WASM to directly access things instead of going through JS, and they'd block it themselves if things went that direction. Security was the lever they mentioned they'd probably be able to use.
Server binaries are typically dropped into Docker containers, not scp'd to servers directly, and the moment you go there you can just use jib to get a JVM container easily, so there's no difference in logistics.
For CLI tools, whilst a single binary can be convenient, native-image lets you get those for JVM programs too these days. But it's not always enough. In practice you will often hit the need for:
a. Cross platform / cross builds.
b. A way to easily update them for your users (that isn't "everyone mount this NFS/SMB drive")
c. Ability to ship other files e.g. third party libraries written in other languages, config files, data files, readmes ...
d. Possibly, avoiding virus scanners and Gatekeeper if you have users on Windows/macOS.
Conveyor [1] does support distributing CLI tools (in any language) that can then be updated via apt-get, the Windows package manager or Sparkle on macOS. If your language/runtime supports cross-building then it can do it all from your developer laptop; you don't need a build machine for each OS. The resulting artifacts are single files (deb, msix/exe, zip) and it supports both self-signing and regular signing if you want that.
It provides a few other neat features on Windows:
• One click install that immediately adds new tools to every single terminal session without needing restarts.
• If you want, silent background updates Chrome-style. If you don't, manually triggered updates.
• For JVM apps specifically it automatically configures the Windows terminal to support ANSI escapes, Unicode and other modern features so you can use all the same stuff as on UNIX without needing to futz around with win32 or wrappers.
Unfortunately the little default GUI that lets you trigger updates and add CLI tools to your path on macOS isn't officially launched yet, because it only works for JVM apps and not other types of program. But if anyone wants to try it just let me know, it's easy to activate.
If you don't need any such features then yes, a single binary can be a bit more convenient than a zip. But the number of situations where it breaks down is pretty high and it's not so hard to handle multiple files.
I want to be careful not to recapitulate every conversation I've ever had with a JVM person about this. I'm not claiming it's impossible to deploy JVM applications; obviously, tons of people do. I'm just saying people use Go and Rust because they work well in situations where you want to distribute and directly run a simple binary without additional tooling. That's not every situation; obviously, if you can use Docker, there's not much difference between a JVM app and any other kind.
Your comment is super interesting, don't let me sound like I'm trying to shoot it down. I'm being deliberately terse to avoid creating receptors for language war antigens to bind to.
Sure. Given the choice of one file or 50, one file is clearly better all other things being equal.
My feelings on this changed over time. About 10-15 years ago I thought single executable output was a critical feature for a language, because everywhere I went I saw people saying how important it was for them, how much simpler it made deployment. I figured, OK, people know what they want so that's what they should get.
Then Docker came along. Docker images aren't single files, they're the polar opposite. They aren't even things you directly manipulate using the filesystem at all. Yet people loved it and it took over the world. Clearly what all those people demanding single-file executables were actually wanting in 95% of cases was simpler deployment, and they were phrasing it as single executable because that was concrete and understandable whereas simpler deployment is a very vague concept so who knows what you'd get if you asked for it.
For people who are selecting Go or Rust or Graal native images primarily because of single-file output, I'd actually really appreciate a chance to ask a few questions or interview them quickly to learn more about the deployment context. Conveyor is all about deployment and it's good to understand more about how people are doing things and what could be better.
> I’d love to use C# without having to deal with distributing the runtime, so I’d like to hear more!
In the later versions of .NET there are a couple of common ways to distribute (I'd suggest either .NET 6 [LTS] or preferably .NET 7 [current]).
You'd usually use the dotnet publish command, which can produce one of three things, all of which are self-contained and can be deployed to a clean server without any framework installed. Ordered worst-first (in relation to your requirement):
1. The halfway house from dotnet publish is a folder with your app/site/API alongside all the DLLs needed from the standard library and/or NuGet. This is a standalone folder, though a messy one.
2. With an extra couple of options on the dotnet publish command you get it all as a single binary which is exactly what you say: a big executable.
3. There is another option available on the dotnet publish command which will use magic (probably tree-shaking but I can't remember) to produce a smaller single binary by removing unused code.
As an aside, it's also worth noting a couple of extra points:
* The dotnet publish command can cross-compile ready-to-deploy outputs for any supported platform (eg Mac or Linux, using x86, AMD64, or ARM64) just by specifying the combination of platform and CPU as command line options.
* Within C# you can mark your assets as embedded resources (like an embed FS in Go) and they will also be included in your output.
The final result varies in size depending upon what your code does (and hence included libraries), and some code (eg reflection) may interfere with the tree-shaking (or whatever) of option 3 - but it warns you whilst it generates the output, and you can either ignore it or use option 2.
Generally speaking the option 3 builds are between 1.5 and 2 times the size of a Go one, but you're looking at about 20MB to 30MB for useful stuff. Not tiny, but still quite small these days. Option 2 builds are probably double that.
In use (and this is subjective) they consume a bit more memory than Go, but seem more consistent/stable in that usage.
Also note that within that 20MB-30MB build, for an api or a website you get a built-in web server that can sit behind nginx etc as usual, but is also good enough to expose directly.
Yes, "dotnet publish" is what I was thinking of. Wow, I didn't realize it could cross-compile!
I can't quite tell from the .NET SDK repository -- any idea if this stuff works on Linux (i.e. building on Linux, perhaps for Windows)? I see mention of MSBuild, so I'm guessing maybe not.
I love C#, but I abandoned it a while ago because I wanted to only rely on open source tools (just to ensure my code is usable in the future). Then, of course, they open-sourced a bunch of stuff (including the compiler). If I at least had the option to develop C# on Linux (with support for cross-compiling to Windows), that would be great (and honestly something I would have never expected 10 years ago).
> Actually, it does work from Linux ... cross-compiled from Linux to Windows.
Here's a link to the commands I use to generate my cross-platform builds [1]. They are easy enough to stick in a shell script or batch file so you get all the builds with one command. These produce single executables, trimmed for size.
Indeed (I mentioned reflection above). It only affects the trimming though - you still get (larger) standalone builds that need no framework installed.
Java, Go and C# (and node) have very similar performance, e.g. https://benchmarksgame-team.pages.debian.net/benchmarksgame/.... For all of them, the key to writing high performance code is avoiding allocations and boxing. Go and C# both do this slightly better than Java, but in most domains where these languages are used, this is not a big difference (and this is where you might use C/C++/Rust instead). I've found Go to be more verbose than Java, but I haven't used Go much since generics were released.
Not really, just do a rolling deployment like you should be doing anyway. No one cares if the new version takes 1 millisecond to start up or 3 seconds because they literally won't notice.
If your Java app takes half a minute to initialise it's the app's problem, not Java. Modern Java frameworks have moved from a dynamic deployment model to statically compiled and can start in milliseconds.
(for example, see the benchmarks on https://quarkus.io/blog/runtime-performance/)
Hardly, it's a fantastic guardrail when combined with health checks. You can say "you don't need it", but everyone makes mistakes sometimes. Make those mistakes not matter. You also take backups, right? Same idea.
It has a non-zero cost though, which is why I don't like it.
Things go wrong with the "roll" for example.
You have potentially two versions of your code running against the same DB for some time.
Stop -> deploy code -> start is simpler and less likely to go wrong.
JVM startup times make using it in Lambda or scaling container clusters awkward. Scaling can't happen fast enough for traffic spikes when the startup time and cold start performance is crap.
You exec a process expecting it to begin operating, providing some networked service, in a reasonable time. Instead it doesn't do that. It spends tens of seconds, sometimes minutes, running JIT and other sundry startup overhead.
You may not have seen this if you haven't used Scala...
Genuinely curious, what kind of application are you running? Which JVM are you using? Are you aggressively GC tuning? Very low on memory? I've used Scala from 2.8 up to 3.0, for microservice systems, monoliths, data pipelines, machine learning (way back), desktop apps for research using Swing, an Android app (worst idea ever), highly imperative to very functional, and I don't think I've ever seen anything remotely as bad as that even on genuinely big codebases. Hundreds of ms, sure, but minutes just getting the JVM up and running? I can see how that would be problematic.
Ok, so to expand: applications that I've been responsible for from the beginning have not had long start-up times. Where I've seen it is with other folks' applications, where I was hired as a consultant to look at performance.
The most recent example was a Scala monolith. It had to use JVM 1.8 because <reasons> prevented migration to 11 (tried quite hard, but never succeeded). GC tuning doesn't really apply when considering start up delay, but yes it had been tuned over the years. Memory was not limited. The application, mainly due to Scala, had tens of thousands of classes. They all seemed to get JIT'ed on start up, which was the primary reason for the slow start up. People involved (who had come from heavy Scala shops like Twitter) seemed to think it was normal.
Yeah, Scala is absurdly bad for startup time because of poor modularization of the standard library. It's a decent language with a terrible standard library.
Whatever the absolute merits (or lack of them) of Go, the fact is that if it's a good (enough) option for you, then it's almost certain that some language will fit your problem better than Rust.
I think if you would choose Java or Python or C#, then Rust might not be the right choice.