I love building a startup in Rust but wouldn't pick it again (propelauth.com)
253 points by aisrael on Feb 17, 2023 | 484 comments



If you're thinking about building something in Rust, a good question to ask is, "what would I use if Rust didn't exist?"

If your answer is something like Go or Node.js, then Rust is probably not the right choice.

If your answer is C or C++ or something similar, then Rust is very likely the right choice.

Obviously, there are always exceptions here, but this helps you work through things a bit more objectively. Rust can be a fantastic language for many purposes, but it has a very high development cost.


> If your answer is something like Go or Node.js, then Rust is probably not the right choice. If your answer is C or C++ or something similar, then Rust is very likely the right choice.

I've been saying that for a while. If you're building web backends, Rust is not a good choice. Go is so much easier. The green thread system gets rid of the thread/async distinction, garbage collection means you don't have to obsess over memory management, and the libraries for web backend stuff are the same things Google uses internally, so they're well-tested.

A big problem with Rust, long-term, is that the kind of programs that really need it are somewhat out of today's mainstream. It's not that useful for webcrap. It's not that useful for phone apps. The AI people use Jupyter notebooks and Python to drive code on GPUs.

Where do you really need Rust? Heavy-duty multi-threaded programming. Operating systems. Compilers. Routers and network infrastructure. Robotics, maybe. Hard real time. It ought to be used more for high-performance games, but the game infrastructure isn't there yet. Unreal Engine is C++ and Unity is C#. Rust has Bevy and Rend3, but they're not AAA title ready.

Perhaps Rust is fighting the last war - the mess inside C++.


> A big problem with Rust, long-term, is that the kind of programs that really need it are somewhat out of today's mainstream. It's not that useful for webcrap. It's not that useful for phone apps. The AI people use Jupyter notebooks and Python to drive code on GPUs.

One thing this is missing is that Rust is useful for libraries callable by many different languages. You may or may not want to use it to build an actual Web app (I personally think it's a solid choice, but reasonable people can disagree). But for building, say, the Python cryptography library [1], which is used as a part of "webcrap", Jupyter notebooks, and in many other domains, Rust is clearly an excellent option. Nobody is going to build core Python infrastructure in Go or Node, and without the plumbing libraries none of the higher-level applications can function.

[1]: https://github.com/pyca/cryptography


Except that is the exact reason why I would pick C++ instead of Rust: the Java, .NET, and JS ecosystems are written on top of C++, and the libraries I might want to write bindings for are also written in C++.

Adding another language in the middle will only complicate our toolchain.

Regarding Python, with the pressure from Ruby, PHP, JS, and Julia JIT compilers, they will eventually get more serious about JIT adoption, and then there is ctypes for OS APIs.


Do you mean linking against Rust binaries, or can you make library files too? Compared to, say, SQLite as a single C file, I thought Rust projects were not super easy to use as a dependency.


Rust can output library files that use the C calling convention, either static or dynamic. Doing this entirely by hand is pretty annoying, because your API surface has to be C-compatible (so can't contain a lot of Rust's useful language features) and because you still have to do the other half of the FFI to use the library from the other language. However, it's possible to develop automated language-specific tooling to make this easier, with PyO3 being a particularly impressive example.
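For a sense of what the hand-rolled half looks like, here's a minimal sketch (the function name and crate setup are just for illustration):

```rust
// Exposing a Rust function over the C ABI. With
// `crate-type = ["cdylib"]` (or `"staticlib"`) in Cargo.toml, the build
// produces an ordinary .so/.dylib/.dll that C, Python's ctypes, Java's
// FFI, etc. can load.

#[no_mangle] // keep the symbol name as-is so other languages can find it
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    // wrapping_add sidesteps the question of panicking across the FFI
    // boundary on overflow
    a.wrapping_add(b)
}
```

The C-side declaration is then just `extern uint32_t add_u32(uint32_t, uint32_t);`. Anything richer than integers and raw pointers (strings, slices, Results) is where the manual work piles up, and where tooling like PyO3 earns its keep.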


Depends on how you link Rust. Using PyO3 arguably makes it easier to link Python code to Rust than any C construction I could think of. Linking a single C file into Python is quite difficult (if not using a JIT like cppyy) because you often need to make bindings and conversions on both sides for each exported function.


> If you're building web backends, Rust is not a good choice

This heuristic wouldn't work for my department because we build web backends in C++. I keep telling the most senior devs here that nobody does this and for good reasons (velocity etc), and their response is "who cares what the rest of the world does, they're just bad at C++."


Google developed Go specifically so they didn't have to use C++ in high-volume web backends, which is what they did previously.


Google absolutely uses lots of C++ in high-volume web backends today. Go didn't replace it.


Nope, Google is fine with Java and C++.

Go authors always disliked C++, and got a manager that allowed them to work on their toy language, which eventually became Go.


Another (maybe even more) important motivation was to have a simple, statically typed language so that new hires can contribute to the codebase faster and the code itself is more standardized and easier to maintain at large scale.


They basically got there with Java + some frameworks. It's not the best (I'd like NodeJS), but it works and has a huge ecosystem already, and anyone can use it. Some people complain that Golang is designed specifically for its original use case of specialized web backends and isn't great otherwise, or that its main design goal was being "not C++."


A few people at Google did; as far as I'm aware, it didn't start as an officially sanctioned project at the company. Golang has its own set of challenges due to the limitations of the language and its error-prone, bare-bones threading model.


These aren't high-volume, they're just web interfaces used internally by hundreds of people. Golang wasn't well-received either in our dept. Java would've saved us a lot of headache, a phrase I never thought I'd say; adjacent teams use it for similar things.


You're right. I think that it replaces C/C++ for many use cases. I'm a quant, and I use it to write fast algos for research. It won't be long until Rust has good high level SIMD primitives like the faster crate offered. I can't see myself using anything else thereafter for performant code.

You may be underestimating the amount of need there is for performant code though. It's everywhere.


>I've been saying that for a while. If you're building web backends, Rust is not a good choice. Go is so much easier. The green thread system gets rid of the thread/async distinction, garbage collection means you don't have to obsess over memory management, and the libraries for web backend stuff are the same things Google uses internally, so they're well-tested.

The thing here is that most web backends are basic CRUD bullshit, and you fundamentally will not hit the thread/async distinction, nor need to care about whether there's garbage collection or not. Microsoft has used actix-web in production as far as I know, it's not like the Rust web stack isn't battle tested.

>Where do you really need Rust? Heavy-duty multi-threaded programming. Operating systems. Compilers. Routers and network infrastructure. Robotics, maybe. Hard real time.

These are all the things that sit adjacent to a web framework, and once you step outside that CRUD happy path, Rust fits very well - and in this case just "bolts on".

>It's not that useful for phone apps.

This is more tangential but I know of more than a few companies who have written or are writing their cross-platform logic in Rust instead of C++. This is really akin to how some projects back Python modules with Rust: it's easy to have it backing things and it beats the hell out of dealing with C++.

And look, I'm not even saying don't use Go/Python/JS/<insert your preferred language here>. You do you, your startup will live or die by so many other things than choice of programming language.


Exactly. And for phone apps, using Flutter with flutter_rust_bridge is a great choice for FFI.


They are out of the mainstream, but only because "systems people who have exchanged any hope of losing their virginity for the exciting opportunity to think about hex numbers and their relationships with the operating system, the hardware, and ancient blood rituals that Bjarne Stroustrup performed at Stonehenge" "SOLVE THE BEAR MENACE" [0].

[0]: https://www.usenix.org/system/files/1311_05-08_mickens.pdf


There is probably a good space for Rust in writing the databases, caches and all sorts of proxies as well. I agree though, for most of the stuff I need for $DAYJOB the speed is nice but hardly required. It doesn't really matter if I can generate a HTTP response 50 microseconds quicker if the response then has to travel over the internet for 20+ milliseconds.


It does matter if you're using cloud autoscaling FaaS (previously known as CGI) and paying for those HTTP responses by the microsecond (and by RAM usage as well, which is also typically quite low in Rust).


Not really; the connection won't close until you are done sending it all and have received the relevant TCP ACK messages from the other side. Being on autoscaling FaaS or not does not matter for that.

In any case, if you are trying to optimize microsecond usage in AWS Lambda, I hope you have a truly gargantuan amount of traffic and/or incredibly cheap engineers, or the money saved will barely match the cost of the engineering hours sunk into it.



I appreciate that there might be use cases where performance absolutely does matter, and log parsing is indeed one of the sweet spots for that. But offline analytics is something quite different than HTTP request generation, even if you sometimes do both in an AWS lambda.

Additionally for the case mentioned, the log parsing apparently already fit quite comfortably in the free tier of AWS lambda. Rubygems.org seems to get its developer time for free, so Andre can keep tuning this log parser as a hobby experiment (in the best sense of the word, nothing wrong with having fun). But TFA was about building a startup and most of those definitely don't manage to get their devs to work for free.


Rust has been around for so long these domains should have long since been covered. Quite telling they aren’t yet, particularly considering the momentum in publicity. I wonder why.


> Rust has been around for so long

Rust async/await was only stabilized in late 2019 (Rust 1.39). Not very long at all, especially compared to Golang.


I’ve been writing data ETL pipelines in Rust and having a fantastic time.

Most of my data, cost and performance requirements aren’t suited to the likes of FiveTran/etc, and whilst I could theoretically use Python, it’s far too slow, and the propensity to crash unexpectedly (meaning I have to spend extra time debugging) means the development velocity is actually worse than just writing it in Rust. Plus, most of the libs and tooling I’d use from Python are available in Rust.

I’ve also written REST and gRPC APIs in Rust (mostly for serving up data and being a query layer), and found it a far more pleasant experience than Python/TypeScript/C#. Admittedly a somewhat ”smaller” use-case than a lot of what people would consider web APIs, but still.


How much of your ETL is Rust vs SQL? I agree pandas for ETL isn’t the fastest, but isn’t SQL faster still? Or don’t you find yourself in the situation where there is so little logic in the ETL that Rust performance/safety isn’t that useful?


> It’s not that useful for phone apps.

If you’re building a large complex app that needs to be shared across multiple platforms, C++ is a common choice. Not talking about games here; think Office, Facebook, Zoom, WebEx, etc. It’s easy to see Rust replacing C++ in that stack.

But as Rust is so much nicer than C++, I could see it becoming popular for smaller projects that want to share common logic, with a native UI on top.


> and the libraries for web backend stuff are the same things Google uses internally, so they're well-tested.

Internally, Google doesn't use those web libraries, and most of the backends are in C++.


Oooh Rust gaming engines. Hadn’t thought of that. One of my favorite games is coincidentally named “Rust.”


I think it's dismissive and overly simplistic to say that Rust is almost always a better option than C or C++. They're different languages with different strengths.

One strength of C++ is that it is far more established than rust - and that comes with a lot of advantages:

* It has a larger number of people who know how to work with it

* It has a huge catalog of established, fully functional libraries for everything you could imagine from UI to game development to embedded systems to simulation to anything else

* It has broad support in developer toolsets in general like editors and IDEs, static analysis tools, formatters, pre-commit hooks, etc

If I'm starting a new project with C++, I can immediately know that there's a huge landscape of programming already carved up and ready to work with. I can't do that as easily in Rust. The language, the libraries, the tools are all younger. Some of it isn't as fully featured, some of it isn't nearly as stable.

That will improve with time, but it's a huge advantage to C++ right now.


That's not the argument the GP is making. The GP is basically saying that if C/C++ aren't your second choice of language, then it's a sign your reasons for picking Rust are suspect.

They didn't say Rust is almost always better than C/C++.

There's perhaps the implication there, but it's certainly not explicit in the GP's comments.


> One strength of C++ is that it is far more established than Rust - and that comes with a lot of advantages:

I'm painfully aware of this. Typical Rust problem, from a reply I made to a posting on Reddit:

* WebGPU dev: WGPU updated to 0.15!

* Me: Might want to hold off on upgrading for a bit. See (bug report on related package)

* WebGPU dev: Good to know. I'll keep this in mind if someone has any issues when following my tutorial

* Me: I'm using Egui/rend3/wgpu/winit/vulkan cross platform on Linux and Windows, with cross-compiling. Getting all those crates to play well together is not easy. Every time something in that stack changes, it's days or weeks of trouble.


> That will improve with time, but it's a huge advantage to C++ right now.

Sadly, it's not really an advantage anywhere that hasn't already eaten the grief and doesn't already use C++.

Both C and C++ infrastructure are so horribly terrible that Zig is gaining traction simply by creating a better compiler infrastructure, totally independent of whether the language is better or not.

That's one hell of a downvote.


I love Rust but I am still looking for the perfect blend of the two camps.

On the one hand, Go/Node/Python/etc. don't scratch my itch for a strong type system (sum types/tagged enums mostly), and on the other hand, even though I like prototyping in Rust, I really miss things like a REPL, more terse syntax, and a bit more expressiveness.
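For anyone who hasn't hit this itch: the sum-type thing is roughly the following (types here are purely illustrative), and it's what Go's interfaces and Python's inheritance only approximate:

```rust
// A tagged enum: each variant carries its own payload, and `match`
// forces every case to be handled at compile time. Add a variant later
// and the compiler points at every match that needs updating.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}
```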

I think OCaml is closer to my ideal but the ecosystem isn't quite there. Maybe all it needs is time.


Pick a problem, not a tool first. Unless you're more interested in the tool than the problem, then it's fine. I don't love Rust, but I can get by. TS+NodeJS is okay. But I'm more interested in the problems that I'm solving so I just use whatever is the most suited.


I do that today, but I believe there is still a lot of opportunity for languages to approach my ideal, and I think I share my ideal with others. And if a language like that did exist, or could be made, I'd love to use it - changing languages every project is workable, but you'd probably agree (I hope?) that NOT changing languages every project would be even better.


Yes. I cringe a little getting emails from startup recruiters saying "join a Rust startup." As if the success of their business hinges on using Rust for something that probably doesn't need it.

I've found that the basic NodeJS or Python works for most problems I've chosen. Not even with strong typing.


It's older than Rust; it's had plenty of time. What OCaml needs is a renaissance and new blood. Sadly it doesn't have the momentum of being new, but I agree it's a compelling language. I see it as a better, typed alternative to Go.


Have you looked at Node + TypeScript? I'm just starting to dig into TS, but its type system is one of the better ones I've seen. And Node is nice and mature with good async capabilities.


I'd prefer something with a more sound type system, and something that makes cleaning up resources easier and more ergonomic.

This might help with cleanup: https://github.com/tc39/proposal-explicit-resource-managemen...

But I'm not sure anything will help with the type system. For example, this drives me absolutely insane: https://www.typescriptlang.org/play#code/MYewdgziA2CmB00QHMA...


> makes cleaning up resources easier

I've never really had a problem with it, but I isolate such resources behind a wrapper, which makes cleanup easy. I just create a little higher-order function:

  function doSomethingThatNeedsCleanup(fn) {
    const thing = createTheThing();
    try {
      return fn(thing);
    }
    finally {
      cleanUpTheThing(thing);
    }
  }
> this drives me absolutely insane (console.log(["10", "10", "10"].map(parseInt)) outputs [10, NaN, 2])

That's not really an issue with the type system, that's just coincidence and bad luck.

The signature of parseInt is:

  parseInt(string: string, radix?: number | undefined): number
And Array<string>.map(fn) takes a function with the signature:

  fn(element: string, index?: number | undefined, array?: string[] | undefined): any
So parseInt coincidentally matches the signature Array<string>.map() is looking for.

I'm not sure what you expect the type system to do here. It works just fine at catching an actual type error, such as this:

  console.log([10, 10, 10].map(parseInt))
...which correctly complains:

  Argument of type '(string: string, radix?: number | undefined) => number' is not assignable to parameter of type '(value: number, index: number, array: number[]) => number'.
    Types of parameters 'string' and 'value' are incompatible.
      Type 'number' is not assignable to type 'string'.
(As I'm sure you know, this is the correct code: `console.log(["10", "10", "10"].map(s => parseInt(s)))` .)


> So parseInt coincidentally matches the signature Array<string>.map() is looking for.

If that were true I'd not mind quite as much... but actually, parseInt takes two parameters and map passes three.


Map’s second and third parameters are optional. Your functions aren’t required to implement them, which is good, because most of the time you don’t need them.

Again, this isn’t a type system issue. It’s both an API design issue and a poor programming hygiene issue. (Don’t pass bare functions if you don’t know what the parameters are.)


> Map’s second and third parameters are optional.

I mean, map always passes them in, so in that sense they aren't optional. Mixing that with functions that take in optional parameters, but aren't usually called with them, gives you a ticking time bomb, IMO. And double that danger when the language's type system allows you to call a function with more parameters than it could ever take.

> Don’t pass bare functions if you don’t know what the parameters are.

This is exactly the kind of thing I want my programming language's type system to catch for me, if I'm working in a language with a static type system like TS.

And even in dynamic languages, this is exactly the kind of thing I want my programming language to catch for me at runtime. Python does, for example.

Stuff like this - while it might fit JS and TS and make sense to some - makes absolutely no sense to me, and is why I simply look to other languages to fit my needs.


How would you change the Type System to fix this particular issue?

>> Don’t pass bare functions if you don’t know what the parameters are.

> This is exactly the kind of thing I want my programming language's type system to catch for me, if I'm working in a language with a static type system like TS.

How's your compiler supposed to know you don't know what the parameters are?


> How would you change the Type System to fix this particular issue?

In TS you really can't (that's my point and why I prefer to avoid the language) because of JS and API baggage. But just about every other static language that I've worked in can complain for this kind of thing.

> Hows your compiler supposed to know you don't know what the parameters are?

Why does the compiler care about what I know? The compiler itself knows what the function's parameters are and it can tell me that something seems wrong because I'm asking it to call a function that maxes out at 2 parameters with 3 parameters.

TS does catch this kind of thing in a lot of places. It just intentionally decides not to do it for these kinds of callbacks because of the same JS and API baggage.


  > But I'm not sure anything will help with the type system. For example, this drives me absolutely insane: https://www.typescriptlang.org/play#code/MYewdgziA2CmB00QHMA...
You just gave me CPTSD, thank you. This motivated me enough so that I will leave JavaScript and TypeScript for good now.


Oh, god, the “you can call it with one arg, but it really has more, and then you tried to use map, you poor fool” trap.


Yeah I would have recommended OCaml, the language seriously need more developers to contribute to the ecosystem. It could be much nicer.


OCaml with tooling and an ecosystem from the size of Go would be an ideal. Java and C# can more and more be written like MLs too.


I think I would love an ML-like language built on the go runtime and standard library and ecosystem.


If you don't need bare metal, something like Scala or Kotlin will fit. Scala is extremely expressive and does CRUD very well.


Scala is on my list of languages to try. I'm usually not a fan of large runtimes or too many abstractions from the OS, but I do want to give it a fair shake. F# too.


Scala is amazing, but the community has been shrinking for the past few years, with ever fewer libraries maintained.


On the contrary. I’ve been recently writing a web service in Go… and omg what a terrible experience. Sure, I could prototype something very quickly, but now it’s almost impossible to refactor any of it without introducing subtle breakage. And it’s sooooo verbose and redundant and fragile.

I’m currently rewriting what I wrote in Rust to see how the prototypes compare and… I just feel so much at peace knowing that the type system and the borrow checker have my back and that I won’t encounter unexpected nil pointers or zero values. And I write this quickly as well.

(Yes, I do have experience with both languages, so none of these exercises were new to me. But I’ve been favoring Rust over the last couple of years.)


Quite the opposite here. Go is very easy to refactor if need be. What kind “breakages” are you talking about? Go being “verbose”? Or “fragile”? I doubt we’re talking about the same Go.

In Go you have a garbage collector; no need for the borrow checker or reference counting on your side. What unexpected nil pointers or zeros? It’s all easy and straightforward in Go. If you’re referring to the handling of materialized interfaces, which one may encounter in the form of concrete error types, and which is one of Go’s idiosyncrasies, then it might help to look deeper into learning how to handle Go’s interfaces.


> Go being “verbose”? Or “fragile”? I doubt we’re talking about the same Go.

Maybe compared to C, Go is quite concise, but Go is nowhere near Rust's level of expressiveness, especially with poor generics support, much more verbose error handling etc.
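A concrete instance of the error-handling gap (function and inputs made up for illustration): where Go needs an `if err != nil { return ..., err }` block after every fallible call, Rust's `?` operator propagates the error in one character.

```rust
use std::num::ParseIntError;

// Parse two strings and sum them; either parse failure short-circuits
// out of the function via `?`, returning the error to the caller.
fn sum_of_ints(a: &str, b: &str) -> Result<i64, ParseIntError> {
    Ok(a.parse::<i64>()? + b.parse::<i64>()?)
}
```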

> no need for the borrow checker or reference counting on your side.

I think that people that mention borrow checker in context of backend dev haven't done much backend dev in Rust. The nature of a web backend is to (most of the time) get data from the client, process, return a response. In this context you very rarely have to think about borrow checker and almost never use explicit lifetimes.

> What unexpected nil pointers or zeros? It’s all easy and straightforward in Go

If you forget to initialize a struct, you may end up with a nil pointer and you might not catch it until runtime.
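The Rust counterpart, for contrast (struct and field names hypothetical): "maybe missing" has to be spelled out as an `Option` in the type, and the compiler rejects any code path that forgets to check it, so the failure surfaces at compile time rather than at runtime.

```rust
struct Config {
    // An endpoint that may be absent must say so in its type; there is
    // no implicit nil zero value to trip over later.
    endpoint: Option<String>,
}

fn endpoint_or_default(c: &Config) -> &str {
    match &c.endpoint {
        Some(e) => e.as_str(),
        // Deleting this arm is a compile error, not a runtime panic.
        None => "http://localhost:8080",
    }
}
```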


This is really great way of putting it. Node/Python/Go were the obvious alternatives for us.


NodeJS kind of muddies the waters. It ate a lot of use cases that would have previously been done in Java. That created conflict between backend teams that wanted statically typed code and a "boring" tech choice against "full stack" developers creating a backend service.

I think Rust will see a lot of adoption in web services that are glorified CRUD APIs. It would have been a poor choice to do many of these workloads in C or C++ (despite the data point of Amazon 1.0, LOLz).


I've really tried to give js/ts in backend a go. Both by nodejs and deno. And by kickstarting my own projects as well as diving into experienced nodejs developers' code.

I really don't see how anyone chooses nodejs/deno over anything else.

Java imo gives you much less trouble, is more stable, and has a working (!!!) unit testing setup and an exceptional runtime.

Next on my list is to give Rust a go, since I'm intrigued by its features, but if I wanted to go fast I'd go with Java any day.


"I really don't see how anyone chooses nodejs/deno over anything else."

This is going to sound mean but I don't really know how to phrase it more nicely. People building backends in js/ts are doing so because either they, or a critical mass of the people they expect to code in it, don't know any better backend languages.

I don't mean for this to be judge-y. People have different skillsets. A nodejs backend can be the right choice if you're a full-stack dev or a solo dev and you mainly know js and you get a lot of agility and correctness by using what you know instead of trying to use a "better" choice but one that you yourself are less likely to use correctly or quickly in your own use case.

It can also be the right choice if you're a big backend java/etc guru but you know you want to set this thing up but then just monitor/oversee frontend/fullstack devs being the ones who do incremental modifications to it.

But yeah - without additional outside constraints and everything else being equal, it doesn't make a lot of sense to use nodejs as opposed to basically anything more boring - be it java, rails, etc.

The thing that surprises me the most about node based backends is that they forego both the platform maturity and static-safety of java AND the ridiculous amount of batteries included into django and rails, just to be able to use javascript, of all things. What you could rails-g in a minute you could instead spend a day looking for an npm package that won't be abandoned in a month or spend eternity rolling your own everything and then maintaining it.


This comment is presumptuous, dismissive, and also wrong. People who write application-like front ends in React (etc.) want back ends that can interoperate with those front ends.

They accomplish things like build-time code generation, static server-side rendering, and other kinds of code transformation that are difficult and flaky without a back end that can understand JS.

I have looked for non-tinkertoy solutions in Java and Go. There aren't any, and senior people in those communities are hostile to the very idea. (Maybe remembering Java Nashorn?) Maybe the only people friendly to the idea of interoperable back ends are Rust people (where you have libraries like Yew/Sycamore/Dioxus/Leptos that actively imitate JS frameworks).

But yes: your claim that the only reason to write JS back ends is ignorance and enfeeblement is itself ignorant and repugnant.


To clarify my point, I think nodejs is great for the backend part of the view layer and to proxy requests to another backend service.

But for business logic, database interactions, calculations, etc., I really think the language's constructs are too clumsy and do not scale nicely.


> want back ends that can interoperate with those front ends.

You can ingest and emit JSON in any language. You can even compile backend-friendly code to run in JS on the frontend, via emscripten (and increasingly, Wasm), which will output very lean and JIT-friendly code. The usual "isomorphic" case for backend.js is no less 'presumptuous' or 'dismissive' than the comment you're pointing to and criticizing here.


  > You can ingest and emit JSON in any language.
How is this relevant? Serving a JS application to a client is not like serving a JSON API.

  > You can even compile backend-friendly code to run in JS on the frontend, via emscripten (and increasingly, Wasm), which will output very lean and JIT-friendly code.
Not in my experience. Compared to front-to-back JS, shipping your applications as WASM ends up with downloads and memory footprints that are several times larger than the same application in ordinary JS, and this matters a lot for slower networks and mobile clients.

Edit: not to mention that Go has a very limited WASM story, so your "realistic" choices for shipping back end code to the front end are C++ and Rust.


> How is this relevant? Serving a JS application to a client is not like serving a JSON API.

In some JS applications, the way the frontend code communicates with the backend is via JSON API endpoints. I believe GP was just pointing out that you can write that JSON API in any language you want.

> not to mention that Go has a very limited WASM story, so your "realistic" choices for shipping back end code to the front end are C++ and Rust.

Can you elaborate on this? How is it limited in comparison to say, Rust.


My understanding is that to get reasonable WASM file sizes, one must use TinyGo (an alternative Go compiler built on LLVM). TinyGo is a WIP, although I wish I could find a better, updated description of what's missing than this page on their site: https://tinygo.org/docs/reference/lang-support/


This page[0] seems to have some good information(on stdlib support at least).

As far as file sizes go, you're right. It might be possible to get better results deploying a brotli'd or gzipped file instead[1].

[0]:https://tinygo.org/docs/reference/lang-support/stdlib/ [1]:https://github.com/golang/go/wiki/WebAssembly#reducing-the-s...


>>increasingly, wasm

If you are referring to something similar to Blazor (client side), yeah no imo it's pretty awful for anything other than internal enterprise apps. I couldn't imagine running a public facing product on that.


By using Typescript you can share types on the backend and frontend. Java stacks tend to do the same thing but for backend and database by defining data at the ORM level. You can connect to your database in any language, but most people using Spring don't write plain SQL.


> The usual "isomorphic" case for backend.js is no less 'presumptuous' or 'dismissive' than the comment you're pointing to and criticizing here

> You can even compile backend-friendly code to run in JS on the frontend, via emscripten (and increasingly, Wasm)

Lol, you can't be serious


I could understand this reasoning but it is not the typical node backend I am seeing in the wild.


Your comment tells me you are in the group who doesn't know better. Data is passed as JSON, which is a format other languages can read and send back. If that's your reason for using it, you do have other choices.


Heh heh, well maybe in some cases but since TS/JS is kinda the lowest common denominator in web development, it's just the most convenient choice. We humans are limited in our capacity to learn and take time to get good at things. So it makes a lot of sense to pick a tool that can be used to build the whole app with.

And for all of its faults, NodeJS can be fun compared to something like Java which feels more like doing taxes. Maybe it's the danger. Maybe it's the fact there are so many packages and libraries out there - everyone can contribute! Who knows, but I won't waste my time learning say Rails just to use a "better" backend framework. It's good enough even though I know better. There are way more important things.


I love Rails, and if it were still growing even slightly it would be my go-to for sure. But unfortunately it has been slowly declining for a long time… Many are moving to Go, Elixir, Rust, and elsewhere. It was by far the best dev experience I ever had with a framework. The latest version still looks great. But realistically, unless you plan to do a rewrite, which is never a good idea, it may not be the best option out there. Even more so when you have to get people on board for the years ahead, and considering the ecosystem too. I am looking for a framework to use, and to see grow, for at least the next 4 years, with hiring in mind.

Spring Boot is great but it does have its quirks. Config and plumbing, for instance, compared to Nest or Rails, don’t make for the best and fastest dev experience. And all the extra Java stuff and its ecosystem make it solid but also old compared to what you can see happening in other languages.

I have Quarkus in mind though. It looks amazing in dev experience with libs like Panache, RESTEasy, etc., and performance is amazing too, which comes as a bonus. But there are no devs out there. And I feel that in Java, people want to work on Spring Boot because spending X years working on Spring Boot is more valuable for their career than X years on something that is not really the de facto standard of the industry. So yeah, not great for hiring talented people…

But NestJS with the JS/TS ecosystem starts to be very close to what Rails was in the past. Stuff like Prisma, for instance, makes everything a breeze. There is almost always a package for the thing you want. Great at websockets too. Can do GraphQL if it’s your thing. Same language as the front end, Next.js, etc. One of the most performant dynamic languages. And interesting things happening like Deno or using Rust with it. So there are quite some perks to it.

Can’t speak about Django though. But I am wondering what the dev experience is like when using it for real projects.

I am starting a new project/startup and wondering which one to start it with between all these technologies. And after looking for a long time I can see that they really all come with their pros and cons.


> But unfortunately it has been slowly declining for a long time…

That could be a sign of “stability”.

I’ve personally been using Django for two years and have the same feeling as yours: there is nothing exciting happening in this area. But after two years of use I came to the conclusion that its features are simply stable, so nothing new needs to happen in the first place.

It’s a common myth that software should always be updating and introducing new features. No, it shouldn’t: if it solves the problem well, it doesn’t need to change at all.

JS/TS/Rust are still in their exploration phase; new ideas, new experiments, and new frameworks come and go every day, and that’s the reason they look vibrant.

Before settling down with Django, for years I used bleeding-edge tech like Meteorjs/React/Nextjs/Vue/Nuxtjs/Svelte/SvelteKit/… in the real world, or tried to rewrite real-world apps with them, and the biggest problem with all of them is that they all have their quirks, and things in the JavaScript area change too often and too much. I’m so tired of chasing all these fancy new ideas.

For 90% of web apps, Django+Htmx+AlpineJS serves me well and easily does all the things the new tech can do, while keeping incredible stability.

I believe Rails or Laravel are the same here: you don’t need to switch in 90% of cases, just relax and appreciate the stability they bring.


Thank you for your feedback on Django. I agree Django is maybe a boring but solid framework. I never used it so I can’t speak about its dev experience, but it looks great. I see Python kind of like Java but for dynamic languages. It’s not shiny, it’s not the hype. But what can’t you do with it? Usually when a new tool comes out you have Java and Python clients. Before, you used to have Ruby too. Now not so much.

For Rails, the thing is that it’s not just one lib among a huge ecosystem like in Python; it’s more or less Ruby = Ruby on Rails. Sure, you have Sinatra, which is the Flask of Ruby, but that’s it. While in Python you can do so many other things, from devops to ML to eBPF…


Ruby is growing (you can say "growing back") starting at least from 2020/2021 with a lot of things happening in 2022 and looking at a very exciting 2023.

Hotwire is an exciting technology coming out of Ruby/Rails ecosystem. Rails 7 was an important release bringing a lot of great features and Rails 7.1 looks even better.

Ruby has a lot of nice advances as language starting with 2.7 and continuing to the recent release of 3.2. Check the recent changes at https://rubyreferences.github.io

IMHO Ruby and its ecosystem are the perfect mix of tech that works while trying to get right some of the most important modern approaches.

Specifically regarding the

> But realistically, unless you plan to do a rewrite, which is never a good idea, it may not be the best option out there

This is not directed to you but more a general rant :)

I hate this line of thinking with a passion (and yes, I am biased) because it is not true in maybe 90% of cases. And the same goes when one says it about almost any established language, which should be used where it fits (or where it is used the most).

What I write below applies to an average company mostly doing Saas/CRUD/web apps. If you are working at FAANG or FAANG-like companies maybe this does not apply to you.

Be sincere:

- Say that you want to pick something that will improve your hireable skills if you are a developer, or that you want to brag to your friends that you are using what Amazon/Google uses, or that you want to speak at conferences about the latest big tech

- Or say that you don't like Ruby and just want to choose something else. That is fine.

- Or if you are a director of engineering/leader/manager, say that you can hire JS developers cheaper, or that offering a cool/hyped technology is the only way to attract new people, or that you want to feel safe in front of board members/GM/CEO by saying "we failed to solve this even though we used everything FAANG uses."

But please don't say that you don't choose Rails because you might grow so much that you will need to rewrite.

I don't know of a single example where a company died because they chose Rails. The decision to rewrite a product is, on its own, a high risk of failure. It does not matter the from:, to: parameters.

And again I need to say to anyone thinking of starting a project in Ruby/Rails but feeling they need to choose otherwise: You are not Google, nor Netflix, Amazon, .... You (probably) don't need anything from their tools or architecture that lets them offer concurrent services to hundreds of millions of users.

Try to have 10k users first and then I think you will probably have the mindset + management experience to hire Ruby devs or to help a new hire to learn Ruby.


I agree like I said the last version of Rails is great. I have been doing Rails for almost a decade now so I’m quite familiar with all its new things. But there are also a lot of issues right now if you are planning to start a company.

The total number of Rails job offers has been going down for quite a long time now.

If you look at the Stack Overflow surveys and compare the professional respondents for Ruby and RoR between 2021 and 2022, it’s a 20% decrease. That’s a lot.

Critical contributors to Rails and its ecosystem left for Rust, Elixir, etc. For instance, the guy behind Active Record went to Rust, the creator of Elixir himself left, and many others followed. On the other hand, I don’t see prominent open source contributors coming to Ruby to at least replace them… I have been doing Rails myself for almost a decade and I can tell you that many Rails devs want to switch, or already switched, to Phoenix or other languages. It’s not going up.

Many gems are mature, for sure, and don’t really need any more commits. But many others are abandoned, with PRs and issues stacking up. The ecosystem is not really growing anymore, let alone vibrant.

Now with all that said: is it impossible to find a job in Rails today? No. Is it impossible to find devs? No. Is it going to disappear in 4 years? No. Are you going to make a big mistake using it today? No. But there are now so many other options out there without any of these cons, and with some other pros, that it gets more and more difficult to choose Rails to start a company today. Unless you want to use Rails for a side project, you can’t ignore these issues and all the other options we now have out there.

About rewrites: it’s not just the technical part, it’s also the human part. Let’s say I hire you, we grow, and there are also 3-10 more Rails devs. But a rewrite has to happen. Are you going to be excited to jeopardize all your experience in Rails to switch to something new? Even if you are, the other 2-9 devs may not be. And let’s say more or less everybody agrees that a rewrite has to happen; now you still have to agree on which language and which framework you are going to use! Some may really want Go, some Elixir, some TS…

Whatever solution is chosen, there will be a lot of frustration, both from the tech part of the rewrite and from everyone slowly having to become productive in the new tech, where something that used to take 1 day can now take 5. All this, particularly the frustration of switching to a tech some won’t really like, will slow the company down a lot. When you read postmortems of failed startups, so many times the reasons include this line: “and we decided to do a rewrite…”. But I never saw a failed company still say the rewrite was a great idea.

With all that said: better to pick the right solution today and look at all the options. Not the Google one, not one that is declining, but one that is at least solid if not vibrant; not the latest hyped one either, but the best one.

I truly love Rails. But I can’t choose it just because I love it. There are now better options out there for the long term.


That’s an interesting comment. I chose Node not because I love it, but because it’s the easiest to hire for as a startup.

I hate configuring typescript and having to deal with bundlers, package.json, esbuild, etc.

But Deno and Bun are just around the corner. What we need is to get rid of this a la carte mentality around tooling and just give everyone Typescript, batteries included, and make it fast.

Then Node really isn’t that bad anymore.


While it doesn't have the batteries, typescript's gradual typing is much nicer than python's, so for some use cases it's a better language.

All languages come with tradeoffs in terms of performance, development/maintenance cost and ease of hiring. Most businesses only care superficially about performance, which leaves developer productivity and ease of hiring. Node/typescript isn't terrible in terms of developer productivity, and it's got the best hiring story of any language. Python is good in both those ways too, but since you already need a javascript hiring pipeline for the client it's easier just to use node and hire full stack devs.


> typescript's gradual typing is much nicer than python's,

It is not Node vs. Python, but Node vs. Java

Python is a terrible choice as a backend, too.

It has a place as a scripting language, but so often it is used to build complete systems. They suck.


Python is suitable if you don't care about performance, type safety, or async programming. That is to say, it's usually not a good choice.


Node runs circles around Django/Rails, and batteries included in Javascript tend to be pushed to either the frontend or the database. Node backend tends to be thinner, compared to Java/Django/Rails. Some people don't know any better, some have good reasons of doing this.


Is there any statically typed HTML templating language, for non-node languages? Genuine question.


crates.io for Rust has 213 crates tagged "template-engine" https://crates.io/categories/template-engine

handlebars, tera, askama, and maud are examples I recognize from high up on that list.

Maud example: (chosen because it's the most "native Rust" in a sense and is designed for HTML specifically so I believe it's the closest to what you're asking for)

  html! {
      h1 { "Hello, world!" }
      p.intro {
          "This is an example of the "
          a href="https://github.com/lambda-fairy/maud" { "Maud" }
          " template language."
      }
  }
https://maud.lambda.xyz/


And the IDEs which practically write your code for you. I mean yeah Java is so verbose but most of that word vomit is me just tabbing through autocomplete

I went back and prototyped a slightly similar app in Python w/ a gui toolkit and I felt like I was driving through exception city. I have made my career writing in dynamic languages so I forgot just how pleasant it was to write in a static one!


> Python w/ a gui toolkit

Sorry, this is always the wrong choice, which is why it felt so bad.

Writing a UI in TypeScript with any of the major frontend frameworks will not require you to wade through run-time exceptions

Java native GUI toolkits are also far more mature

Python and TKinter and similar stuff is always just awful, I've never seen someone put together anything half decent with that stack


Calibre, the ebook client, is a python GUI app. wxWidgets I think?

There's another app that I use for renaming downloaded video files that uses a GUI toolkit.

I've written GUIs in pyGTK for industrial apps and it was a wonderful experience. The only reason I can't get it to work now is that I can't get pygobject to compile on Windows (the offered solution of MSYS2... wasn't).

Unfortunately this project was working with Stable Diffusion, so I had to use something in Python

In any case I don't like this idea of a right or wrong answer here. Like, way too many people think Electron is a right answer and I think that's just dumb


I find typescript makes the most common things I do in business app software as straightforward as possible. That is, making copies of objects with slightly different structure to pass them to some other system. Typescript is just fantastic for plumbing.

To do the same in java means making a billion model files with a billion transformation functions on them


Yup. I view TypeScript as the best "get things done quickly without too much code and minimal bugs" language.


You should try NestJS with Prisma or Typeorm and Postgres. Plus all the classic other stuff. It's a great stack.


Your post is a great example why I wish Go lang was named something different. Because you use the lower cased versions java, rust, js, ts, nodejs, deno, etc. But all your 'go' refer to the verb, not the Go language.


Wouldn't you just use go/python/node for a simple CRUD API? fastapi for python is pretty performant if you use gunicorn as your runtime, and time to iterate is much faster than it is in rust.


It depends exactly how simple that CRUD API is. If there's any business logic, I'd rather get all the cheap correctness guarantees that Rust provides. I don't find myself making many truly dumb CRUD APIs.
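The "cheap correctness guarantees" point can be made concrete with a small sketch (my own illustration, using a hypothetical `OrderStatus` type, not code from this thread): the compiler rejects any match that forgets a state, so a new status can't silently slip past the business logic.

```rust
// Hypothetical order-state model for a CRUD API with some business logic.
#[derive(Debug, PartialEq)]
enum OrderStatus {
    Pending,
    Paid { amount_cents: u64 },
    Cancelled,
}

fn can_ship(status: &OrderStatus) -> bool {
    // Exhaustive match: adding a fourth variant later is a compile error
    // until every call site like this one handles it.
    match status {
        OrderStatus::Pending | OrderStatus::Cancelled => false,
        OrderStatus::Paid { amount_cents } => *amount_cents > 0,
    }
}

fn main() {
    assert!(can_ship(&OrderStatus::Paid { amount_cents: 1999 }));
    assert!(!can_ship(&OrderStatus::Cancelled));
}
```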

Time to iterate is also only much faster in certain situations, e.g. local development; if you have to e.g. build a container image, push to a registry, and redeploy to a k8s cluster somewhere, those savings become somewhere between less significant and nonexistent.


> Time to iterate is also only much faster in certain situations, e.g. local development; if you have to e.g. build a container image, push to a registry, and redeploy to a k8s cluster somewhere, those savings become somewhere between less significant and nonexistent.

Can you expand on what you mean here? I know you're not implying Rust is faster to move thru a CICD pipeline, so can you tell me what you do mean? I seem to be unable to make a different reading


I think the point being made here is that all the CI/CD/SDLC machinery effectively slows down development anyway, so the difference in iteration speed between Python and Rust matters less. But I dare to disagree; I just can't connect the dots here: moving code down the CI/CD pipeline doesn't mean we can't work on the code itself or think about project improvement ideas.


“Time to iterate” is measuring how quickly you can get an idea, build it in code, deploy it to your customer, and get feedback.

If Rust is helping you make prototypes and iterate quickly, I’d love to hear how you’re using the language.


Not your parent, but I have some ideas on this. I'm not sure how true they are. Maybe I'll write a longer version some day and see what people think. But the summary is this:

I suspect it has to do with how familiar you are with type systems, and the way that you use them. I find that Rust's constraints help guide me towards a solution more quickly, and I spend less time chasing down strange edge cases. Not eliminate! But reduce.
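One way to make the "constraints guide you" idea concrete (my own toy example, not the parent's code): the type system forces the "no value" edge case into the open, so a bad input becomes a compile-time obligation rather than a runtime surprise.

```rust
// Parsing user input: failure is encoded in the return type, so the
// caller cannot forget to handle it the way a null check can be forgotten.
fn parse_port(input: &str) -> Option<u16> {
    input.trim().parse::<u16>().ok()
}

fn main() {
    // The compiler requires both arms; deleting one is an error.
    match parse_port("8080") {
        Some(p) => println!("listening on {p}"),
        None => eprintln!("invalid port, using default"),
    }
}
```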


The latter use case is terrible for Rust.


Most of the posts mentioning Rust not being the best choice for a startup talk about iteration speed and prototyping. In that light I don't think this is the best heuristic. If you need to prototype quickly and search for market fit, then by all means choose whatever makes you go fastest (and I'd argue for a lot of use cases it would be Elixir with Phoenix's LiveView, if you don't absolutely need an SPA or a mobile app). But if you're building something that already has a market, Rust might still be a good choice even if it's not a C/C++ fit.


I don't agree with this. If Rust doesn't exist you might have a choice between the pain and endless bugs of using C++ or the ease, but much slower performance, of js. Maybe you end up choosing js.

But if Rust exists, suddenly you have a nicer-to-use high performance language, and you have a choice to use it.


Three months ago I made an almost identical comment; it felt nice to see the same thought echoed!

https://news.ycombinator.com/item?id=33845045


Not sure I agree with Go vs Rust.

I think if you would choose Java or Python or C#, then Rust might not be the right choice.


Go is in a sweet spot where it is often used to compete with both groups: [Rust, C/C++] and [Node, Python, Ruby, etc]. The reason GP said it is probably because of Garbage collection.

I've done a bit of Rust in my job, and there are some basic things that Rust doesn't have going for it:

- steep learning curve (this means for the first 6 months, you or your colleagues are unproductive, write bad Rust which your company then builds upon over time).

- bad error messages (even though that was a focus for the rust team!)

- frustratingly complex for setting up test coverage

- Slow analyzer speed (*super laggy* on Clion, though this might be a jetbrains issue)

- Slow compilation times (I heard somewhere that "Go just goes". I've also written some Go in my free time, and compilation is fast. Well IMHO, "rust will rust" - it's very slow. Generics can make compilation even slower.)

- Verbose. I've seen just a few lines of JS get replaced with hundreds or thousands of lines of Rust.


Unfortunately, I'm inclined to agree.

Rust lives in this interesting spot where, on paper, it should be superior to anything... but in practice, it's not a good choice in most cases.

It's very easy to ramp up someone in Go that's had a standard CS education and written C/C++ before. It's also simple enough syntax-wise for someone who knows python well enough to understand references, etc. Its stylistic restrictions and not being OOP-first also mean that codebases are generally readable. Compilation is also extremely straightforward.

With Rust, I've found even very experienced C++ folks have a long ramp-up period, the development toolchain is slow, and the ecosystem is limited.

Sure, for example there are projects to enable Rust usage with CUDA. But few are inclined to actually bother implementing a new BLAS and GPU accelerated tensor library with Rust.

I do think 10 years from now Rust will start getting more adoption as the ecosystem and tooling improve.

But it's hard to argue with Go, where you'll typically get results that are faster than or at worst comparable to Java, without the OOP design-pattern gobbledygook, and with a simple concurrency model and a simple build process. It's "good enough" for 99% of use cases.


> With Rust, I've found even very experienced C++ folks have a long ramp-up period

I've found C++ folks especially have the hardest time with Rust, because they approach it using C++ idioms and habits, then get frustrated when they can't do things the way they're used to.

I've had better success teaching Java people Rust. They find it much easier to learn than C++, and I can get them writing idiomatic Rust code quickly, while C++ devs are still trying to get their coding habits past the borrow checker.
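The habit gap can be sketched with a toy example (mine, hypothetical): both functions behave identically, but the first is a C++ index-loop habit transplanted into Rust, and the second is the idiomatic version a reviewer would ask for.

```rust
// C++ habit: manual index arithmetic and bounds to reason about.
fn sum_even_cpp_habit(v: &[i32]) -> i32 {
    let mut total = 0;
    for i in 0..v.len() {
        if v[i] % 2 == 0 {
            total += v[i];
        }
    }
    total
}

// Idiomatic Rust: an iterator chain with no indices at all.
fn sum_even_idiomatic(v: &[i32]) -> i32 {
    v.iter().filter(|&&x| x % 2 == 0).sum()
}

fn main() {
    let v = [1, 2, 3, 4, 5, 6];
    assert_eq!(sum_even_cpp_habit(&v), 12);
    assert_eq!(sum_even_idiomatic(&v), 12);
}
```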


I think everybody has a hard time starting with Rust.

Even when you get confident, writing code is still a longer process than in C/C++. It makes you think very hard about what you do carelessly in C.

But having spent the last few years debugging a lot of Swift code, and a bit of Rust code, that time is worth it I say.

Where Rust shines is in the debug cycle. Less of it.

Once you learn to surrender to the compiler, you will find true bliss,....
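A minimal sketch (my own example) of the kind of C carelessness the compiler catches: mutating a collection while iterating it is a classic C/C++ bug, and in Rust it simply doesn't compile, pushing you toward an explicit two-phase version.

```rust
// Duplicate every element of a Vec without mutating it mid-iteration.
fn duplicate(names: &mut Vec<String>) {
    // Collect first, then extend: the compiler forces the two phases apart.
    let copy: Vec<String> = names.iter().cloned().collect();
    names.extend(copy);
}

fn main() {
    let mut names = vec!["a".to_string(), "b".to_string()];

    // Rejected with E0502 (cannot borrow `names` as mutable because it is
    // also borrowed as immutable):
    //
    //     for n in &names { names.push(n.clone()); }

    duplicate(&mut names);
    assert_eq!(names, ["a", "b", "a", "b"]);
}
```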


Go has bad interop with C/C++ and languages that use the C/C++ ABI (including Rust). You can use cgo as a workaround but it's clunky. So that makes an interesting case for other high level, novice-friendly languages like Nim, Crystal or Val/Vale/Vala.


> - Verbose. I've seen just a few lines of JS get replaced with hundreds or thousands of lines of Rust.

I'm curious to see those few lines of JS!


> - bad error messages (even though that was a focus for the rust team!)

I would love to hear more! (If you have the time.)


Not the fault of Rust compiler per se but in my case the errors that mismatching trait impls from async libraries yield can be downright suicide-inducing.

I recognize this is not entirely on rustc though.


esteban of course gave you an excellent response already, but just another bit of like, context here: while it may not be on rustc, the developers want to go beyond the norm here. rustc understands if you write async/await syntax like JavaScript, and then directly proposes that you switch to the Rust syntax:

    pub async fn foo() -> i32 {
        unimplemented!();
    }

    pub async fn bar() {
        let f = await foo();
    }
gives

    error: incorrect use of `await`
     --> src/lib.rs:6:13
      |
    6 |     let f = await foo();
      |             ^^^^^^^^^ help: `await` is a postfix operation: `foo().await`
It isn't on rustc to understand this either, but it helps a bunch of people, so the team does it anyway.


Check sibling reply. Sadly I never wrote those down. :(

I'll definitely do so going forward.

The problem with that is the impostor syndrome: I legitimately can't tell if I am an idiot who skipped some basic Rust training, or if the error messages are truly confusing and unproductive.

But your messages help. I'll just write those down and send them to GitHub's issue list.


I’m not working on Rust anymore, but the previously stated position on this, which as far as I know is still the case, is that if it’s confusing, you should file. Worst case scenario is “sorry, we can’t fix that”, but it’s not an imposition to file issues. More is better. Because exactly as Esteban said, information is valuable. Even duplicates are valuable: they indicate that more than one person has run into this, and therefore it’s more valuable than an obscure issue only one person sees.


I can confirm all of this is still the case.

> I legitimately can't tell if I am an idiot and skipped some basic Rust training, or the error messages are truly confusing and unproductive.

It doesn't make a difference. The compiler can't assume any level of proficiency. If a topic requires you to read the docs, it should tell you so. There are some "basic" things it relies on, but anyone with any level of programming experience should be able to read a diagnostic and either understand it outright or have enough clues about what to search for. So "this error is confusing because I didn't read chapter N of The Book" can be simplified to "this error is confusing."


Excellent. Keep up the good work, I (and others) appreciate it.


Having examples of these is useful to see what we could get rustc to do. The general case might be impossible to deal with in a generic way, but we can target specific patterns libraries use and emit custom errors for them. The problem with these is we have to be reactive: if we don't see a problematic pattern ourselves (or it isn't reported to us), we can't do anything about it.


Unfortunately last I tried these code snippets was months ago, was rushing like mad because it was a startup and I couldn't afford to just stop and write everything down and... yeah, priceless info was lost.

Just recently I am making a comeback to rewriting a tokio 0.1 library to the latest version so I'll likely have a few examples that I can post... where? In GitHub issues?


GitHub works great for it, for diagnostic tickets in particular you can file them at https://github.com/rust-lang/rust/issues/new?assignees=&labe...

Even if it is an "it hurts when I do this" without more context, it can be useful to bring the problem to our attention (but the more context you provide, the higher the chance we'll fix the problem).


> - Verbose. I've seen just a few lines of JS get replaced with hundreds or thousands of lines of Rust.

Please, more detail (=


I should've been clearer, sorry.

The verbosity and complexity of `wasm_bindgen`/serialization between JS and Wasm (written in Rust) is primarily what frustrates me when I see hundreds or thousands of lines of Rust code. A concrete example: creating a WebSocket client in JavaScript/TypeScript vs in Rust/Wasm.

In general though (outside of Wasm), Rust is less readable.

And with regards to Rust errors, I've found Rust errors related to Tonic and Diesel to be quite annoying/unreadable. The Diesel docs seem to blame Rust for this (can't find the docs for it right now).


Isn't this due to wasm having to access browser things mostly through browser JS interfaces?

e.g. browsers provide JS functions that are intended for JS and are not directly exposed to Wasm. So when your Wasm wants to access DOM things, DOM functions, etc., it needs to go through a JS shim layer instead of being able to call them directly.

If the browser devs (or some W3C type of body?) introduced those same functions but had them be directly accessible from Wasm, then the JS shims wouldn't be needed.

The JS devs for each of the browsers would probably try to stop it ("security risk!" excuses, etc.), though, as that would potentially cut into "their territory" and allow other languages to compete. :(


Expanding WASM to the DOM is definitely part of the roadmap. The problem at the moment, as I understand it, is that the DOM was designed with Javascript in mind, and figuring out how to translate that into something that works well for lower-level access is difficult, particularly in regards to getting garbage collection to work properly between WASM-land and JS-land. There are some alternative solutions that are being explored, but none of this has anything to do with security risks.


I hope you're right, and it does end up happening. That would very much enable many languages (LLVM based ones anyway) to become practical alternatives to JS for web dev, whereas now they're more like "can be sorta done with significant effort". ;)

Btw, my impression that it would be blocked by JS people involved in the process, was from a conversation some time ago with one of the JS people themselves.

They said (from their point of view) there's no need for WASM to directly access things instead of going through JS, and they'd block it themselves if things went that direction. Security was the lever they mentioned they'd probably be able to use.


I believe the project is called Interface Types.


JS:

    new HTMLDivElement()
Rust:

    struct WebBrowser {
    ...


Instantiating an object vs defining one? Yeah definition would be longer.


Go belongs in the exact same bucket as Java and C#.


Ergonomically, yes. Logistically, no.


For server binaries they're typically being dropped into Docker containers not scp-d to servers directly, and the moment you go there you can just use jib and get a JVM container easily so there's no difference in logistics.

For CLI tools whilst a single binary can be convenient, native-image lets you get those for JVM programs too these days. But it's not always the case that it's enough. In practice you will often hit the need for:

a. Cross platform / cross builds.

b. A way to easily update them for your users (that isn't "everyone mount this NFS/SMB drive")

c. Ability to ship other files e.g. third party libraries written in other languages, config files, data files, readmes ...

d. Possibly, avoiding virus scanners and Gatekeeper if you have users on Windows/macOS.

Conveyor [1] does support distributing CLI tools (in any language) that can then be updated via apt-get, the Windows package manager, or Sparkle on macOS. If your language/runtime supports cross-building, then it can do it all from your developer laptop; you don't need a machine for each OS to build on. The resulting artifacts are single files (deb, msix/exe, zip) and it supports both self-signing and regular signing if you want that.

It provides a few other neat features on Windows:

• One click install that immediately adds new tools to every single terminal session without needing restarts.

• If you want, silent background updates Chrome-style. If you don't, manually triggered updates.

• For JVM apps specifically it automatically configures the Windows terminal to support ANSI escapes, Unicode and other modern features so you can use all the same stuff as on UNIX without needing to futz around with win32 or wrappers.

Unfortunately the little default GUI that lets you trigger updates and add CLI tools to your path on macOS isn't officially launched yet, because it only works for JVM apps and not other types of program. But if anyone wants to try it just let me know, it's easy to activate.

If you don't need any such features then yes, a single binary can be a bit more convenient than a zip. But the number of situations where it breaks down is pretty high and it's not so hard to handle multiple files.

[1] https://hydraulic.software/


I want to be careful not to recapitulate every conversation I've ever had with a JVM person about this. I'm not claiming it's impossible to deploy JVM applications; obviously, tons of people do. I'm just saying people use Go and Rust because they work well in situations where you want to distribute and directly run a simple binary without additional tooling. That's not every situation; obviously, if you can use Docker, there's not much difference between a JVM app and any other kind.

Your comment is super interesting, don't let me sound like I'm trying to shoot it down. I'm being deliberately terse to avoid creating receptors for language war antigens to bind to.


Sure. Given the choice of one file or 50, one file is clearly better all other things being equal.

My feelings on this changed over time. About 10-15 years ago I thought single executable output was a critical feature for a language, because everywhere I went I saw people saying how important it was for them, how much simpler it made deployment. I figured, OK, people know what they want so that's what they should get.

Then Docker came along. Docker images aren't single files, they're the polar opposite. They aren't even things you directly manipulate using the filesystem at all. Yet people loved it and it took over the world. Clearly what all those people demanding single-file executables were actually wanting in 95% of cases was simpler deployment, and they were phrasing it as single executable because that was concrete and understandable whereas simpler deployment is a very vague concept so who knows what you'd get if you asked for it.

For people who are selecting Go or Rust or Graal native images primarily because of single-file output, I'd actually really appreciate a chance to ask a few questions or interview them quickly to learn more about the deployment context. Conveyor is all about deployment and it's good to understand more about how people are doing things and what could be better.


I wouldn't even say ergonomically. Java and C# are so far ahead it's not even funny.


Why do you say logistically no? The challenges of distributing the runtime?


Yes. It's a reason people pick Go and Rust for things.


There's no runtime to distribute with modern C#.


Are you referring to the “compile to a big executable” feature?

I’d love to use C# without having to deal with distributing the runtime, so I’d like to hear more!


> I’d love to use C# without having to deal with distributing the runtime, so I’d like to hear more!

In the later versions of DotNet there are a couple of common ways to distribute (I'd suggest either DotNet 6 [LTS] or preferably DotNet 7 [current]).

You'd usually use the dotnet publish command, which ideally produces one of three things, all of which are self-contained and can be deployed to a clean server without any framework. Ordered by worst-first (in relation to your requirement):

1. The halfway house from dotnet publish is a folder with your app/site/api alongside all the dlls needed from the standard library and/or nuget. This is a standalone folder, though messy.

2. With an extra couple of options on the dotnet publish command you get it all as a single binary which is exactly what you say: a big executable.

3. There is another option available on the dotnet publish command which will use magic (probably tree-shaking but I can't remember) to produce a smaller single binary by removing unused code.

As an aside, it's also worth noting a couple of extra points:

* The dotnet publish command can cross-compile ready-to-deploy outputs for any supported platform (eg Mac or Linux, using x86, AMD64, or ARM64) just by specifying the combination of platform and CPU as command line options.

* Within C# you can mark your assets as embedded resources (like an embed FS in Go) and they will also be included in your output.

The final result varies in size depending upon what your code does (and hence included libraries), and some code (eg reflection) may interfere with the tree-shaking (or whatever) of option 3 - but it warns you whilst it generates the output, and you can either ignore it or use option 2.

Generally speaking the option 3 builds are between 1.5 and 2 times the size of a Go one, but you're looking at about 20MB to 30MB for useful stuff. Not tiny, but still quite small these days. Option 2 builds are probably double that.

In use (and this is subjective) they consume a bit more memory than Go, but seem more consistent/stable in that usage.

Also note that within that 20MB-30MB build, for an api or a website you get a built-in web server that can sit behind nginx etc as usual, but is also good enough to expose directly.
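As a rough sketch, the three outputs described above map onto `dotnet publish` invocations roughly like the following (the runtime identifiers are examples; adjust for your target platform and CPU):

```shell
# Option 1: self-contained folder (your app alongside all needed DLLs)
dotnet publish -c Release --self-contained -r linux-x64

# Option 2: single big executable
dotnet publish -c Release --self-contained -r linux-x64 -p:PublishSingleFile=true

# Option 3: single executable, trimmed of unused code
dotnet publish -c Release --self-contained -r linux-x64 \
  -p:PublishSingleFile=true -p:PublishTrimmed=true

# Cross-compiling, e.g. building for Windows from Linux or macOS
dotnet publish -c Release --self-contained -r win-x64 -p:PublishSingleFile=true
```

These need a project to run against, so treat them as a template rather than something to paste verbatim.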


Yes, "dotnet publish" is what I was thinking of. Wow, I didn't realize it could cross-compile!

I can't quite tell from the .NET SDK repository -- any idea if this stuff works on Linux (i.e. building on Linux, perhaps for Windows)? I see mention of MSBuild, so I'm guessing maybe not.

I love C#, but I abandoned it a while ago because I wanted to only rely on open source tools (just to ensure my code is usable in the future). Then, of course, they open-sourced a bunch of stuff (including the compiler). If I at least had the option to develop C# on Linux (with support for cross-compiling to Windows), that would be great (and honestly something I would have never expected 10 years ago).


Actually, it does work from Linux! I just tried it and "dotnet publish --os win" cross-compiled from Linux to Windows. Nice!


> Actually, it does work from Linux ... cross-compiled from Linux to Windows.

Here's a link to the commands I use to generate my cross-platform builds [1]. They are easy enough to stick in a shell script or batch file so you get all the builds with one command. These produce single executables, trimmed for size.

[1] https://github.com/kcartlidge/Newt#generating-stand-alone-bu...


There are many features commonly used in C# libraries that preclude AOT as I understand it -- mostly heavy use of reflection.


Indeed (I mentioned reflection above). It only affects the trimming though - you still get (larger) standalone builds that need no framework installed.


C# sure, but unless you're doing something pretty close to the core purpose of some giant Java framework, Java is slow and verbose.


Java, Go and C# (and node) have very similar performance, e.g. https://benchmarksgame-team.pages.debian.net/benchmarksgame/.... For all of them, the key to writing high performance code is avoiding allocations and boxing. Go and C# both do this slightly better than Java, but in most domains where these languages are used, this is not a big difference (and this is where you might use C/C++/Rust instead). I've found Go to be more verbose than Java, but I haven't used Go much since generics were released.


It's amazing how the "Java is slow" myth survives to this day.


I just read "Java is slow" as "Java is slow to start up", and that helps.

Java is slow(er) to start up, but once it's going, it's pretty good.


That doesn't matter if you're writing a REST api. If you're writing a CLI app, then I agree that's a problem.


Sometimes matters, e.g. when you deploy new code.


Not really, just do a rolling deployment like you should be doing anyway. No one cares if the new version takes 1 millisecond to start up or 3 seconds because they literally won't notice.


But 3 seconds isn't on the table. It is more like 20-30 seconds on a medium sized app and 8 seconds for a small one.


If your Java app takes half a minute to initialise it's the app's problem, not Java. Modern Java frameworks have moved from a dynamic deployment model to statically compiled and can start in milliseconds. (for example, see the benchmarks on https://quarkus.io/blog/runtime-performance/)


Rolling deployment is a hack imho. Adds complexity and hence yet more potential failure modes.


Hardly, it's a fantastic guardrail when combined with health checks. You can say "you don't need it", but everyone makes mistakes sometimes. Make those mistakes not matter. You also take backups, right? Same idea.


It has a non-zero cost though, which is why I don't like it. Things go wrong with the "roll" for example. You have potentially two versions of your code running against the same DB for some time.

Stop -> deploy code -> start is simpler and less likely to go wrong.


How would real-world JVM startup times matter in deployments?


JVM startup times make using it in Lambda or scaling container clusters awkward. Scaling can't happen fast enough for traffic spikes when the startup time and cold start performance is crap.


You exec a process expecting it to begin operating, providing some networked service, in a reasonable time. Instead it doesn't do that. It spends tens of seconds, sometimes minutes, running JIT and other sundry startup overhead.

You may not have seen this if you haven't used Scala...


Genuinely curious, what kind of application are you running? Which JVM are you using? Are you aggressively GC tuning? Very low on memory? I've used Scala from 2.8 up to 3.0, for microservice systems, monoliths, data pipelines, machine learning (way back), desktop apps for research using Swing, an Android app (worst idea ever), highly imperative to very functional, and I don't think I've ever seen anything remotely as bad as that even on genuinely big codebases. Hundreds of ms, sure, but minutes just getting the JVM up and running? I can see how that would be problematic.


Ok, so to expand: applications that I've been responsible for from the beginning have not had long start up times. Where I've seen it is with other folks applications where I was hired as a consultant to look at performance.

The most recent example was a Scala monolith. It had to use JVM 1.8 because <reasons> prevented migration to 11 (tried quite hard, but never succeeded). GC tuning doesn't really apply when considering start up delay, but yes it had been tuned over the years. Memory was not limited. The application, mainly due to Scala, had tens of thousands of classes. They all seemed to get JIT'ed on start up, which was the primary reason for the slow start up. People involved (who had come from heavy Scala shops like Twitter) seemed to think it was normal.


Yeah, Scala is absurdly bad for startup time because of poor modularization of the standard library. It's a decent language with a terrible standard library.


You reboot and suddenly your CPU is at 100% for 20 minutes starting every web service...


If your CPU is at 100% for 20 minutes starting every web service, that is definitely your problem, not Java’s problem.


I meant slow to develop in.


With features like records, pattern matching, enums, and many more, I find it much better than the likes of Python or golang.


Slow and verbose compared to what?


Performance-wise, yes. As for language features, not really. Code in Go would likely have way more LOC.


Whatever the absolute merits (or lack of them) of Go, the fact is that if it's a good (enough) option for you, then it's almost certain that some language will fit your problem better than Rust.


I wouldn't group python with java and c# either. Quite different beasts


Precisely. If you like what you see in Rust but you don't need to worry about extremely strict hardware/memory/realtime constraints (i.e. you could use a memory-managed language), consider Haskell instead.


That really is a great way to think about it, and my previous experiences with the "wrong choice of Rust" seem so obvious when filtered through this lens.


Rust has a high development cost compared to Node.js, Python, or Julia, but I'd say it is about the same as Go or C#. Maybe a little better than all of those if you consider time spent getting test coverage. But if you are prototyping you probably aren't doing that. I'd say Rust has a much lower development cost than C++ or Java.


Golang absolutely has a faster development cycle than Rust. Unless you're a Rust expert who never touches Go, but then it's an issue of familiarity. Go devs hit the ground running, the ergonomics are streamlined, and the borrow checker and other Rust restrictions don't get in the way.


Given my downvotes you seem to be with the consensus on this one. I had 30 years of C++ development going into Rust and I never found myself fighting the borrow checker. Golang seemed the same; I just started coding and stuff mostly worked. But the complexity of Rust was familiar coming from C++, with generics and macros but without some of the footguns, so it just seemed like power, not clutter.

I like Go, but I wouldn't prototype in it; I'd pick Julia or Python. And if I'm familiar with the problem and want to code for production, Rust really doesn't seem any slower to develop in than Go to me, and perhaps a bit less verbose. But in retrospect I think that is because the direction I came at Rust from means my habitual coding style lined up more closely with what Rust expects. It's not a harder coding mindset, just a different one than someone who came from Java would be used to.


30 years of C++ development experience means you aren't going to find any PL difficult and therefore your views on easy/difficult a PL is to learn and get going cannot be taken seriously. :)


While my overinflated ego enjoys your assessment, I want to reiterate that the patterns that minimize friction with the borrow checker aren't harder than standard patterns you'd see in garbage collected OOP or Functional focused languages, just different. Anecdotally here on HN it seems like people who came from ocaml or f# find rust even easier to adapt to than c++ people.


30 years of C++ can inoculate you to the idea of hideous amounts of boilerplate, or worrying about memory allocation timelines and such. I'm a 25 year C++ veteran myself.

I recently wrote an asynchronous microservice in C++, because it needed to use a C++ library I'd already written. Took about two weeks of effort, and clocked in at 1500 loc, not counting tests or the half-dozen external dependencies. I rewrote it in 100 lines of idiomatic Go using nothing except the standard library a few days ago. That's a 10x reduction in both lines of code and development time, although it's a bit unfair because I had already done the C++ version first. That is after giving up on a Rust version that was already weighing in at ~1000 LOC in an unfinished state.

After this experience, I don't think I'd ever use C++ or Rust again for a concurrent web service, unless there were hard real-time latency requirements. Golang is just so much better streamlined for that application, and the standard library is batteries included.


Go was designed from the ground up for that sort of thing, so that makes sense. And anecdotally async/await is a pain in Rust. I've only used Rust async through other libraries, and not very often. I usually use plain threads.


Yeah Rust supports the same concurrency primitives as Go, but man are they a pain to use by comparison. With Go it is as trivial as using built-in keywords to spawn and merge lightweight threads, and the lightweight thread is the default mode of execution when main is invoked. Structs automatically serialize to JSON. The standard library’s HTTP server is fast, concurrent, and safe for use in production. Summed together that makes use case like mine a breeze to implement in Go.

But yeah, if I was writing an embedded firmware or something I would absolutely prefer Rust.
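A minimal sketch of what that ergonomics point looks like in practice (the `Fortune` type and payload here are illustrative, not from the parent comment): the `go` keyword spawns lightweight threads, a `WaitGroup` merges them, and struct tags drive JSON serialization with no extra libraries.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

// Fortune demonstrates struct-tag-driven JSON serialization.
type Fortune struct {
	ID      int    `json:"id"`
	Message string `json:"message"`
}

// marshalFortune serializes one record (error ignored for brevity).
func marshalFortune(id int) string {
	b, _ := json.Marshal(Fortune{ID: id, Message: "hello"})
	return string(b)
}

func main() {
	var wg sync.WaitGroup
	results := make([]string, 3)
	for i := 0; i < 3; i++ {
		wg.Add(1)
		// `go` spawns a lightweight thread; the WaitGroup merges them.
		go func(i int) {
			defer wg.Done()
			results[i] = marshalFortune(i)
		}(i)
	}
	wg.Wait()
	fmt.Println(results[0]) // {"id":0,"message":"hello"}
}
```

The standard library's `net/http` server follows the same pattern: each request handler already runs in its own goroutine, so this is the default mode of execution rather than something you opt into.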


I use it for numeric-heavy stuff and I'm going through tonic for any service connections. That abstracts me from any async weirdness, and for the DSP code I don't care about lightweight threads. I don't want many/any more threads than I will have cores.


I feel like this is probably on the money, at least when it comes to building a startup. I often use Rust where before I would use Go/NodeJS/Python, but mostly because I like the type system and velocity on side-projects isn't as important.


Or as Charlie Munger would invert:

"What kind of startup would fail because it chose Rust?"

Then don't be that kind of startup.


I would differ that Go would necessarily be a better choice than Rust.

Seeing Rust as a system programming language only is missing the bigger picture (IMO).


Oxide is a startup and we use Rust for everything except the front end of websites (where we use TypeScript.) In some cases that’s due to hard requirements (embedded) but we use it for web backend cases as well.

Iteration time hasn’t been an issue, but compile times can be annoying. Though obviously compile time is related to iteration time.

Of course, all of these things are anecdotal. Collecting anecdotes is how you develop evidence, of course…


Tangram Vision [0] is also a startup and we also use Rust. We're using it to develop robotic / autonomous sensor calibration tools that would normally be written in a variety of C / C++ libraries.

For context: most if not all of our team has developed calibration tooling similar to what we're doing now in the past, just at different startups and very specific to certain robotic or sensing configurations.

If anything, once we got CI sorted and started using our own internal registry I would argue that we are significantly faster in terms of iteration time. This is partly because the team is small, but also because most of our tooling is consistent and easy to keep in lockstep. Pulling libraries is done uniformly across platforms and architectures, and our CI runs (through GitLab) stay up-to-date with the latest tooling without issue. Having a stronger type system to detect errors early and a compiler that actually tries to give human-readable messages (looking at you C++ linker errors) using that type system makes everything so much easier.

Compile time seems like it would be an obvious bit that slows one down, but in practice sccache [1] does what it ought to and we barely notice it (at least, I don't, and I haven't seen team members complaining about build times). Mostly I'd argue that the real thing holding us back is tooling external to the wider Rust ecosystem. Debugging and perf tools are great in Unix land, but if you're making anything cross-platform you need to know more than just perf. That might just be my opinion though; I'll admit I'm still learning how best to apply BPF-based tooling even in Linux alone.

I also realize I'm responding to steveklabnik, so I suspect most of what I'm saying is well-known and that this comment is really more directed at TFA.

[0] https://tangramvision.com

[1] https://github.com/mozilla/sccache


Very cool!

And yeah while I've had similar experiences, I'm glad to hear that it's so good for others. And, for example, sccache isn't something I have much experience with, so repeatedly hearing "it works well" is nice.

(Also, secret of replying to people on the internet: you are often replying to the anonymous reader of the discussion as much, if not more, than to the person you're replying to.)


I'm also at a startup using rust. It's true that the feedback loop can sometimes feel bit slower. One thing that helps here is being test driven so that you're not waiting for compiles just so you can click through to confirm a small change in behavior. But in general I think my velocity is not much slower than it was at a previous company using a node backend. The hardest part of software isn't typing in the code and making sure it runs. It's thinking about what it should do and how it should do it in the first place. And really, Rust's type system is a force multiplier. We can prototype really fast, throw out things and rework them with confidence because with well thought out typing and tests to confirm logical behavior we have strong safety guarantees. It lets you experiment and investigate alternative implementations fearlessly.


With all due respect, you’re Steve Klabnik.

That makes “Rust at a startup” a different equation.


I am one of roughly 50. I'm pretty sure the rest of them are not Steve Klabnik.

I also just work there, the all-Rust bit started well before I joined!

That said, you are right that it's a point worth considering.


>> I'm pretty sure the rest of them are not Steve Klabnik.

We have only your word for this.


Steve, I just want to say I like you a lot. :)


Thanks, that's very kind.


Do you mind my asking what frameworks you're using in Rust on the backend, if any?


We ended up building one https://crates.io/crates/dropshot

Happy to talk about anything!


Oh this supports OpenAPI! That's exciting. I'll definitely check this out when I have a chance. Thanks!


Yes, that was a primary motivation! We end up consuming it via typescript, and so you get typing info the whole way up and down the stack, which is very nice :) You’re welcome!


I like this quote from 'The art of Unix Programming' published in 2003

"While it still makes sense to write system programs and time-critical kernels of applications in C or C++, the world has changed a great deal since these languages came to prominence in the 1980s. In 2003, processors are a thousand times faster, memories are a thousand times larger, and disks are a factor of ten thousand larger, for roughly constant dollars.

These plunging costs change the economics of programming in a fundamental way. Under most circumstances it no longer makes sense to try to be as sparing of machine resources as C permits. Instead, the economically optimal choice is to minimize debugging time and maximize the long-term maintainability of the code by human beings. Most sorts of implementation (including application prototyping) are therefore better served by the newer generation of interpreted and scripting languages. This transition exactly parallels the conditions that, last time around the wheel, led to the rise of C/C++ and the eclipse of assembler programming."


In a benchmark of how many fortune responses are returned by various web frameworks[0], nodejs returned 80k odd fortunes per second. The fastest c++ framework compared here returned 616k odd fortunes per second.

Assuming that my application scales by the same amount (big assumption, yes), I could cut AWS costs by 7.7 times (!!!) by using the C++ implementation.

I'm pretty sure that maintaining a C++ codebase is less than 7.7 times more expensive than Node, even if you throw in extra development time etc. This also ignores the decades worth of excellent tooling we've built up for C++ (static analyzers, fuzzers, etc).

At a startup, when building things fast matters more than costs, sure. I buy the argument for Node or Python or any other interpreted backend. But once you start to scale, things change after some threshold. Unless you're facebook[1].

[0]. https://www.techempower.com/benchmarks/#section=data-r21

[1]. https://developers.facebook.com/blog/post/2010/02/02/hiphop-...


> I'm pretty sure that maintaining a C++ codebase is less than 7.7 times more expensive than Node, even if you throw in extra development time etc.

The problem here is assuming that a 7.7x less expensive AWS bill is the same value as a 7.7x more expensive development time. Imagine an app that is maintained by one programmer costing $10k a month and running in AWS for $1k a month. It is not worth dividing that AWS cost by 10x if it means a 10% development time hit. Actual numbers may vary, but development often costs much more than running in AWS. (For startups it's even more of a difference, because you pay AWS as you get more traffic, but you pay developers before you have an MVP.)


FWIW, AWS is expensive... but mostly for networking and then second to that storage, with the cost for actual computation usually being a pretty small percentage of my overall spend. The only time I have seen computation itself be an interesting expense is when I am looking for some complicated hardware, such as their GPU or FPGA instances. That said, money is money... but it is much easier to figure out fun ways to save storage or bandwidth when the code is a bit more nimble, so I'm not sure I could justify larger teams or slower iteration times just to save on CPU.


> but mostly for networking and then second to that storage, with the cost for actual computation usually being a pretty small percentage of my overall spend.

What portion of the bandwidth is end-user facing and what portion of it is connecting all the servers? Another way of asking this is if I need fewer servers because code runs faster, how much less bandwidth do I need?


AWS doesn't charge for internal server-to-server transfers, so the answer is precisely 0.


That's only true for transfer in the same availability zone.

https://aws.amazon.com/ec2/pricing/on-demand/#Data_Transfer_...


Availability and horizontal scale-out are separate axes, so that isn't relevant for this question.


"AWS for $1k a month" hahaha. People spend $90k on AWS by accident.


The speed of your web framework is, in reality, often irrelevant. Our slow nodejs app handles millions of customers with a few cheap and simple caching layers. Our salaries are a lot higher than the infrastructure costs.


Except … for spewing out fortune responses, most of your cost by far will be in bandwidth charges. Those won't change with the implementation language. If anything, the C++ version will just be able to burn your dollars faster.

(Yes, I know, it's just a silly example, but compute costs are only one aspect, and the bandwidth pricing in the on-brand public cloud actually tilts overall costs towards being bandwidth-dominated for many applications.)


That's a great way of looking at it. Languages all have benefits and drawbacks, but you have to consider whether they help you for your problem.

One time, I met a guy who wrote firmware for Seagate hard drives. Any new feature he added had a budget measured in microseconds. Obviously he wrote nothing but C++.


In 2003 you hoofed your servers to a colocation center and plugged them into the internet yourself. Deploying was something you did over `scp`. Buy a big server and you're good to go. That can still be the case these days (see Stack Overflow, which at least until recently was not using cloud deployment). But for the most part people use cloud platforms that charge by fractions of a second of CPU time. It's never been more cost effective to use a compiled language than it is today. And that also applies to bare-metal bring-your-own-server deployments: want to host a monster site on just a pair of beefy servers? The less resource intensive your backend, the more requests you can serve per second.


I disagree, I believe that Rust is a fabulous language for early prototypes.

Sure, if you're going to throw away your early prototype there are better languages. But nobody ever does that.

Instead your prototype evolves into your product and early expedient decisions you made that were appropriate for a prototype aren't appropriate for your product and you have a significant refactor.

And Rust is the best language I have ever encountered for refactoring. Just bang away changing the code until it compiles, and it's quite likely that once it compiles it actually works. That's not an experience I've had in any other language. Usually a significant refactor exposes some foot-guns that don't fire until significantly later. So you avoid refactoring and your code ends up a right mess.


> I believe that Rust is a fabulous language for early prototypes.

The problem is that TypeScript is an even better language for early prototypes.

zeroxfe has the right answer:

> If your answer is something like Go or Node.js, then Rust is probably not the right choice.

> If your answer is C or C++ or something similar, then Rust is very likely the right choice.

The only reason one would choose Rust over TypeScript is if one would have chosen C/C++ instead.

Then if you need higher perf/throughput: Go, Java, and C# in particular are all options that I'd consider before C/C++ or Rust. C# in particular is highly congruous to TypeScript [0].

JavaScript, TypeScript, and C# have been converging, IMO (and that's a good thing). Seems really natural that if you're a startup finding PMF, start with TypeScript for iteration speed. If you need higher throughput, C# is a stone's throw away from TypeScript syntactically and it's pretty easy to hire for (compared to Rust). [1]

Pick Rust if you're building something highly performance- and memory-sensitive. Pick TypeScript and C#/Go/Java for almost all other cases.

[0] https://github.com/CharlieDigital/js-ts-csharp

[1] https://raw.githubusercontent.com/CharlieDigital/js-ts-cshar...


I think it depends on what you're doing. I'd argue statically typed Python (i.e. with type hints) is also a good early-prototype language, with the benefit of being able to swap out parts one at a time via C FFI with Rust or something like PyO3. PyPy with asyncio (so FastAPI?) is what I'd choose for a web framework these days, personally.


I agree; Python is a great choice especially if whatever you're building is heavy in math and/or ML. The ecosystem is just better so it makes sense then to build your backend stack with Python.


Plus with Django you can build an API, storefront, and admin area in like a day. It would probably take a couple weeks with Node/Java and React etc.


> But nobody ever does that.

Please don't report your own experience samples as plain universal facts. It is one of the more common means by which falsehoods spread. I have no doubt that's not your intention, and you believe what you write, but you cannot have a basis for such a bald statement, and it happens to be false.

I have seen prototypes built and discarded very frequently. Indeed in an earlier incarnation it was my own professional focus.

I have no idea how common it is in your country, or globally, or in specific industrial sectors. Usable stats in tech are hard to come by (in large part because of its ubiquity). But your statement is just false, and commonly repeated.


> Sure, if you're going to throw away your early prototype there are better languages. But nobody ever does that.

You can’t be serious. I’ve seen prototypes get thrown away all the time, especially at startups. In fact, I’d say that’s more common than not.


One of the things I've vowed to do for my next interview cycle is set myself up an empty project with tests, and it is appalling how many interviewers assume that a senior developer won't simply laugh at the suggestion of implementing without any tests and leave the room.

It takes too long to do these basic steps and then copy them to new projects. And generators only work once, if then, which means they are most useful in languages that have stopped iterating, which is not that compelling.

I'm hoping the next version control system solves the rerere problem and we can build our projects by forking a template project, then pulling commits from upstream as necessary.


You have to remember though that coding interviews != production code. Taken to the extreme, are you also adding logging, metrics, performance benchmarks, etc?

TDD is great if you can get it working in a tight interview time schedule - they can also reveal any misunderstandings of requirements before the actual solution is implemented!

On the flip side however, many interviewers have experienced countless folks who spend the majority of the interview time on tests and setup, only to run out of time on delivering the solution to the actual presented problem. When I feel like somebody's spending too long on these things I'll try to nudge them towards wrapping up the tests and moving onto the solution. You would be surprised though by just how frequently it's met with open hostility, by candidates with only 10-15 mins left in an hour-long interview and no solution started!


And in a real situation I have all of these tools set up and running, so I shouldn't need to spend more than a minute faffing about with them. I'm just writing exploratory tests to make sure that I've got the bits right before I put them together. Even in an hour task those tools can be a force multiplier, once you figure out how to make them work instead of fighting them constantly.

What's happening in interview loops is that we're implicitly or explicitly selecting for people who prefer to YOLO instead of writing tests, and then we are surprised how hard it is to get new and existing hires on board with mature testing processes. The people you need don't work here, they work somewhere else.


Have you tried something like Kotlin? It has the same experience IMO of being very easy to refactor, especially because the IDE can do so many refactorings for you very fast, and because the compile turnaround time is so low.


I can't understand why people would prefer to add "?" to everything instead of just having exceptions, which automate that behavior.

In the bad old days of C there were two kinds of programs: programs without correct error handling, and programs where half the loc are unhappy paths that do what exceptions do... with a huge amount of work.

Today people are repeating the same mistakes of the past. Putting a "?" on everything is a lot better than what you had to do in C, but why do that when you can just use a language with exceptions?

It is like somebody showed cavemen fire (exceptions) and they decided it wasn't worth anything and went to go screw around with other things.


I prefer having extra work done writing code (adding "?") than having to do extra work reading code. Exceptions are functionally invisible control flow; it isn't clear to the reader that a function may blow up if the exceptions are unhandled.
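A minimal Rust sketch of that trade-off (function names invented for illustration): the possibility of failure sits in the signature, and every `?` is a visible early return.

```rust
use std::num::ParseIntError;

// The signature alone tells the reader this call can fail.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

// The caller cannot forget it: each `?` is a visible early return.
fn parse_endpoint(host: &str, port: &str) -> Result<(String, u16), ParseIntError> {
    let p = parse_port(port)?;
    Ok((host.to_string(), p))
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_endpoint("localhost", "not-a-port").is_err());
}
```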


In Swift, at least, the possibility that a function can throw must be marked as part of its signature, and the exception cannot be ignored if it is thrown so the call requires explicit syntax as well, so there is no way to miss that something could "blow up" when reading the code.


It's a little old at this point, but I find the Swift Error Handling Rationale design doc to be absolutely fascinating. It cites other languages' error handling paradigms (including Rust) if you're curious:

https://apple-swift.readthedocs.io/en/latest/ErrorHandlingRa...


Fascinating, looking forward to reading this later today! I've used a handful of languages over the years, and I don't have any academic perspective on different error handling techniques, but there's no doubt that the way Swift does it feels particularly natural and safe, while still getting out of your way. I love all the options for handling errors in a meaningful way.


In Java, functions declare exceptions in their type signatures, so it does all of that automatically. Then you get a compile error if you don't handle the exception in the function or declare that the function throws it, so it is type safe.

Note that people now consider that as a mistake, people prefer having Exceptions be hidden instead of explicit and requiring handling like that.


We are in the distributed systems age. If systems are composable, operations can fail for reasons that are completely unfathomable to the client. It's not reasonable to have a SharkBitTheOpticFiberCableException and 100,000 other ones that handle every reason why an operation failed.

What the client should know is how an error affects what it is doing, it wants answers to questions like

  * Is it likely this error will recur if I retry immediately? in 1 minute? in 1 day?
  * What is the scope of this error?  Does it affect the entire system?  Does it affect a particular database record?
  * What do I tell the user?
  * What do I tell the system administrator?
Actual improvement in this area won't come from information hiding, but it could come out of attaching some kind of ontology to exceptions, where exceptions are tagged with information of the above sort. That is, it is not about having names for them and a hierarchy, but about having rather arbitrary attributes that help the exception management framework (somewhere high in the call stack!) do the best it can in a bad situation.
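A rough Rust sketch of that idea (every name here is invented): errors carry attributes a framework high in the call stack can act on, instead of a deep hierarchy of names.

```rust
use std::time::Duration;

// Hypothetical tags describing how an error affects the caller,
// independent of what caused it.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Retry { Never, After(Duration), Immediately }

#[derive(Debug, Clone, Copy, PartialEq)]
enum Scope { Record, Subsystem, WholeSystem }

#[derive(Debug)]
struct TaggedError {
    message: String,   // what to tell the administrator
    user_hint: String, // what to tell the user
    retry: Retry,
    scope: Scope,
}

// A coordinator high in the call stack consults only the tags,
// never the concrete cause.
fn should_retry_now(e: &TaggedError) -> bool {
    e.retry == Retry::Immediately && e.scope != Scope::WholeSystem
}

fn main() {
    let e = TaggedError {
        message: "upstream timeout".into(),
        user_hint: "please try again".into(),
        retry: Retry::Immediately,
        scope: Scope::Subsystem,
    };
    assert!(should_retry_now(&e));
    assert_eq!(e.user_hint, "please try again");
    assert_eq!(e.message, "upstream timeout");
}
```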


While I think this is a good point, a lot of the answers cannot be determined by the generator of the exception. Your SQL library cannot know what the implications of an error are -- is this a minor part of the system for which the error can be logged but mostly ignored, or is it critical? Etc.

People want error handling to vanish so that they can follow the "normal" flow, but in fact error handling is one of the critical things code does.


This is sort of what HTTP codes get at though.

The server doesn't know what you want to do with a 400 but it knows that the problem is with the request for example.


Actually there is somewhat standardized set of SQL error codes and they can be put into a hierarchy like the HTTP codes.

For instance, you can have a SQL error because the syntax of your SQL is wrong. If you're not doing "dynamic SQL" you know this is a programming error (it doesn't matter what input was supplied to the functions.) One common error is "attempted to insert a row with a duplicate key", frequently you want to catch that SQLException and rethrow all the rest.

The ideal SQL library for Java would expose the hierarchy implicit in SQL errors as a class hierarchy.
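A hedged sketch of what exposing that hierarchy could look like in Rust (the error names are invented, not any real library's API): catch the duplicate-key case locally, rethrow all the rest.

```rust
// Hypothetical classification of SQL errors into a small hierarchy.
#[derive(Debug, PartialEq)]
enum SqlError {
    Syntax(String),       // a programming error: bad SQL text
    DuplicateKey(String), // constraint violation on insert
    Other(String),        // everything else
}

// Stand-in for an insert against a database.
fn insert_user(key: &str, taken: &[&str]) -> Result<(), SqlError> {
    if taken.contains(&key) {
        return Err(SqlError::DuplicateKey(key.to_string()));
    }
    Ok(())
}

// Handle only the duplicate-key case here; propagate everything else.
fn upsert(key: &str, taken: &[&str]) -> Result<bool, SqlError> {
    match insert_user(key, taken) {
        Ok(()) => Ok(true),                          // inserted
        Err(SqlError::DuplicateKey(_)) => Ok(false), // already there, fine
        Err(e) => Err(e),                            // rethrow all the rest
    }
}

fn main() {
    assert_eq!(upsert("alice", &["bob"]), Ok(true));
    assert_eq!(upsert("bob", &["bob"]), Ok(false));
}
```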


I had a similar thought a while back [0]. Developing a sane ontology of error types and their implications is a hard problem, but I think it could be done. The subset of errors that is the most frustrating and hard to deal with are ones where, as you point out, the client will have no way to estimate how long a failure mode might persist, at which point you resort to exponential backoff (actually probably an s-curve).

The issue is that sometimes the solution to the issue would require the client to get up and get out a shovel, and go dig somewhere or something. When the abstractions break down that hard there isn't really a way for the developer of the code to handle that unless they somehow stuff a full blown AGI into their program, and even then it would be a stretch.

0. https://github.com/tgbugs/idlib/blob/master/docs/identifiers...


The mistake was not "explicit errors". It was having a mix of error types, some explicit and some implicit, with no convenient way to combine them, plus the interface complications.

Note that most newer languages are choosing explicit errors. This includes at least Go, Rust, Swift, Zig, and Odin.


The second mistake was only flirting with Bertrand Meyer’s work until the Gang of Four showed up and wrecked Java forever.

Meyer + functional core nets you a great deal of code with no exception declarations and an easy path for unit tests. If it hurts to do stuff it might not be the language that sucks, it might be you. Pain is information. Adapt.


Don’t know Meyer’s work - any suggestions for good starting points?


There’s the Design By Contract work of course, but I’m still trying to cite what I thought was his best advice which is to separate decisions from execution, which is compatible with but I find to be subtler than the functional core pattern.

Often we mix glue code and IO code with our business logic, and that makes for tough testing situations. Especially in languages that allow parallel tests. If you fetch data in one function and act upon it in another, you have an easy imperative code structure that provides most of the benefits of Dependency Injection. Your stack traces are also half as deep, and aren’t crowded with objects calling themselves four times in a row before delegating.

    if (this.shouldDoTheThing()) {
       this.doTheThing();
    }
Importantly, with this structure, growth in complexity of the yes/no decision doesn’t increase the complexity of the action code tests, and growth in glue code (auth headers, talking to multiple backends, etc) doesn’t increase the complexity of the logic tests.

A big part of scaling an application is finding ways to make complexity additive or logarithmic, rather than multiplicative. But people miss this because they start off with four tests checking it the wrong way, and it takes four tests to do it the right way. But then later it’s 6 vs 8, and then 8 vs 16, and then it’s straight to the moon after that.
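A tiny Rust sketch of that decision/execution split (names invented): the decision is a pure function you can test exhaustively, the execution is plain glue.

```rust
// Decision: pure, trivially testable, no IO involved.
fn should_send_reminder(days_since_login: u32, opted_in: bool) -> bool {
    opted_in && days_since_login >= 30
}

// Execution: stand-in for the IO/glue code.
fn send_reminder(log: &mut Vec<String>, user: &str) {
    log.push(format!("reminder sent to {user}"));
}

fn main() {
    let mut log = Vec::new();
    if should_send_reminder(45, true) {
        send_reminder(&mut log, "alice");
    }
    // Growth in decision complexity doesn't touch the execution tests,
    // and vice versa.
    assert!(should_send_reminder(45, true));
    assert!(!should_send_reminder(45, false));
    assert_eq!(log.len(), 1);
}
```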


Learn Eiffel. From their own explanation: https://www.eiffel.org/doc/eiffel/Learning_Eiffel

> Remember that Eiffel, unlike other programming languages, is not just a programming language. Instead, it is a full life-cycle framework for software development. As a consequence, learning Eiffel implies learning the Eiffel Method and the Eiffel programming Language. Additionally, the Eiffel development environment EiffelStudio is specifically designed to support the method and language. So having an understanding of the method and language helps you to appreciate the capabilities and behavior of EiffelStudio.

I read "Object-Oriented Software Construction" to do so, but it was long enough ago that I googled "The Eiffel Programming Language" because my brain had substituted that title instead because IMHO, it's more accurate.

The above link should have many current resources for you.


> Note that people now consider that as a mistake

Correction: Some people. Java's checked and unchecked exception approach is quite nice if used judiciously. It certainly beats checking for error after every function call (default: mostly people ignore error codes) and you even get typed errors so you can trivially incorporate exception handling in the conceptual design as a first class design element.

I am frankly not sure how people get confused about "control flow" and exceptions. (In decades of Java programming the only thing that can still cause minor reading/writing nuisance are generic types and type erasure in over elaborate generic code.)


The greatest secret of exceptions is that in most cases you don't need to catch them. What should really be on your fingertips is

   try {
      ... something ... 
   } finally {
      ... clean up ...
   }
this (plus try-with-resources) is the genius of exceptions. The tragedy of exceptions in Java is that checked exceptions convert the above to

   try {
     ... something ...
   } catch(ACheckedExceptionThatHasNothingToDoWithThisCode x) {
      throw new SomeOtherCheckedExceptionToPleaseTheCompiler(x)
   } finally {
      ... do what has to be done ...
   }
with the variations of

   throw new AnUncheckedExceptionSoIDontVandalizeMyCodeMore(x)
and

   catch(...) {
      // i forgot to rethrow the exception but at least the compiler isn't complaining
   }
as well as

   // i forgot to add a finally cause because I was writing meaningless catch clauses
As much as I think checked exceptions are a mistake in Java, it is not hard to make up your mind about rethrows and apply them in a checked or unchecked form with little or no thought.

The unhappy path that you get for free with exceptions is correct for code with ordinary control flow. Most of the code has no global view of the application and is no position to handle errors. On the other hand, for many simple programs, the correct behavior is "abort the program, clean up resources, display an error message" which a sane exception system gives you for free (except for the finally which cleans up the happy path too)

For a complex control flow there is something high up in the call stack that has global responsibility. Imagine a webcrawler which is coordinating multiple threads that call fetchUrl(url). fetchUrl doesn't need to catch exceptions at all, just clean up with finally. What it may need to do is tag exceptions with contextual information that will help the coordinator make decisions. That webcrawler in particular will deal with intermittent failures all the time, and only the coordinator is in a position to decide whether it wants to retry and on what schedule.


Java isn't generic over exceptions. You can't write a method that takes an instance of Foo and says "my method throws whatever Foo.bar() throws" or even "my method throws iff Foo.bar() throws".

And this means that your method either always demands to be wrapped in a try-catch, or you migrate to unchecked exceptions.

Rust makes errors a part of the regular type system, so they automatically benefit from all its features.
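In Rust that genericity falls out of the type system directly; a minimal sketch of a helper that "throws whatever f throws":

```rust
// Generic over the caller's error type E: whatever error f can
// produce flows through unchanged.
fn retry_once<T, E>(mut f: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    match f() {
        Ok(v) => Ok(v),
        Err(_) => f(), // one more attempt; same error type flows through
    }
}

fn main() {
    let mut calls = 0;
    let result: Result<i32, &str> = retry_once(|| {
        calls += 1;
        if calls < 2 { Err("transient") } else { Ok(42) }
    });
    assert_eq!(result, Ok(42));
    assert_eq!(calls, 2);
}
```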


> Java isn't generic over exceptions. You can't write a method that takes an instance of Foo and says "my method throws whatever Foo.bar() throws"

If you bend over far enough backwards, and squint a bit, you can kind of do that...

    interface Runner<E extends Throwable> {
      public void run() throws E;
    }

    class Test {
      public static <E extends Throwable, T extends Runner<E>> void test(T runner) throws E {
        runner.run();
      }
    }
It compiles. I haven't tried running it though.


> You can't write a method that takes an instance of Foo and says "my method throws whatever Foo.bar() throws" or even "my method throws iff Foo.bar() throws".

If you could it'd mean altering the implementation would automatically alter the API, which would be rather unexpected.

That's why the Java approach is to wrap exceptions and propagate causal chains. The underlying errors thrown by the implementation can change but the advertised exceptions don't, and no information is lost.


> Note that people now consider that as a mistake, people prefer having Exceptions be hidden instead of explicit and requiring handling like that.

Well, Swift is a much newer language than Java, and exceptions in Swift cannot be hidden either. And some people do rather like this.


> Note that people now consider that as a mistake, people prefer having Exceptions be hidden instead of explicit and requiring handling like that.

What people? Please tell me where they’re at so I can tell them they are wrong (lol)

But seriously, I could not disagree more.


The designers of the Java functional and stream library, for one. None of the functional contracts have throws. So you are forced to have unchecked exceptions for everything, unless you want a truly mind-boggling amount of try-catch everywhere, which will rapidly exceed your normal code by a factor of 2x-3x.


... or you can just use a sane FP library like

https://github.com/paulhoule/pidove

Some people don't like the Lispy signatures so I did start coding up a version with with a fluent interface but didn't quite finish.

Overall I would say the implementation of lambdas and method references in Java 8 was genius, but the stream library was a big mistake. Part of it is that it has this cumbersome API that in principle would let it optimize query execution by looking at the pipeline as a whole, but doesn't really take advantage of it.


In Java, yes. Note that the JVM doesn't enforce checked exceptions. It's a language level thing. So in Kotlin for example, where all exceptions are unchecked, you can use the streams library without needing try/catch.


I get where you are coming from, but imagine if every other "to the human" process description we had was done this way.

I actually think this would be a fun one. How to make scrambled eggs, but where all failure cases are covered. Would be the "Hal fixes a lightbulb" in prose.


That gets to the original promise of computers, doesn't it? That they'd perform repetitive tasks quickly and reliably.

Meanwhile, every time I make scrambled eggs, there is a small but very real chance that my house burns down. And we accept this because to err is human.


Sorta? But a lot can be packed away in "other directions." Most recipes, for example, assume that setup/teardown is intrinsic to the kitchen. As such, to know the procedures to do those things, you would look somewhere else.

That is, you aren't accepting a risk that things will go wrong. You have moved what to do about many exceptions to somewhere else.


Assume all functions can throw and there is no extra work reading. A function that has no possibility of error is so uninteresting that focusing on that is the wrong thing in the context of error handling.

Furthermore, handling errors has little to do with where the error is actually caused. In general, you can only do two things with errors: log and kill the operation or retry the operation. Neither of these has anything to do with the leaf function 20 items down in the stack that actually made the network call that failed.
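A sketch of that in Rust (names invented): the retry policy lives high up the stack in a wrapper, and the failing leaf operation knows nothing about it.

```rust
// The caller decides the retry policy; the leaf op just returns Result.
fn with_retries<T, E>(attempts: u32, mut op: impl FnMut() -> Result<T, E>) -> Result<T, E> {
    let mut last = op();
    for _ in 1..attempts {
        if last.is_ok() { break; }
        last = op(); // retry the whole operation
    }
    last
}

fn main() {
    let mut failures_left = 2;
    // Simulated flaky network call: fails twice, then succeeds.
    let r: Result<&str, &str> = with_retries(3, || {
        if failures_left > 0 { failures_left -= 1; Err("timeout") } else { Ok("data") }
    });
    assert_eq!(r, Ok("data"));
}
```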


"Assume all things can throw" is what I've seen people do in Java code that adds a million try catch wrappers around everything just in case something may go wrong at some point.

The end result is either completely unreadable or impossible to figure out. "How do I return fallback data for FooBarService.wiggle()" often ends up in digging through (incomplete, outdated) documentation or with code that breaks unexpectedly, sometimes even in production.

Note that Rust has the same issue: any method can panic and allocation may just fail at some point. There are very few good ways to handle those problems correctly, which is why this "everything may kill your program" approach is often criticised.


Don't get me started on Java and checked exceptions. If you don't have checked exceptions, you should not have a million try/catch blocks. In fact, just the opposite. Since you only care about errors when you can retry (or ignore) you should only have a small number of try/catch blocks. Ideally one or none.

My best example of this is a UI application that I built that had a single try/catch block around the event loop. It just displayed the error message to the user and returned to the event loop. If they tried to save a file to a network and failed, they got the message, and could just hit save again for somewhere else. No other code needed.
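A rough Rust analog of that single-handler-around-the-event-loop shape (names invented): one match reports the error and returns to the loop.

```rust
// Stand-in for fallible event handling (e.g. saving to a network share).
fn handle_event(ev: &str) -> Result<String, String> {
    if ev == "save" { Err("network unavailable".to_string()) }
    else { Ok(format!("handled {ev}")) }
}

// The one "catch block": show the message, carry on with the next event.
fn run(events: &[&str]) -> Vec<String> {
    let mut shown = Vec::new();
    for ev in events {
        match handle_event(ev) {
            Ok(msg) => shown.push(msg),
            Err(e) => shown.push(format!("error: {e}")),
        }
    }
    shown
}

fn main() {
    let out = run(&["open", "save", "close"]);
    assert_eq!(out[1], "error: network unavailable");
    assert_eq!(out.len(), 3);
}
```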


> There are very few good ways to handle those problems correctly, which is why this "everything may kill your program" approach is often criticised.

In Erlang, "everything may kill your program" is typical method of operation, and there should always be some kind of path to reset your state from known good values.


> A function that has no possibility of error is so uninteresting that focusing on that is the wrong thing.

I disagree, a function that has no possibility of an error is a proper function, and what we need for performance optimized code.

Proper functions by definition are just mappings from a domain to a range. That mapping really shouldn’t be predicated on any other state, so it should never fail if the inputs are valid within the domain.

We need to focus on such functions if we want performance, because we can only achieve top speeds by not worrying about checking the function result for correctness. Given a proper function, we should just be able to compute the result and move on to the next function.

Therefore it’s of great benefit to us (as authors of performant code) to separate our fallible functions from our infallible ones. Keep the fallible ones outside of hot loops, only infallible ones inside, and that’s a recipe for mechanical sympathy of the sort that results in great performance.
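A small Rust sketch of that separation, assuming validation happens once at the boundary: the hot loop calls only a proper (infallible) function.

```rust
// Fallible validation at the boundary, done once.
fn validate(input: &[f64]) -> Result<&[f64], String> {
    if input.iter().all(|x| x.is_finite()) { Ok(input) }
    else { Err("non-finite input".to_string()) }
}

// A proper function: total over its validated domain, cannot fail.
fn scale(x: f64) -> f64 { x * 2.0 }

fn main() {
    let data = [1.0, 2.0, 3.0];
    let checked = validate(&data).expect("validated once, up front");
    // No per-element error checks inside the hot loop.
    let out: Vec<f64> = checked.iter().copied().map(scale).collect();
    assert_eq!(out, vec![2.0, 4.0, 6.0]);
}
```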


If I am in the business of writing robust code, then "assuming all functions can throw" means at the very least forcing every function call to be surrounded by a try/catch block. It almost always makes sense to handle an error locally if you can; for example, if I want to retry the operation (let's say I'm writing a distributed database client), it may make sense for me to retry another node rather than unwinding to the application level, which has now lost all context.

>A function that has no possibility of error is so uninteresting that focusing on that is the wrong thing.

I spend a lot of time debugging errors in code that has 0% chance of failing. It tends to involve a lot of matrix math. This isn't something you can say is universally true especially given all the hype around AI now.


> It almost always make sense to handle an error locally if you can

This is highly presumptuous. I have written many programs that did not need to handle errors locally, and so exception handlers were only at the very top level (or, actually, just below the top-level usually - but the point is that there were generally few and I had flexibility to decide where to put them). Perhaps you and I write very different applications. But the fact remains that the "almost always" in your statement doesn't hold.

Alternative line of reasoning: if this was always true then there would be little point to Rust's ? as it would be so rarely used.


> It almost always make sense to handle an error locally if you can

Yes, but “if you can” does a lot of heavy lifting here. In most cases you can’t, and this is when Rust’s ? is used.


> If I am in the business of writing robust code; then "assuming all functions can throw" means at the very least forcing every function call to be surrounded by a try/catch block?

No, absolutely not! You only care about errors where you can retry/ignore or log and terminate so you only have try/catch in those areas. So maybe one or two.


We have to be deliberate about where we retry, and how quickly. It’s all too easy for layers to create a death ray of n factorial requests.


What you're describing here are unchecked exceptions, which Rust has in the form of panic. There are other kinds of errors that can be handled closer to the point where they occur.
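In Rust terms, the split looks roughly like this sketch (names invented): Result for recoverable conditions handled near their source; panic, caught only at coarse isolation boundaries via catch_unwind, for bugs.

```rust
use std::panic;

// Recoverable condition -> Result, handled close to where it occurs.
fn checked_div(a: i32, b: i32) -> Result<i32, String> {
    if b == 0 { Err("division by zero".to_string()) } else { Ok(a / b) }
}

fn main() {
    assert_eq!(checked_div(6, 3), Ok(2));
    assert!(checked_div(1, 0).is_err());

    // A panic is Rust's unchecked exception: it unwinds, and is meant for
    // bugs, not routine flow control.
    panic::set_hook(Box::new(|_| {})); // silence the default panic message
    let caught = panic::catch_unwind(|| -> i32 { panic!("bug: unreachable state") });
    assert!(caught.is_err());
}
```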


Exceptions always work the same way. You learn how to read code with exceptions pretty quickly.


How does an explicit raise operator like '?' work any different than that? You can learn how to read it pretty quickly.


You may prefer that, but everyone else who has to read your code - doesn't.

Yours is an approach which is likely to ensure your code is discarded and has to be rewritten relatively quickly.


In theory, I like exceptions. In practice, I hate them. Few languages statically check exception handling - e.g. Java, and even then only partially - leading to stability-ruining edge cases leaking into production in the most unexpected of places, caught only by QA if you're lucky. Exception handling codegen can also be rather atrocious, leading to unavoidable performance degradation when third party middleware throws unavoidable exceptions, even when you do fix the stability bugs. They're also a nasty and recurring source of undefined behavior when they unwind past a C ABI boundary, an issue I've encountered in multiple codebases with multiple exception-throwing languages. In my personal experience, programmers are also rather terrible at writing exception-safe code.

Result and ? force you to think about - or at least acknowledge - the edge cases. For a throwaway script or small scale easily tested program, that might be a drawback. For MLOC+ codebases where link times alone are sufficient to start impeding testing iteration times, it can be a big help for correctness and stability, while still being relatively lightweight compared to other manual error handling.

Finally - Rust has exceptions. They're called panics. They can be configured to abort instead of unwind. This helps set the tone - they're really meant for bugs, and exceptionally exceptional circumstances. They cause all the problems of exceptions, too - unconsidered edge cases, undefined behavior unwinding past C ABIs, the works. Fortunately, it's reasonable in Rust to aim to eliminate all panics but bugs.


See

https://gen5.info/q/2008/07/31/stop-catching-exceptions/

and

https://gen5.info/q/2008/08/27/what-do-you-do-when-youve-cau...

It's very important to minimize the burden of handling errors in code with simple control flow. Frequently I see people try very hard to handle errors with monads in languages like Scala at the micro level and they are so burned out by this that they don't put any effort into handling errors properly at the macro level.

If you make the micro level as automatic as you can, it is possible devs will address the macro level. What is necessary at the micro level is not dealing with a crisis that prevents the compiler from building your code, but rather cleaning up the environment consistently in both normal and error conditions and giving the macro level sufficient context for the error that it can do the right thing.


Micro and macro are both important.

I've built crash collection and deduplication systems, and I've heard of triage that helps discount crashes generated by hardware failures or overeager overclocking. I've collected telemetry and set up symbol servers and source indexing to streamline bug squishing, and helped build systems which verify game content up front to discover even non-code bugs before they're shipped to users, and to properly attribute said errors to the content that generated them in an easily navigable and fixable way. I've helped engineer error-tolerant systems that won't require handholding by engineering to recover from bugs. Plenty of focus on the macro.

But all it takes is a single uncaught exception slipping past QA to make one consider a recall of physical product, even in this era of ubiquitous internet, for a handheld console game - for something as trivial as a missing or corrupt sound effect. If things at the micro level are neglected too much, nothing you do at the macro level can really mitigate that in a sane manner... except using tools that check you're doing things right at the micro level. And I have yet to see exceptions handle that micro level particularly well.


Error handling in Rust is actually a lot worse than you think. In fact it may be the single worst aspect of the language.

Fundamentally it is difficult to impossible to fix bugs without knowing what code caused it. Java-style exceptions give you a backtrace for free, which is a huge head start. With Rust you have to do a lot of manual plumbing with something like error_stack to get similar functionality, out-of-the-box Errs do NOT capture this.

Far more productive to work in an environment that does the right thing "for free" vs having to do it manually.


Ugh that’s one of the worst parts of go too. Stackless errors are so useless and hard to debug.


> Java-style exceptions give you a backtrace for free, which is a huge head start. With Rust you have to do a lot of manual plumbing with something like error_stack to get similar functionality

With crates like anyhow and eyre you also get backtraces "for free" nowadays, without needing to do manual plumbing (all you need to do is toggle on a feature flag).


    fn foo() -> Result<(), E1> { /* ... */ }
    fn bar() -> Result<(), E2> {
        foo()?;
        Ok(())
    }
This requires `bar` to have a function signature that notes it may error, `E2` must implement `From<E1>`, and the caller of `bar` must use the result or explicitly silence the warning. Meaning if a program creates a Result the error must be handled - you can't silently let errors bubble up through the call stack.

`Result` implements some common combinators like `.ok()` to convert to `Option`, `map`, `map_err`, `or_else`, etc to reflect the common cases of error handling.

And finally, since Result doesn't require non-local control flow like exceptions you know that `drop` will run as the functions return back up the callstack.

And if you want to use Result like exceptions... you can. But you can't hide it from callers, and callers are still free to handle them elegantly.
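A quick sketch of those combinators in action, on a tiny fallible function invented for illustration:

```rust
// A minimal fallible function to exercise the combinators on.
fn half(n: i32) -> Result<i32, String> {
    if n % 2 == 0 { Ok(n / 2) } else { Err(format!("{n} is odd")) }
}

fn main() {
    assert_eq!(half(8).map(|v| v + 1), Ok(5));        // transform the success value
    assert_eq!(half(3).ok(), None);                   // Result -> Option
    assert_eq!(half(3).map_err(|e| e.len()), Err(8)); // transform the error
    assert_eq!(half(3).or_else(|_| half(4)), Ok(2));  // fall back to another attempt
}
```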


How would an exception automate the behavior of "?"?

What ? does in Rust is unwrap the result: check if it is Err and, if so, return from the function with an error result. On top of that it will auto-convert the error type (if the type has the From/Into traits implemented).

So it would do:

    try {
        // the code that may fail
    } catch (error) {
        // do we just throw the same error?
        // or convert the exception to a custom other exception?
    }

If I see an API that throws me a low-level exception without context I go mad - like a file-not-found exception when executing an API that does multiple file IO operations.
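For completeness, a minimal sketch of the From-based auto-conversion that ? performs (ConfigError is an invented example type wrapping the low-level cause):

```rust
use std::num::ParseIntError;

// ConfigError wraps the low-level error; `From` is what `?` uses to
// convert automatically.
#[derive(Debug)]
enum ConfigError {
    BadPort(ParseIntError),
}

impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self {
        ConfigError::BadPort(e)
    }
}

fn read_port(raw: &str) -> Result<u16, ConfigError> {
    // On Err, `?` returns early, converting ParseIntError -> ConfigError.
    let port = raw.parse::<u16>()?;
    Ok(port)
}

fn main() {
    assert_eq!(read_port("8080").unwrap(), 8080);
    assert!(matches!(read_port("x"), Err(ConfigError::BadPort(_))));
}
```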


The problem with exceptions isn't the syntax, but the hidden control flow (they are essentially a goto in disguise).

Error union return values make a lot more sense, the rest is just syntax sugar details (and that's where opinions differ I guess).


Exceptions are not a form of goto; they are both less powerful (as they are structured) and more powerful (as they are nonlocal). They desugar to continuations, but so does Rust's option type handling and ?. In fact they are pretty much equivalent.

I'm not terribly familiar with either language, but I don't see any particular difference between swift and rust error handling for example, swift will also mark fallible function calls with try, similarly to ? in rust.

For what is worth the author of the swift standard library believes that try is a mistake: as most functions can fail in practice it just becomes noise. It might be more useful to mark can't fail regions.


I was actually wondering (in Zig, which has a per-statement "try" which is essentially the same as the ? in Rust) whether it also makes sense for whole blocks, which would look a lot like traditional try-catch block in languages with exceptions, e.g. instead of:

    try may_fail_1();
    try may_fail_2();
    try may_fail_3();
...this could be grouped into:

    try {
        may_fail_1();
        may_fail_2();
        may_fail_3();
    } 
...but would behave exactly the same as the individual trys: if any function in the block returns with an error, that same error is passed up to the caller. But I guess that forcing individual trys makes you think harder about handling individual errors than just pushing the responsibility for error handling up the callstack.
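For what it's worth, in Rust (where try blocks are still unstable) the grouped form can be approximated today with an immediately-invoked closure - a common workaround, sketched here with invented functions:

```rust
// Stand-in for a fallible step.
fn may_fail(n: i32) -> Result<i32, String> {
    if n < 0 { Err("negative".to_string()) } else { Ok(n) }
}

fn main() {
    // Grouping several fallible calls, similar in spirit to a `try { ... }`
    // block: the closure's `?`s all short-circuit to one Result.
    let total: Result<i32, String> = (|| {
        let a = may_fail(1)?;
        let b = may_fail(2)?;
        let c = may_fail(3)?;
        Ok(a + b + c)
    })();
    assert_eq!(total, Ok(6));
}
```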


Sure, but we have now (almost) gone full circle:).

When writing exception safe code, for me is more important to know which functions are guaranteed not to fail as they will be called in the commit path. Currently I just comment which operations are no-throw and otherwise assume that everything else can fail, but it would be nice to have the compiler tell me.


> for me is more important to know which functions are guaranteed not to fail as they will be called in the commit path

But Zig allows you to see immediately which functions are guaranteed not to fail, because those functions' return types won't include an error type.


I've been pondering the same thing![0] Essentially, one would get (checked-)exception-like behavior, except that performance would be better, and whether a function can fail or not would still need to be declared explicitly in its return type.

> But I guess that forcing individual trys makes you think harder about handling individual errors than just pushing the responsibility for error handling up the callstack.

It probably does but it can also make code much harder to read if you have to check for errors after every other line. I'm a bit divided on this.

[0]: https://news.ycombinator.com/item?id=34856910


If you consider the case where you call a function that throws an exception without you expecting it -- then the control flow will skip your code, and this is indeed not very structured, like a goto, and in fact less local than a goto.


I think the nonlocal part is the scary part: it becomes very scary to figure out which parts of the code can fail and how, especially when failures can come from an arbitrarily deep call stack.

Maybe checked exceptions could be more useful to explicitly annotate allowed failures, but at the same time we all know how that's going in Java world.


My two main languages are C# and Go.

* C# uses exceptions, and when I code it feels exactly the right choice

* Go uses error return values, and when I code it feels exactly the right choice

For some reason both feel very much ideal in use. Maybe it is because in each case the language syntax/ethos fits very well with the choice made, and so is frictionless when developing in the flow (and if used properly of course)? Maybe some other reason. Hey ho.


It's mostly philosophical: are you fine with blowing up with an exception, or would you rather have your functions return known values for the unhappy path? I personally like exceptions in exceptional cases, but I'd much rather have functions with explicit contracts (e.g. "this will return either True or False in all input cases", not "this will return either True, or an Exception in all cases when $foo doesn't exist in the database, and woe unto the programmer who forgets to catch this.")


Nim handles that with the {.raises: [].} pragma and the effect system, which is quite a neat approach. It’s like opt-in checked exceptions, but with much nicer ergonomics than Java used to have


Exceptions come with their own weirdness. Usually, if you want to handle an exception, you need to wrap the code that could generate it in a block, which means any variables declared there won't be available in the parent scope. I'd much rather have the ability to just write normal code and deal with the error on the spot, along with some syntactic sugar (such as "?") to return that error to the caller.
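That's roughly this shape in Rust (names invented): the error is dealt with on the spot, and the variable stays available in the enclosing scope with no try-block nesting.

```rust
// Stand-in for a fallible load.
fn load_config(path: &str) -> Result<String, String> {
    if path == "good.toml" { Ok("port = 8080".to_string()) }
    else { Err("not found".to_string()) }
}

fn main() {
    // Deal with the error right here; `config` is usable below,
    // in the same scope.
    let config = match load_config("bad.toml") {
        Ok(c) => c,
        Err(_) => "port = 80".to_string(), // fall back to a default
    };
    assert_eq!(config, "port = 80");
}
```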


IMHO both exceptions and error handling in Rust (and others) have their upsides and downsides.

Personally, I much prefer Rusts solution, being both more up front and at the same time more terse.

The metaphor is kinda stupid though, the "cavemen" in our scenario know very well that exceptions exist.


> It is like somebody showed cavemen fire (exceptions) and they decided it wasn't worth anything

Oh, it absolutely can not be that the Rust way is more powerful and you didn't understand it yet. No way. It's all those other people that don't understand the old concept that almost all of them know.


Despite what CS and SE classes try to drill into you, null results or failure cases are nearly always better handled right when they happen instead of passing them up with layers of exception handling. Log it, pass null up, and just immediately handle it. Fail early and none of the rest of the function matters.

Even types of exceptions are rarely useful results outside of reading the logs or sometimes in libraries outside of your control.


How do you handle a DB connection time out in your stack? You can log it and retry, eventually the entire call must be terminated though, and the quickest way is through exception propagation.


The most important considerations for errors are whether they can be retried, and who needs to change something to fix them.

The types can be useful for communicating this


I mean, ask the C++ community. They've had exceptions forever, but a large chunk of them forbid exceptions in their codebases.

I think there's a pretty good rule of thumb in modern systems-ish language design: If Go and Rust and Zig all do a certain thing, that thing is probably a great idea. These languages have very different priorities, but often they overlap.


Zig, unlike Go and Rust, provides an error return trace showing how the error bubbled up. This is a really interesting idea.

https://ziglang.org/documentation/master/#Error-Return-Trace...


Go and Rust both have this, though it's opt-in at each location where you add context.


Exceptions make it considerably harder to reason about state by reading the program text. As the notion that programmers should have some actual understanding of what they write slowly becomes less unfashionable, language features that make understanding code needlessly harder are losing some of their appeal even though they speed up writing the code.


What really makes code hard to read is having multiple paths to disentangle. There is one little error deep in the call stack but you have to vandalize the 10 functions above it in the call stack to carefully separate the error and non-error paths -- what's the probability that you will end up cleaning up properly in both paths when it isn't done for you with finally? What's the probability that somebody looking at this code is really going to find the subtle error in the error path or an error in the happy path caused and hidden by the complexity of the unhappy path?

I think the first C program I saw was a type-in terminal emulator from Byte magazine around 1985, and I was struck by the awkwardness of the error handling in the C stdlib. I spent a lot of time looking at the code, realized the author had "spaced it" at one point such that the error handling was wrong, thought "this sucks", and learned to write C programs with 3x the LOC because of all the alternate paths I had to put in to handle errors.

When I saw exceptions for the first time I felt strongly liberated because I got for free what I was working for so hard in C so I got to spend more time thinking about algorithms, the needs of the customer, things like that.


> and programs where half the loc are unhappy paths that do what exceptions do... with a huge amount of work.

This made me laugh harder than makes sense. I'm sure I've been guilty of doing said code, as well.


Exceptions make it difficult to find failure-points in the code. The ? annotates that at its call site, which improves discoverability by a lot and reduces readability by only a little.


> Exceptions make it difficult to find failure-points in the code

My experience doing Java, Go and Rust has been completely the opposite. Exception stack traces in Java are amazingly wonderful things - they exactly pin-point the failure points in the code. The amount of hunting I need to do to find out where something failed in the call stack in Go/Rust is tedious. You need a module/crate for error tracing or you end up wading against a strong current of despair.


> Java are amazingly wonderful things - they exactly pin-point the failure points in the code.

Yes.

Once the exception has happened. At runtime. Which is not when I want to be trying to fix things. I’d much rather handle as much as possible statically, knowing that what I push into has every non-panic code path cleanly handled.

I’ve never had the equivalent experience with exceptions, it’s always “well I’ve wrapped everything I possibly can in as much try-catch and handling as I possibly can, and oh look, some random piece of code has still thrown some random exception we’ve never seen before”.


You still need a top level exception handler in your main loop.


After a couple of years of coding Rust, I found the error system, including the ?, well thought out. It is explicit and clear that the error is or maps to the function return error.

The only thing is that Rust rightfully uses the ? to return early system on option as well, which removed the ability to have None coalescing with "?". This was the right choice from a language point of view, but I wish there would be a None coalescing syntax in Rust.
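For example (hypothetical function; `unwrap_or`/`unwrap_or_else` are the closest thing to None-coalescing today):

```rust
// `?` early-returns None here, mirroring how it early-returns Err
// in a Result-returning function.
fn first_word_len(s: &str) -> Option<usize> {
    let first = s.split_whitespace().next()?;
    Some(first.len())
}

fn main() {
    assert_eq!(first_word_len("hello world"), Some(5));
    assert_eq!(first_word_len("   "), None);

    // In lieu of a `??`-style coalescing operator, fall back explicitly:
    assert_eq!(first_word_len("   ").unwrap_or(0), 0);
}
```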


There's some subtlety here:

1. Exceptions have very high performance costs when thrown (comparable to a longjmp, which is very slow), so if you expect error cases to be common rather than exceptional, it's probably a lot more efficient not to use exceptions.

2. Exceptions break the linear flow of the code when you read it, so now you have to read a lot more code to figure out what the exception paths are and where and how they are handled.


1. Exceptions as commonly implemented in C++ have high overhead in the exception path. But that's just an implementation strategy. There is no reason why it wouldn't be possible to generate exactly the same code as for an optional type if desired (and in fact this was proposed for C++, cf. Herbceptions).

2. So do returns, but we have long settled that SESE is undesirable.


> Exceptions have very high performance costs

On the sad path. On the happy path they are faster than explicit error checking.


Kind of yes, and kind of no. On the happy path, if errors are very very rare, the check is also basically free thanks to branch prediction. They start to cost something when you start to add a higher frequency of errors, which incidentally is where exceptions cost a lot more.


The cost is higher because of all the branches that are scattered everywhere to check return codes. With exceptions there's a check at the place the error is thrown, but that's inevitable. There aren't checks scattered throughout the rest of the code, which would otherwise reduce icache utilization.


Arguing about icache utilization is a little silly here - the code will be laid out for you as though the branches are not taken (or you should force it to do so). In that case, the only "waste" of icache is the CMP and JMP, an additional 4-8 bytes per return, and literally 0 cycles.

When you do take an error, each RET takes you 1 cycle, plus the 10-15 cycle mispredict for the CMP+JMP because there's a stack engine in the CPU that tells you the address to return to. It's counterintuitive that doing "a lot" of things is cheaper than doing fewer things, but it's true.

In comparison, an exception involves taking the one control flow break to some cold control code (maybe page faulting), figuring out where to go using a jump table (slow), restoring the old state from that context (slow), figuring out the type of the thrown object (in many languages, also slow), and then handling accordingly. Each of these steps can easily take 100+ cycles, and may be more.

The math does not work out in favor of exceptions. Neither do the benchmarks in most cases. You do 1 slow thing to avoid doing 20 things that are trivially fast.


The checks you're talking about are duplicated more or less per statement in some types of code. Every single call site ends up with an `if err != nil` or moral equivalent. It can add up, also consider the extra register pressure. The return values aren't valuable anymore, they're just error signalling.

The compiler doesn't necessarily know what your error types are, it can try to use heuristics to move those blocks around but it's not like an exception where the types are a part of the language and the compiler can know that. We're talking about startup code here, nobody will be annotating their error branches with manual predictor probabilities, so we're limited to what the compiler can do.

Yes the act of throwing an exception is more work but it's exceptional, so doesn't matter. The slowest part is calculating the stack trace anyway and that's of huge value, which you don't get with error codes anyway.


There's no register pressure - TEST EAX, EAX (or CMP EAX, 0) // JNZ $ERROR_HANDLER is the instruction sequence we're talking about. Most error types are enums where 0 = "good" and any nonzero value is not good. This is the inverse of the "null pointer check" in C. It consumes no registers and a negligible number of code bytes.

There is obviously a sparsity of exceptional cases where error-handling code like this is worse than using exceptions. I would claim that it's a lot more sparse than you think. Many people use exceptions for things like "file not found" or "function failed for whatever reason," (my favorite) "timeout," or "bad input from the user." These cases are often not that exceptional!


> Exceptions have very high performance costs (equivalent to a longjmp which is very slow)

Could you elaborate on why they are so slow, compared to passing around/returning error objects explicitly?


They are slow because you need to restore context from an unknown/unpredictable place in the code, you have a table lookup (from a very cold table) to get the next program counter value, and you have to save and restore register values, while the callstack and the calling convention handle all of that complexity for you if you don't break the natural flow of the program.


But couldn't one implement exceptions internally by returning error codes? Yes, this would change the ABI but as long as we're not talking about the interface of a library, i.e. are not leaving the realms of our source code, this should be ok, shouldn't it?

In a sense, try/catch would then just be syntactic sugar that frees you from manually checking for errors after every single function call. Instead, you just handle them in bulk in a catch block, potentially a couple stack frames further upstairs.

EDIT: I just realized my suggestion wouldn't exactly be equivalent to exceptions, in the sense that it wouldn't give stack traces but error return traces, like in Zig:

https://ziglang.org/documentation/master/#Error-Return-Trace...
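That scheme is roughly what Rust's `?` already is. A sketch (the real expansion also runs the error through `From::from`, which is what lets error types convert as they bubble up):

```rust
use std::num::ParseIntError;

// The sugared form: `?` checks the return value and bulk-propagates errors.
fn read_number(raw: &str) -> Result<i64, ParseIntError> {
    let n = raw.parse::<i64>()?;
    Ok(n * 2)
}

// Hand-desugared equivalent: the "check an error code after every call" ABI,
// just hidden behind syntax.
fn read_number_desugared(raw: &str) -> Result<i64, ParseIntError> {
    let n = match raw.parse::<i64>() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    Ok(n * 2)
}

fn main() {
    assert_eq!(read_number("21"), Ok(42));
    assert_eq!(read_number("x"), read_number_desugared("x"));
}
```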


> In a sense, try/catch would then just be syntactic sugar that frees you from manually checking for errors after every single function call. Instead, you just handle them in bulk in a catch block, potentially a couple stack frames further upstairs.

Here someone else had the same idea: https://news.ycombinator.com/item?id=34846550


They are not the same. Errors force you to explicitly handle unexpected conditions. Exceptions don't. And "?" is for making error handling not take up half of loc.

Read up on how exceptions work in C++ implementation-wise. It's not pretty.


That's the problem, though, right? 99.999% of the time you absolutely should not be "handling" an error: you should merely propagate it so it gets closer to code that has actual intent. Languages that force you to try to "handle" errors--which includes Java, due to their botched concept of checked exceptions--both encourage the wrong behavior in the developer and cause the code to be littered with boilerplate to implement the propagation manually.

Meanwhile, they manage to encode the concept of "can fail" into not merely the type signature of a function but into the syntax used to access it, when--like other monadic behaviors, including "requires scoped allocation"--this is the kind of thing you tend to need to refactor into a codebase at a later time: instead, the code should always be typed as if everything can fail and everything can allocate (not just memory, but any resource); languages that get this right--such as C++ and Python--thereby deserve their stickiness.


> you absolutely should not be "handling" an error: you should merely propagate it so it gets closer to code that has actual intent

Why?!

There are 2 types of errors:

- an error in your program logic, which you need to fix

- an error from something out of your control (network down, disc errors, faulty input... Which you certainly must handle

What's the alternative? Let errors trigger undefined behavior and corrupt your DB? Not pretty.


Exceptions are not "undefined" behaviour, and they don't "corrupt the database". On the contrary, they're very often used to abort database transactions cleanly, even in complex chains of deeply nested function calls.

What people mean by "not handling errors" is that the Visual Basic style of "On Error Resume Next" is a terrible, terrible thing to do. The equivalent in modern languages is a try-catch block in the middle of a stack of function calls 200 deep. That function likely has no idea what the context before it is. Is it being called from a CLI? A kernel module? A web server? Who knows!

Just yesterday I had to deal with legacy code that made this mistake, and now it's going to cause a multi-day problem for several people.

It's an ASP.NET HTTP authentication module that simply swallows exceptions during authentication (e.g.: "Can't decrypt cookie"), doing essentially nothing. When deployed "wrong" (e.g.: encryption key is invalid) it just gets stuck in a redirect loop. The authentication redirects back with a cookie, it is silently ignored, then it redirects to the authentication page which already has a cookie so it redirects back, and so on.

There is nothing in the logs. No exceptions bubble up to the APM or the log analytics systems. The result is HTTP 200 OK as far as the eye can see, but the entire app is broken and we don't even know where or why exactly.

That's not even mentioning the security risks of silently discarding authentication-related errors!

This is what people mean by don't "handle" errors. Middleware or random libraries should never catch exceptions. It's fine if they wrap a large variety of exception types in a better type, but even then it is important to preserve the inner exception for troubleshooting.

I've had to tell every developer this that I've worked with recently as a cloud engineer. Stop trying to be "nice" by catching exceptions. Exceptions are not nice by definition and ignoring that reality won't help anyone.


That's C++. It puts the C in Cthulhu.


> I can't get it why people would prefer to add "?" to everything instead of just having exceptions which automate that behavior.

Exceptions? Which exceptions? How do you know which exceptions you're supposed to be handling and where they come from or when they happen?

I prefer the control flow of the program and the exact types of errors I'm handling to be explicit.


I've been writing Rust professionally for a few years now and if there's one thing I've learned it's that if you ever write a function that takes a parameter of `impl Fn(&Vec<&'a str>) -> &'a str` you are going to be in for some pain. Just make it `impl Fn(&Vec<&str>) -> String`. It is highly unlikely that the extra allocation is ever going to be noticed in the performance.

Just because Rust pretty much forces you to be explicit about your allocations doesn't mean you have to avoid them at all costs.


One day I need to get around to figuring out how to detect when people are going in circles with lifetime errors and have rustc open https://keepcalmandcallclone.website/ for them.


> It is highly unlikely that the extra allocation is ever going to be noticed in the performance.

I had almost this exact scenario, and yes there is pain in writing it with explicit lifetimes. But I can't agree the performance improvement is negligible; maybe in isolation, but I saw about a 100x speed increase for my application when I switched away from Strings. For me it was because I was doing many of those extra String allocations in a loop, so it killed my performance.


This is not an uncommon pitfall when working with strings in all languages. In Java for example it is drilled into people to use StringBuilder instead of concatenating with + on String if you do it in a loop, precisely because of this exact issue.


For sure, optimise when necessary. Rust lets you do this.


A lot of programmers will look at '`impl Fn(&Vec<&'a str>) -> &'a str`' and think what the hell is that, alien hieroglyphics?


Loosely speaking, it’s a type signature for a function that

1. takes an immutable reference to a vector of strings and

2. returns an immutable reference to a string,

3. with the constraint that the strings in the input vector must not be freed before the result is freed.

It’s a reasonably concise way of saying all that.


I'm pretty sure I never want to have anything to do with any of these concepts, no matter the syntax.


Same for APL, Perl, Lisp, Haskell and many others, or even C with multiple (double or triple) pointers and C++ with many pointer types.

At the end of the day, you can't understand a language without, well, learning it.


You can still return the str reference; callers can easily do the copy if they need while allowing for zero-copy for simpler usages.


[flagged]


thanks for running this through chatgpt for me


I have the same impression of Rust: great for software that is well scoped/defined and needs to be stable and efficient, not so much for quick iterations (which for startups is important) and software that doesn't need top performance.

I think in general that the Rust hype has outgrown what it's good for. If you're writing a web app in Rust then you may want to ask yourself if you're making the right choice.


For simple applications, Rust is actually pretty easy to work with in my experience. You don't get a lot of comforts other languages provide, but you don't always need those.

The performance difference between a Rust server and other languages are incredible, especially in terms of RAM usage and concurrent connections per second.

That said, if your program is going to need tons of entities stored in a database, I wouldn't even consider a language or framework without a solid ORM. Rust has some ORM-lite libraries but I'd end up picking a garbage collected language in practice just because of the difficulties that low level programming bring to such middleware.

Iterating in Rust isn't that hard as long as you don't try to cheat your way out. Instead of returning null for methods that you haven't implemented, add a todo!, etcetera. You have to do things somewhat right the first time. I think that's good, because there's nothing as permanent as a temporary proof of concept. You can clone/copy your way out of most annoying Rust restrictions at the cost of performance you'd otherwise sacrifice by picking a higher level language anyway.

If your startup doesn't know what it's building, you have bigger problems than the language you choose.


> The performance difference between a Rust server and other languages are incredible, especially in terms of RAM usage and concurrent connections per second.

Really depends on what "other languages" are here. If you're comparing against Python then sure, but if you're comparing against Go then the difference isn't that incredible.

> That said, if your program is going to need tons of entities stored in a database, I wouldn't even consider a language or framework without a solid ORM.

This is actually what I used Rust for recently and honestly the ORM situation is pretty good. The language itself is just too rigid for this kind of work for too little payoff.

> If your startup doesn't know what it's building, you have bigger problems than the language you choose.

That's true at a high-level, but iterating on small features/changes fast is what makes or breaks most startups.


> If you're writing a web app in Rust then you may want to ask yourself if you're making the right choice.

ok, I did ask myself this, still going with Rust there ;-)


> I can't get it why people would prefer to add "?" to everything instead of just having exceptions which automate that behavior.

Good question... Maybe...

Because with exceptions it's easy to end up with missing cases or unhelpful catch-all exceptions.

Typically with optional values I find that this is not the case for some reason.

Other interesting links I've yet to consume that may help us get closer to an answer to this:

https://news.ycombinator.com/item?id=22225170 - "You're better off using exceptions"

https://softwareengineering.stackexchange.com/questions/4050...

https://dannyvanheumen.nl/post/why-i-prefer-error-values-ove...


I find it surprising that so many people are arguing about the benefits and drawbacks of `?`, when in my experience the handling of Result and Option haven't been an issue in practice on the consuming side (`?`, `.unwrap()`, `.map()`, `.ok()`, if let, match, let chains, let else, etc. help a lot), but where all the pain comes from is having to declare the appropriate error type itself. Libraries like `anyhow` takes some of the pain away, but declaring an appropriate struct or particularly an enum in the right places, and the boilerplate for all the type conversions (From/Into impls) are where, during development, I have frustration. What I do then is either use Result<T, ()> or a single `struct Error(String);`, and go back once I have all the scaffolding in place and pry the implicit error tree back into the type system. Anonymous enums like typescript (`A | B | C`) could presumably help here.


I've run into similar issues and found that the `thiserror` crate (https://crates.io/crates/thiserror) combined w/ anyhow makes a lot of that pain go away


For those reading this that aren't super familiar, common Rust advice is "use thiserror for libraries and anyhow for applications," as they make slightly different tradeoffs and so are useful, especially together.


`anyhow` + `?` make writing an application as smooth as butter. You won't miss exceptions.

Don't use `anyhow` for libraries, though. You want to provide your consumers the ability to `match`.


here we go again!


I would add `snafu`(https://crates.io/crates/snafu) here as a good alternative to thiserror+anyhow.


All these workarounds for easier Result handling truly make me wonder whether Rust will eventually evolve exceptions as a feature - of course without explicitly terming them so.


The community would revolt. Not going to happen.

Proposals to make the existing syntax and semantics even look more like exceptions were met with lots of hostility.


Revolt how? Go back to C++ or whatever language/s they were using?


I don't know exactly, but it's never pretty when a project's leadership makes an unpopular decision. Lots of complaining, for sure.


Rust already has exceptions, they just call them panic and offer exceedingly poor language support for them.

Some people simply live in denial, believing untruths that confirm their worldview.

This is the same situation as with Go and generics. The maintainers were in denial for a very long time, before finally admitting what was obvious to some of us long before [1]

[1] https://news.ycombinator.com/item?id=8626978


The implementation, outcome, use-case, and characteristics of panics are all very different from exceptions. The only marked similarity between the two is that they both interrupt program execution (and in different ways). But I feel as if you already knew that.


Panics support stack traces, unwind the stack, release memory, and can either be recovered from in some situations or print an error message. Sounds like Python exceptions.


I write a lot of Rust. I think you can do a startup in Rust but you need to explicitly go against the natural inclinations of Rust. Rust is a great language partially because it cares about the details. It’ll do stuff like distinguish between Path and String because it treats the edge cases as important. In a startup that’s not really a priority. In fact focusing on edge cases and doing things the “right” way is completely not the point of writing code in a startup. Rust is also a great language to refactor, something that’s also not ideal for a startup to be spending cycles on.

Would I do a startup in rust? Maybe. It’d depend on the idea. But I’d take measures to avoid the natural orthodoxy of Rust.


I worked with a shop that wanted to use Rust for their shiny new MVP. And they did... and yes, we were not really good at training, nor could we prioritize/attract Rust-experienced devs. The lead Rust dev left due to personal reasons, and then we were left with a codebase nobody really had the knowledge/insight to support while rapidly iterating. We smiled and rewrote it in Node.js.

I think devs get burned when the MVP turns into forever code and somehow are not given room to refactor/rewrite once validated... or they are surrounded by devs who are used to the pain/bug cycle and they (or the business) will accept doing things haphazardly as a cost of doing business. Fred Brooks's second-system effect comes to mind.

Ah ha! Rust! Now we HAVE to write good code because rust has so many protections!!


Never combine new tech with new functionality. If you want to learn new tech, use it to rewrite an old project that was due anyway. If you want to build new functionality, use tech that you know.

This has nothing to do with Rust. I've seen the exact same thing happening with golang in a C++ only environment. Long project, took forever, failed slowly, took a week to rewrite in C++.


I can't fathom why a company would write code in a language that most of its developers aren't at least somewhat experienced in.

If you're writing a program in Rust, hire Rust devs or invest heavily in educating the devs you do have first.


So... I'm going to disagree: Rust (or any language!) is fine for prototyping. The trick is don't experiment when you need to rush a product out. Pick what you and your team are most comfortable in.

Avoiding allocations in Rust, on purpose, and being uncomfortable with how to solve the design challenges caused by hyper-optimizing your code... is not a Rust problem. Or an any-language problem. Rust gave you rope and you hung yourself with it.

If you're stubborn and you want to use Rust but aren't familiar with the ways to avoid this; Allocate. Use Arc, Rc, Clone, etc. It won't hurt, it won't be terribly slow, and it almost assuredly won't be slower than your prototype languages.

Some might reply "Well then why use Rust!?", to which I would reply: because I like it! I love Rust, but if I'm prototyping code I'm not going to write insanely abstract generics either. Why would I? I don't know the problem I'm solving yet, so how can I write truly generic abstractions to solve said problems? Performance is similar. I'll use lifetimes while prototyping in the simple cases, which is most of them to be honest, but beyond that don't hyper-optimize.

To summarize: Choose your favorite language at crunch time. Even if your favorite language gives you rope to hang yourself with you probably don't need to.


> Use Arc, Rc, Clone, etc. It won't hurt, it won't be terribly slow, and it almost assuredly won't be slower than your prototype languages.

I very much doubt that. It's likely true for straight up wasteful languages but very unlikely for more optimized runtimes.


> It's likely true for straight up wasteful languages but very unlikely for more optimized runtimes.

What are you comparing it to? I believe parent meant that even if you clone everywhere, your code is still more likely to be faster than Node/Python, and potentially Go/C#/Java.

One thing to note is that C++ code tends to allocate all over the place from copy assignment and constructors, and it's still very fast. Rust only forces you to be explicit when cloning, but memcpy is still a fast operation, unless you're cloning large structs.


> your code is still more likely to be faster than Node/Python, and potentially Go/C#/Java

Python very likely, Node probably too, not sure. I doubt it's true for the others. My guess is you're going to use less resources (memory etc.) but you're not going to beat their runtimes with hand-rolled GC (ref counting) and allocations all over the place.


Really depends on what you're doing.

The thing is if you know enough about why cloning is slow - like a really fat string or a tight loop - you probably also know enough to avoid that. At least in my experience.

There's a ton of rope in Rust with lifetimes, generics, etc. Most if not all generic bounds have implications about the user and caller. Choosing which type of methods you use to avoid allocation while prototyping can be exceptionally difficult to do and be correct.

Refcounting is not that bad imo, and is very very easy. The moment you're at "well we need to beat the performance of X" is probably when you can make better/intelligent decisions about how you allocate, whether to use Arena/Bump/etc.

Also, if you wanted to be in Go/C#/etc you just would. You're using Rust (in this discussion) because you wanted to. When performance is a concern, you can deal with it - but the basic allocation tricks (Arc/etc) don't represent such a significant performance loss that suddenly your Prototype no longer functions.


Right, I missed the broader point. When you choose to use Rust but need to advance quickly, you are going to be fine with some of these techniques.


I don't know, I haven't had a fight with the compiler in a long while now. On the other hand a $42 dedicated box benchmarked my project's REST API at 270,000 req/s. I don't even use a DB, just structs serialized as JSON to disk and a NAS once every few seconds. One pet IPv6-only server to manage + CloudFlare (Domain, DNS, Cache). Beats today's peak-complexity setups, hands down.


Rust is a systems language, not a business language.

If you're building an operating system or a software platform, go for it; the robustness will pay for itself in time saved fixing hard-to-find errors.

But if you're iterating fast to find out what the program should be about to begin with, use a prototype-friendly language instead. With garbage collection.


Well, I had some difficulties writing a tiny init program for a container because process and signal handling are currently in a pretty weird place in Rust.


I think you've captured the essence of the discussion on the usability of Rust really well.


In startups and projects where Rust is a premature optimization this makes sense. But, some startups and projects are all about the competitive advantage created by optimizing from day one. In these cases, choosing Rust and other early optimizations is the main enabler of a unique product.


We have a fairly complex app with a front end in Typescript and a back end in Rust backed by Postgres on AWS.

My favorite part of the job is coding in Rust and we do a lot of cool things in that backend code. Unfortunately, most often the Rust code is the fastest and easiest part of a change, which means that I spend most of my time solving problems either on the front end with Typescript or on CI and infra type things rather than the Rust part.

It's a bit sad: if something just works, you spend less time on it than on the hairier things.


Backend code is simpler because it has two limited, well-defined surfaces (the API for the frontend on one side and the database on the other). Frontend is harder because it has to interface with those impolite hairy meat creatures ...


Frontend code has two as well: the input methods (mouse, keyboard, screen) and the API surface.

> Backend code is simpler

I hear this every once in awhile and think it's mostly a front end happy hour misrepresentation that makes everyone feel good so it gets repeated. The service layer of an application is very often far more complicated than, or to be fair, at least as complicated as, the user interface. Front end devs just typically aren't good at chopping up their problem into nice interfaces and therefore struggle to test it reliably or make large broad changes efficiently. This is where the complexity comes in. That's not a stab at FE devs, it's just not a skill that often gets rewarded in FE work so it's not very prevalent, which I find sad.

The service layer has to deal with enforcing the correctness of business logic despite the infinite ways the meat monkeys can interact with it. It does this by defining clear boundaries on the outside and by ensuring the transactional correctness of logic on the inside. While front end folks have to figure out the correct UX to successfully communicate with a user, service layer folks have to figure out all the implications of a single action the user wishes to take and make sure it happens correctly. Data validation, data modeling, transactions, errors, queuing, retries, scaling, monitoring, etc. are all things that would probably make the average FE dev explode if thrust upon them.


>> Frontend code has two as well: the input methods (mouse, keyboard, screen) and the API surface.

If you compare these two surfaces, the sequence of mouse and keyboard events is very much unpredictable and the number of possible states you should think of is much larger. It's not by chance you spend more time with hairier things, it's because they are harder. In fact, it's a definition of "harder".


Yeah but your UI framework turns all that stuff into events so you’re not dealing with raw mouse data. I personally think the complexity is similar and have done FE for 10 years and more backend/security/networking side of things the last 5.


If you are writing high performance code use Rust. (Slow development times, high performance)

If you are writing a typical business application use Python. (Fast development times, low performance)

Or if you want to be clever do a hybrid of both. Create Rust modules for your Python code.

This is just about selecting the right tool for the right job.
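In practice the hybrid usually means exposing the hot path over the C ABI so Python can load it via ctypes or cffi (PyO3 is the more ergonomic route). A minimal sketch of the Rust side, assuming a made-up `byte_sum` hot-path function:

```rust
// Hypothetical hot-path function exposed over the C ABI. Built as a
// cdylib (`crate-type = ["cdylib"]`), Python can call it via ctypes/cffi.
#[no_mangle]
pub extern "C" fn byte_sum(data: *const u8, len: usize) -> u64 {
    // SAFETY: the caller (the Python side) must pass a valid pointer/length pair.
    let bytes = unsafe { std::slice::from_raw_parts(data, len) };
    bytes.iter().map(|&b| u64::from(b)).sum()
}
```

The Python side is then just glue, roughly `ctypes.CDLL("libhot.so").byte_sum(buf, len(buf))`, keeping the scripting language for iteration speed and the compiled code for the heavy lifting.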


I would rephrase it as follows,

If you are writing high performance code where a tracing GC isn't an option, and there are SDKs available, use Rust.

Otherwise an AOT compiled language is a better option. And if not, and there are only C and C++ SDKs available, also factor in the development cost of creating Rust wrappers before doing the actual development work.

If there is too much money being burned in wrapper libraries, maybe that isn't the best option as well.


I picked Crystal so I didn't have to choose.


That's in the C / C++ / Rust bucket due to the typing system.


So in his opinion, choosing Rust is a premature optimization?


Author here - yeah, that's how I feel about it, at least for startups specifically.


What if your startup is in the embedded systems space, for example? I don't think you'd be doing your MVP in Python.


That's fair, I was definitely being a bit too general. There's another comment in this thread that summarizes it better which is asking "what would I use if Rust didn't exist?" and I think that's a more clear line. All of my embedded work was in C/asm so Rust is actually a great choice there.


Why impose the "embedded systems" space requirement on the OP? The OP does not work in embedded systems space. So it is not relevant to this article. The OP is telling us what they would do, not what you should do and definitely not what embedded systems startups should do.


(Not your parent but how I read it.)

It's not imposing a space requirement. It's a reminder that these generalizations have limitations. The OP isn't in the embedded space, but they do say "a startup" not "a web startup." There are embedded startups.


... then however, how do you feel about tech debt with Rust? My feeling was that go, rust and such left a lighter burden on the future than say ruby.

Do you think you will need a major rewrite soon?


To me, the language agnostic answer to reducing tech debt is having a good test suite so refactoring is easier. We're pretty good on that front.

We have definitely done large refactors before, and I'm sure we'll have more in the future, but I don't think we need a major rewrite or anything like that.


As long as you know what you are doing Ruby won't leave you with more technical debt than Go or Rust.

How many people know how to effectively program in languages like Ruby is another question altogether...


Really, unless one needs deployment scenarios where any kind of automatic memory management is not an option, there are several compiled languages with Rust like type systems and much better workflows.

Go pick OCaml, Haskell, Scala or Kotlin with GraalVM or OpenJ9, F# with NativeAOT, Swift, Nim, D, whatever.


This is something I hear a lot from other founders.

Spinup time of new engineers is like 6mo+, some devs who can churn out normal CRUD product work just fine in Rails or React ~never become productive with Rust, and hiring skilled Rust devs is just crazy hard (though maybe now that Blockchain/Solidity things have cooled there may be more supply).


> and hiring skilled Rust devs is just crazy hard (though maybe now that Blockchain/Solidity things have cooled there may be more supply).

Sidenote, we hire Rust devs at a small shop (~30 devs?). Ironically I've found it _easier_ to hire for Rust. You're totally not wrong, BUT the quality of the candidates that apply is quite high in our experience. I suspect it's because we get a lot of passionate people. We don't have to weed out as many candidates.

With that said we don't aim for super senior devs. We're happy to hire a junior, etc. I care much more about the quality of the person than raw experience.

With that said traditional hiring avenues have not been fruitful for us. Word of mouth, Rust community job posting, etc have been most fruitful by far. Probably due to exactly what you said.


> With that said we don't aim for super senior devs. We're happy to hire a junior, etc. I care much more about the quality of the person than raw experience.

Being young and cheap is a good quality, I suppose. Experience is overrated.


Well, hiring only super experienced people in a niche talent pool feels arbitrarily difficult. Young devs can be just as good, and we all need to build experience somehow.

One rule of thumb for me is that for every junior developer we hire, we also need to hire someone equally senior. I.e. we don't want a huge proportion of developers lacking experience.

However if you have enough senior developers to mentor the young ones? Seems a net win for all involved, to me at least.


Let's be happy when someone says they're hiring juniors.

The poster sounds like someone who invests in people and doesn't rule them out based on years of experience on their resume. From all of the "where are the seniors?" threads I've seen, the industry could use more of that.


> Being young and cheap is a good quality, I suppose. Experience is overrated

It's "overrated" until it isn't. And then you make very, very expensive mistakes. Unfortunately it takes experience to learn why this viewpoint is nothing more than hubris.


N = 1, but at startup scale we have definitely not had a hard time hiring skilled Rust devs. Just go on /r/rust and advertise your (non-crypto) job and get a lot of inbound.

(We were specifically looking for remote folks near EU time zones, maybe local in SF is harder?)


6mo+ is spot on. This is true for most dev work that deviates from the CRUD path. Like gaming, simulation, robotics. We might as well use the time to train them in Rust too.


The reason why people would like to pick Rust is because of its ergonomic features like sum types, streams and of course the toolchain.

But here is a claim: Most business-level programmers are not ready for dealing with the borrowing and ownership concept. They don't want to care about reference vs. value types. They can't do memory management efficiently, because most of them have never used a language without GC.

With Rust you would need to care more about memory which is not necessary for most use cases in startups.
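For concreteness, the sum types mentioned above are plain std Rust; a small sketch (the `PaymentState` enum is invented for illustration):

```rust
// A made-up state machine: sum types make every state explicit.
enum PaymentState {
    Pending,
    Settled { amount_cents: u64 },
    Failed(String),
}

fn describe(state: &PaymentState) -> String {
    // `match` must be exhaustive: adding a variant later is a compile
    // error at every match site until the new case is handled.
    match state {
        PaymentState::Pending => "pending".to_string(),
        PaymentState::Settled { amount_cents } => format!("settled: {} cents", amount_cents),
        PaymentState::Failed(reason) => format!("failed: {}", reason),
    }
}
```

That exhaustiveness check is the ergonomic win; the ownership rules discussed above are the separate cost that comes bundled with it.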


I find it mystifying that one of the reasons you can supposedly pass over Rust, and avoid performance concerns, is to "take the free money." Literally.

That is, the free money, in credits, that allows you to not worry about performance for a while.

The obvious problem being... "What happens when the free money runs out?"

It's not addressed, but the implied answer seems to be "Well, by then it'll be six months later, and..."

And? And what? It's still a problem! It didn't go away.

The money deflected the perf concerns for some time, maybe a year - but then it's back, bigger and badder than ever.

Rust solves that by not requiring that outlay ever, by being more efficient.

Seems like a very poor reason to not choose Rust, to me.


I am a Rust fanboy, but using it to start a startup? Hmm, unless it's for a specific use case, no. Why? Because the biggest bill you will have to pay at first in a startup, and for a long time, won't be AWS but all the dev salaries. And compared to what you will pay AWS, that's a lot of money. So a gain in performance won't matter as much to your success (and survivability) as the speed of developing new features every week.

It depends what you do, but it's usually the case that performance won't matter until mid-to-late game. I saw a startup at a +300M valuation still not having to worry about performance for a long time. And a cut in the AWS bill thanks to using a more performant language like Rust won't make that much of a difference compared to how much they had to pay devs. So you just want a language where you can ship features fast.

Also the hiring pool is still very small.


At that point the startup has revenue so it can pay, or it is dead so it doesn't matter.


People who are really interested in rust tend to be top-tier developers. I don't think they're consciously lying about their experiences working with the language but they may not hit the speed bumps that normal people would. My personal abilities make me competent in golang, ruby, python, java, c++. I love the quasi-functional styling of rust but whenever I've tried to build small projects in it I've gotten bogged down in fighting with the compiler in ways I never do in the former. It is fast as all get out tho!


Die hard Rust fans often minimize the very real developer difficulty incurred by their language of choice. Even major library maintainers in Rust have criticisms of various language features because of their difficulty to use. These are real and substantive concerns that would affect any development team not made of expert Rustaceans. Just look at basic dynamic programming implementations in a normal language versus Rust for, say, popular LeetCode questions and you'll see the difference in basic developer productivity.


My biggest pain point with Rust (in a startup context) is that Rust works really well, until you get to anything related to threading or async.

Yes, the claim is "fearless concurrency" but you'll still deadlock mutexes, and once you're heavily into async you need to start using language constructs that feel REALLY awkward, like pinning, runtime checks like RefCell, and so on.

IMO if Rust could make that whole aspect of the language more elegant, it'd be much easier to scale up to a larger org.


I honestly think Swift nailed it. Swift's async/await is a pleasure to work with and has the required language/runtime/stdlib support to feel natural and empowering instead of ridiculous and suffocating.


For pattern matching and expressiveness in a backend language that lets you iterate quickly, I'd highly recommend Elixir.


I would reach for Elixir, Phoenix, and LiveView for any new web development. We were using Java before and were running into major productivity issues. After extensive testing with Elixir, Node, Clojure, Go, and a handful of other languages, Elixir won out, we rewrote our entire stack, and our productivity skyrocketed. I can’t say enough good about it.


Rust is weird. It has high level features that make it superior to high level languages like Go and Python. But the low level features like default move semantics inevitably make it harder.

IMO there is merit in making a language that is equivalent to garbage collected rust by default with ownership rules similar to python. Then the classic rust based ownership and allocation schemes are all opt-in syntax-wise in the same way Box is opt-in.


How much more performance do you need to get from Rust over Python (even Cython, PyPy, Numba, etc) to justify the extra development cost?

A 2x gain is certainly not worth it. A 10x gain? Maybe. But that is hard to achieve when much of your "compute" is spent on the DB side of things. How many startups actually scale out of needing a few non-db instances?


Rust easily gets 10x improvement in performance over Python in a lot of applications. This is absolutely my experience.

A better statement is that it doesn’t get 10x improvement over Go, or other ergonomic compiled languages.


Also in the context of cloud spend: way way way less RAM, which can translate directly to dollars.


Yeah, I just wrote a simple daemon in C that had no performance requirements ( listening to a udp socket and dumping stuff into a pgsql db once a minute) because the Python program would use like at least 20x the RAM. When you're running a bunch of things on a resource constrained place (e.g. a single computer that has to do a ton of things sitting on a rack on the Greenland ice sheet), even just the base Python memory usage from a new process adds up...


My team helps run and deploy a python service that is entirely CPU bound. It accepts an input, performs some computation, and returns a result without any sort of I/O outside of the initiating HTTP request. In the past week it's averaged around 144 req/s with a p95 latency of ~1s.

We average ~80 "instances" to maintain this level of performance. I have very little doubt that, if given the opportunity to rewrite this in rust, we could smash 10x perf improvements. Could we also get more perf out of tuning our python code better? Definitely. Do I think there's 10-20x improvement waiting to be uncovered? No.

Unfortunately (fortunately?) we're at a stage that it makes more sense to throw ludicrous sums of money at it than it does to ground up rewrite.


It will be 25x gain on average and your development costs will triple. Is that worth it? Depends on what you are building....


> Perf is easy when you have AWS credits.

I get that the point of this article is that launching > sustaining.

But I have a horror story about AWS credits for startups: a friend's startup got their account suspended for a couple of days when their credits ran out. They started getting billed 5 digits (as they expected), and Amazon's fraud detection flagged this as an anomalous billing pattern and suspended them! For transitioning from AWS credits to billing!?

Considering all their operations were running in AWS, and they were providing a HW-critical service to their customers, it was bad.

This was late last year. This was despite getting reassurances from their AWS rep that the transition would be smooth.


If those reassurances are in unmistakable writing, a letter from the startup's corporate attorney may yield a quick refund and compensation.


Sure, but in the life of a startup that's just one extra thing they'd rather not have to deal with, and the damage isn't just financial.


We love using rust on the backend at https://mayhem4api.forallsecure.com/ and in our CLI. If I were the decider, I'd choose it again. Rust is a hurdle to learn, but the confidence you gain from the type system is fantastic. I've worked in other projects with different, looser-type'd languages (no names! no flames!) and despite good testing coverage, the confidence on release to prod is not as high.


Reading this makes me reaffirm my decision to build my startup in elixir. At the time, it came down to 3 compelling choices

1. typescript - I already had 6 years of full stack js experience. The ecosystem is full of issues but it's all issues I'm used to.

2. rust - new kid on the block. The type system and speed were compelling but it was still being developed and the learning curve was brutal. Plus I was under the gun to get the MVP up. We needed to get a working piece of software up and test out business assumptions.

In the end I went with elixir. Realtime sync was a huge killer feature we were aiming for and phoenix came with the best out of the box support for it. Overall productivity was on par with javascript while the functional aspects made certain types of bugs non issues in elixir. runtime performance has been more that adequate. after 3 years in production, we have only recently rolled out rate limiting and caching and only as a precaution as we've been expanding quickly.

Echoing the author's sentiments, I can definitely see places where rust could be better and thanks to tools like rustler, we'll be able to bring those in piecemeal as needed.

I'm sure our product would be even faster and more efficient on resources if we did it in rust but I'm pretty sure we would have more likely run out of runway before that happened.


It's great when wanting to play with cool new technology combines with implementing a viable product to sell to customers. But those two things don't necessarily go together. Quite often they are at odds with each other, and then you have to pick one or the other: either we spend resources playing with cool technology, or deliver a product customers will buy. Neither is wrong if it's your own resources, it's just important to understand that there is a trade-off involved.


Exactly. "Rust wasn't the right choice because we spent too much time and resources playing with it." is not an argument against Rust. It's an argument against learning a new technology while looking for PMF. There's nothing inherent about Rust that makes it a poor choice to build a first iteration of a product with.


I don't agree. I think the author is conflating two things:

1. learning Rust, and 2. using Rust.

If you take away "Rust made us slow because our team had to learn how to use it and thus we had slow iterations and it's harder to find hires with Rust knowledge" from the equation, then you aren't left with much argument against using Rust early.

The iteration time issue with Rust is solved by experience. We use Rust and our iteration times are average. What we gain is not performance, that's not a reason we use Rust (for an early stage startup, totally agree you burn credits until you can afford to care). We gain correctness. And at an early stage, correctness without paying for a massive QA team is a huge boon.

There are definitely more mature tools for quickly standing up CRUD APIs. If you want a framework that can bootstrap you into an OpenAPI with docgen, swagger, all the bells and whistles, Rust doesn't do that. But Rust will help force you to write correct code that never crashes and handles edge cases it's easy to forget about when moving fast in a duck typed language.
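A small illustration of that forcing function, using only std (the `parse_port` helper is hypothetical): failure is part of the type signature, so the bad-input case can't be quietly forgotten the way it can in a duck typed language.

```rust
// Failure is part of the signature: callers must handle the Err case
// (or propagate it with `?`) before they can use the port number.
fn parse_port(raw: &str) -> Result<u16, std::num::ParseIntError> {
    let port: u16 = raw.trim().parse()?; // bad input becomes Err, not a crash
    Ok(port)
}
```

The caller then has to match on the `Result` (or bubble it up), which is the "correctness without a massive QA team" trade described above.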

The only language we seriously considered over Rust was Swift. But Swift's just wasn't quite there yet. It might be today. If I was starting something from ground zero today, I'd probably lean towards Swift and need to be argued down back to Rust or Python.


The author touches on why it matters in the article - either you have to restrict hiring to folks who are already rust experts (much smaller hiring pool, also usually meaning higher comp expectations) - or you have to consider the cost of training new/existing staff on Rust. Rust has a notoriously difficult learning curve, especially to folks who don't have a background in C/C++. As the author mentions, you may be looking at 6+ month ramp up time until new hires can comfortably write non-throwaway tech-debt free production code. For many startups looking to iterate quickly, that's just too slow.

Conversely if you are looking to hire Java folks, you'll have an enormous pool to pick from, or if you need to train somebody in eg Go - you can do that significantly quicker than you could with Rust.


I was putting that argument aside because it's an argument you can make for any language, and it changes depending on who you know, what circles you're in, etc. It's an externality, not specific to Rust. You can say the same for Erlang, or Swift, or C#, or ...insert language that isn't JavaScript.

What I'm saying is that, disregarding the externalities, I've found Rust to be quite a boon and not inherently a bad fit for an early stage. YMMV.


You can write your REST API with Rust using axum and it can generate your OpenAPI docs. You can even generate a typescript client.


> We gain correctness. And at an early stage, correctness without paying for a massive QA team is a huge boon.

Compared to what? Java, C# and Go?


Well, Python code will on average be more correct than Rust code.

I'm not sure why you feel Rust code would be more correct than Python code but it certainly isn't true.


I strongly believe that if I were to code anything significant, my Rust code would be more correct.

The reasons are types and Rust's multi-threading guarantees, which become even more helpful when doing refactorings.


It won't, because the #1 factor in bugs is the number of lines. The Python code will be significantly shorter and thus contain fewer bugs.


That feels a bit overly reductive. I have a hard time believing GolfScript and APL are among the least error prone languages, for example.


Leaving joke languages aside, it is very much true. If you use fewer lines of code to implement a software feature, the feature will on average have fewer bugs.

Programmers write bugs per line, so something like Python, which only uses 33% of the lines, contains only around 33% of the bugs. On the other hand, if you go down the prove-things-correct-at-compile-time route you will eliminate around 5% of the bugs.

So you can choose between eliminating 67% of the bugs by making the language simpler to use, or 5% of the bugs by doing formal checking.


Has someone done this analysis with Rust? I highly doubt it holds for Rust programs. Seriously.


Just finished my learning Rust project (very basic version of minecraft). It's great for writing highly parallel high performance code.

At no point was the memory of my program ever replaced with the number 53 nor did memory become corrupt due to multiple threads accessing the same areas at the same time. Much better than C++.

Did my Rust project suddenly contain no errors or did it never crash? Well no. How did it crash?

Array indexing (called default instead of new which created an invalid version of the struct)

Infinite loop (logic error causing it to add the result each loop to the input vector rather than the output vector).

That said Rust is almost certainly going to kill C++. It's a lot better.
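On the array indexing crash above: `v[i]` panics on a bad index (safely, unlike C++'s undefined behavior), while `get` returns an `Option` so the miss must be handled explicitly. A tiny std-only sketch:

```rust
// `slice[i]` panics on out-of-bounds; `get(i)` returns None instead,
// turning the would-be crash into an explicitly handled case.
fn lookup(values: &[i32], i: usize) -> Option<i32> {
    values.get(i).copied()
}
```

So even the remaining crash class has an opt-in safe alternative; the language just doesn't force it on you the way it forces ownership.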


Nobody claims that Rust code contains no errors, just fewer.

Rust removes certain types of errors that can happen. Logic errors don't belong to the types of errors Rust 'removes'.


When you are using a duck typed language, the overwhelming majority 90%+ of errors are logic errors.


By definition this is true. Duck typed languages don't care whether your program is expected to work at compile time. They just run it and hope it does, so it's all "logic errors".

I've experienced an array indexing issue in Rust exactly once and when it happened the stack trace told me exactly where and then I did exactly what you did: changed default to new or something and called append instead of my_vec[N] = foo. This issue was caught in my test btw.

That leaves the infinite loop, which is a problem in any language if you aren't intending to loop infinitely. So it's simply not something Rust fixes. I must say the number of times I've had a program infinitely loop, and have it be a problem that made it through a basic test of the logic, is minuscule compared to the number of times I've had a program just shit the bed with poorly managed pointers or corrupt memory.

So you're basically proving our point. Rust eliminated all your bugs except an infinite loop and an array index out of bounds. I bet your Minecraft clone is running flawlessly and you have Rust to thank.

What Rust has shown me is that programmers by far make more unforced semantic errors (like not properly managing a pointer or accessing memory without proper synchronization) than they do logic errors. When you remove all the silly BS errors, you're left with surprisingly few problems to solve, and you can almost surely blame yourself and find and fix the issue when one does crop up. It's way different. If you haven't, try writing Minecraft in Java.


> Programmers write bugs per line

Source?

Hack more code into one line -> less bugs for the overall program? doubt

Use Typescript instead of JS (thus add a few loc) -> more bugs? certainly doubt


That's really a question for Google or Stack Overflow.

One source would be: https://www.oreilly.com/library/view/code-complete-2nd/07356...


I think the metric you're looking for is bugs per line, not raw total bugs. "Bigger programs have more bugs" is trivially true if all programs have the same rate of bugs per line.


Curious: have you written Rust in a serious capacity before?


Yes, many times "typing supremacists" and pedantic folks will recommend this course of action but it is not a great idea much of the time as the piece illustrates. Getting things perfect on a code-level up front is rarely what a new project needs.

A more effective strategy for business is to instead prototype in something like Python—that's what it's for. This was known back in the 90s, and been somewhat forgotten. Django too. Like a flexible clay to rapidly sculpt to a first approximation. Then:

1) Get to product market fit, keep iterating until you do. Do not go to step 2 until that happens.

2) Get the fundamental data models right, get your fundamental software design right. Keep iterating until you do. Do not go to step 3 until that happens.

This stuff is easier in Python as it gets out of your way. Yes, use pyflakes, a few tests, and a code formatter to keep you honest. But not much more than that. PyCharm, for example, if you need a helping hand.

3) When steps 1 and 2 are looking good, then rebuild the foundation of your gleaming skyscraper with the steel girders of Rust, Java, and/or other BDSM languages, now with an already good product and design.

Step three may not even be needed if you have a CRUDdy project. Complete the typing at that time.


That has a catch 22. If you aren't going to start with Rust from the beginning, switching to it later becomes too costly and too difficult, which defeats the argument of it being useful in general. Most often what's used in the beginning as "prototyping" is cemented into the system to the point that it's hard to change it.

So yeah, better to deal with complexities in the beginning and save on switching later, than not to use it all.


Perhaps not the case here, but there are a couple of things I would add to this sentiment.

First, if you love Rust uniquely, I'd argue it might still make sense to use Rust to build your first product iterations. If your initial team commonly loves Rust, and can't agree on "love" for another common language, perhaps Rust is the best language. Burnout will happen exponentially faster if developers are lamenting the language (n.b. Rust is commonly lamented). I think there are enough language/platform options these days for this to be an unlikely scenario, but this is to say don't discount your passion for a language because it doesn't iterate fast enough.

Second, if the problem your startup is trying to solve is solidly in the performance and security realm, it makes sense to start in Rust. If your pitch is something like "pandas but fast and memory efficient" it also makes sense to start your project in something like Rust.


Maybe nowadays we should all use glue-script + compiled-ffi to iterate fast while keeping performance under control?

e.g python+cffi, or python+pyo3(for rust), or even lua+capi?

Do we really need to code everything in a compiled language these days? The cold path can be dealt with by scripting languages, and C/C++/Rust/etc. can handle the performance-critical path instead.


Nowadays?

That is how AOLServer used to be, and all the other scripting languages developed as Apache plugins, back in the 2000's .com wave, like mod_perl and PHP.


Maybe scripting languages were overused down the road? I.e. used for everything, and used like a compiled language (Ruby on Rails, PHP frameworks, Django, etc.), which made things slow?

The point here is to restrict scripting languages to glue logic for the most part, and to always remember to use FFI for the heavy lifting; not sure how to balance both yet.


That is why PHP eventually got a JIT, initially thanks to Facebook experiments compiling to C++, and later the JIT proving being capable to generate similar performance.

The problem was exactly that overuse, without JIT/AOT in the box, and with many people shying away from writing native extensions, instead adding more boxes.

The difference between doing JavaScript in node with native extensions in 2023, and Perl/TCL with native extensions in 2000, is exactly that, a JIT.


The main reason why server-side stuff is slow is poor use of the database. Doing e.g. nested loops in a compiled language uses way less CPU than a scripting language, but it should be done in the DB in the first place.


> We have extractors for different user requirements to make adding APIs very straightforward. We have middleware for scoping requests per customer. In most languages, this is pretty standard, but in Rust, for our use case, they both require at least a rough understanding of pinning.

This is more about "async Rust" and the way the most common web frameworks for Rust utilize it. It should be totally possible to write much easier to understand frameworks and code, potentially with the limitation of using a classical thread-per-request architecture, which avoids most of the async/lifetime/pinning pitfalls. The main drawback is that such frameworks are out of favor in Rust and thereby not available or not very well maintained.


Since safety was the first reason given for using rust, I'll just point out:

There are other safe languages.

I think it's a really useful thing to have from day one, but it doesn't particularly point you to rust.

Also, performance is really about learning what your bottle-necks are, profiling them and optimizing them. You probably have no idea what those are when you start, so it's not really the right time to try to solve it. (There's a decent chance, e.g., that your inner loops won't even be in code you write, like the database.) Probably the best thing you can do for long-term performance up-front is to try to stay flexible, and try to keep your architecture simple.


Rust does a bit more on the safety front than typical programming languages.


Really? I guess if your typical programming languages are C and C++.

Otherwise Rust just has semantics that allow more control over memory, as is often needed in lower-level programs, while preventing mutable pointer aliasing. The majority of languages in existence are memory safe--some even more so than Rust. They're just not as flexible.


It's much better than Java, Kotlin and C#.

The borrow checker detects the majority (~95%) of concurrency problems. We don't have that many single core CPUs lying around anymore.

It's got a story on high performance, high concurrency programs which is significantly better than anything else I've seen so far.


I'll give you that.

The reason I questioned it is because in my experience with those languages the 95% problem is not the actual data consistency, rather it's the locking and synchronization hell that results from needing to make your program threadsafe to ensure data consistency. Rust says: don't get yourself into situations where you need to do that in the first place, it's not safe. Just clone the data or leak it read-only or Cow it.

What Rust does is great, it sets you up so you're never sharing references across threads unless you try really really hard. And that's the source of needing manual synchronization the majority of the time. However, when you do need locks, Rust doesn't do anything to help. In other words, if you copied a Java program to Rust with object instance pointers all over the place, I bet it would feel just as bad in Rust.

So I tend to think of that more as "thread safety" than "memory safety". But we might just be arguing semantics at this point. I agree Rust is far more of a pleasure to work in than Java and C#.


It has been a very long time since I’ve used Java. Rust will tell you where you need the locks, at compile time. Does Java? Serious question.


Not since I’ve used it either. I may be missing something since I’ve only used async Rust, but in what way does Rust say “you need a lock here”? If it does that then I stand corrected and I may just have to drop async Rust altogether and check out crossbeam + rayon that everyone raves about.


Rust has two traits, Send and Sync. Send means "this can be transferred to another thread," and Sync means "this can be accessed via a reference in another thread."

Here's some (contrived!) example code (for one thing I'm using thread::scope because I don't want to deal with joining the threads):

    use std::thread;
    use std::rc::Rc;
    
    fn main() {
        let v = Rc::new(vec![1, 2, 3]);
        
        thread::scope(|s| {
            s.spawn(|| {
                do_work(v.clone());
            });
            
            s.spawn(|| {
                do_work(v.clone());
            });
        });
    }
    
    fn do_work(v: Rc<Vec<i32>>) {
        unimplemented!()
    }
This gives:

    error[E0277]: `Rc<Vec<i32>>` cannot be shared between threads safely
      --> src/main.rs:8:17
       |
    8  |           s.spawn(|| {
       |  ___________-----_^
       | |           |
       | |           required by a bound introduced by this call
    9  | |             do_work(v.clone());
    10 | |         });
       | |_________^ `Rc<Vec<i32>>` cannot be shared between threads safely
       |
Rc is not thread-safe. We try to send it into some threads. It doesn't work. Switching to Arc, which uses atomic reference counts and is therefore thread-safe, fixes it. The same principle would apply with Mutex if we were trying to modify the vector: Rust will yell at us.

One really really nice thing about this is that it'll check no matter how far "down" into the details the thread unsafety is. There's a story Niko told in a presentation of his about how he was doing some refactoring and added a type that wasn't thread-safe, like, four or five layers down from where the threading happened. rustc caught it immediately, and therefore, it was obvious. It would have been a heisenbug in other languages.

Async Rust also uses Send/Sync; for example, tokio::spawn requires a Send bound, just like spawning a thread does. I do know there are some tricky deadlock cases there, if I recall? But deadlocking isn't what I'm talking about; no aspect of Rust statically prevents those.


I understand Send and Sync. I see what you’re saying. Though note, even if you pass around Arcs, the inner value still has to be in a Mutex or RwLock if you want to mutate it. But I do see how Rust makes this more structured. Honestly with async it’s usually enough to just make sure your types are Send and Sync and clone them, so that’s really the extent of what I normally have to deal with.

Re deadlocking: With async runtimes, since you have a fixed threadpool, if you use the normal locks from the stdlib you can deadlock, or more accurately stall your program, because all available executor threads are blocked waiting on a lock. If the executor is starved, the task that would unlock the stalled threads never gets scheduled. It’s a problem unique to the task-executor paradigm (the thread-per-task version of the program would be logically correct and never deadlock, as would a version that used yielding locks). Not sure if that’s exactly what you’re talking about, but it’s a part of the language/experience I think could use some work. Would be nice if the structure that exists around data races could also exist around blocking vs yielding calls from async tasks.


There’s Javascript/Typescript.


Question for HN, all things being equal (you are not more familiar with one language/framework) what language would you choose to build a startup in?


Python or Go or a mix of both. I have seen new devs with no experience in either get up to speed quickly in both. For a startup, velocity is critical.


Java or .NET platforms, hardly anything else comes close in languages, tooling and libraries.


Agreed, those are the ecosystems that "just work".


This is the correct answer. Not the cool answer.


Making a business is not about being cool, unless we are talking about fashion industry.


That's not a good question. There is no reason to pick one language over another if you don't even know what software you will write.


The language you're most proficient in that has a somewhat decent community.


I'm not sure all things are ever equal, but I would choose Rust.


The one appropriate for what you're trying to build. It may mean multiple languages.

Not everything is a CRUD mobile app trying to be the next tinder for cats.


Edit: Had just woken up and didn't notice "you are not more familiar with one or the other". Hm, will have to think on that one.

Haskell. I know it best and know at least 10 people I could hire that also know it.

I can be as safe or as unsafe as needed where it's needed and get good performance without thinking about it most of the time.


Java gives you so much out of the box it's hard to ignore it.


Go. Swift if it was Apple ecosystem. Rust only for very specific tooling as required.


Kotlin/C# backend. React (TS) frontend


Java or .NET


typescript


I remember reading about PropelAuth somewhere and thinking that Rust might slow down development -- something I wanted to be proven wrong about since I've been learning Rust off and on, and like some things about it. It seems it's ending up a mixed bag, and the negatives in the bag are still light enough that you're carrying it forward. Thank you for this blog post!


You can also get match in any variant of Erlang or ML, and those don't force you to jump through hoops for borrow checking.


Also Kotlin: `when` or `with...when` for more complex matches


"I find myself missing match in pretty much every other language I go to."

Python has match, is this the same thing?

https://blog.teclado.com/python-match-case/


In my experience with Palantir "alums," none of the crotchety perfectionists I knew ended up working with Rust after leaving. I guess AI (as the author of this post is affectionately known) beat the trend by starting his own company!


For apps where performance is key, I found you can get by using Go for an MVP. Apart from the spikes in CPU usage because of the garbage collector, Go is fast and allows you to iterate quickly.


Well, we are doing just that, building a new venture with Rust because we think it is transformative in our field. Will post our findings in a year or so.


> Perf is easy when you have AWS credits. One reason that you might pick Rust is for overall performance.

Interesting: Rust saves money, but that means effort!


Even on the cloud, Rust will only save you money if you have enough users. But the effort is upfront.

Unfortunately, the cloud isn't a very good environment for mixed-language deployments (unless you stick to the most basic services), so you have to make a decision at the very beginning and stay with it.


Crazy to think that cloud credits, in addition to distorting the hosting competition, might also distort the language choice competition…


The first hit of cloud is free but once they have you hooked they charge an arm and a leg afterwards.


Performance of the language is almost never a big concern, but it’s so interesting to a technical person.


I don't understand the rust fandom. You probably just don't need it.


"Building a startup in Rust".

I know the hype around Rust, but this is really exaggerated: you build a startup to create a product or provide a service, not to have something written in Rust. Your customers should care about what you are offering much more than about which language you use.


The doc about pinning seems really good. But I don’t understand what about it is necessary for something like middleware.

Glad Rust is working for others, and I find it interesting to read about. but I don’t know if I could or would ever use it myself.



