I tried Elixir and enjoyed it. I don't have strong feelings either way on static vs dynamic typing; I think the current craze for static typing owes a lot to people's experience with JavaScript vs TypeScript. With Elixir I did of course hit runtime type errors, but they were trivial ones that show up on the first run. I don't think I had any hard-to-find bugs, or bugs that surfaced later, that ML-style typing would have caught. I'd absolutely pick it over Rust for web backends, but I do very little of that; I'm mostly writing CLI utilities on Unix.

I've been experimenting with Common Lisp lately and it's a lot of fun: as fast to prototype with as Python (when it has the libraries you need), performance around the level of Java or Go, a great development environment if you use Emacs, and instant startup time for CLI utilities, unlike BEAM or the JVM. If it were easier to build and cross-compile fully statically linked binaries, and if it got something like maturin for Python or Rustler for Elixir to give easier access to a bigger ecosystem when needed, it would be great for my uses, but at the moment I can't use it for much.
In my case, it's because I've been burned by Python's maintainability issues in ways MyPy isn't sufficient to fix.
It's not just Rust's type system, but:
1. Unlike with Haskell, the Rust compatibility promise has set the tone for how the ecosystem approaches API breakage.
2. Go-like "statically link everything into a single binary" compilation means that "just keep using the old build until things are fixed" is a valid answer to "an upgrade broke the build process".
3. Rust's type system enables design patterns like the typestate pattern, which encodes as many invariants as possible in the type system so they can be proven at compile time (a sketch follows after this list).
4. Rust's design prioritizes removing the need for global reasoning about program behaviour.
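To make point 3 concrete, here's a minimal sketch of the typestate pattern. The Door/Open/Closed names are purely illustrative, not from any real API: the state lives in a zero-sized type parameter, so calling a method in the wrong state is a compile error rather than a runtime check.

```rust
use std::marker::PhantomData;

// States are zero-sized marker types, so Door<Open> and Door<Closed>
// are distinct types as far as the compiler is concerned.
struct Open;
struct Closed;

struct Door<State> {
    _state: PhantomData<State>,
}

impl Door<Closed> {
    fn new() -> Self {
        Door { _state: PhantomData }
    }
    // Consumes the closed door and hands back an open one.
    fn open(self) -> Door<Open> {
        Door { _state: PhantomData }
    }
}

impl Door<Open> {
    fn close(self) -> Door<Closed> {
        Door { _state: PhantomData }
    }
    // Only exists on Door<Open>, so "walk through a closed door"
    // is unrepresentable.
    fn walk_through(&self) {
        println!("walked through");
    }
}

fn main() {
    let door = Door::new().open();
    door.walk_through();
    let _closed = door.close();
    // _closed.walk_through(); // compile error: no such method on Door<Closed>
}
```

Because `open` and `close` take `self` by value, stale handles to the old state can't even be kept around; the invariant is enforced without a single runtime assertion.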
Same reason I recently spent $10 on a used copy of the O'Reilly lex & yacc book to learn the warts of LR parsing. I plan to write my parsers with something like LALRPOP or grmtools instead of nom (parser combinators) or pest (PEG parsing), which are currently more popular in the Rust world. (As with borrow-checking errors, shift/reduce and reduce/reduce conflicts aren't the bug, they're the feature: I read an article about how LR parsing catches the most grammar ambiguities at grammar-build time. Cry in the dojo, laugh on the battlefield.)
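As a purely illustrative example of what that buys you (this grammar isn't from any real project), here's roughly what an ambiguous rule looks like in LALRPOP's grammar DSL. Because "-" is written as both left- and right-recursive, "1 - 2 - 3" has two parse trees, and the generator refuses to build the parser and reports the conflict when the grammar is processed (typically from build.rs via lalrpop::process_root()), whereas a PEG would silently resolve the ambiguity by ordered choice.

```
// expr.lalrpop (illustrative): deliberately ambiguous subtraction grammar.
grammar;

pub Expr: i32 = {
    // Expr on both sides of "-" makes the grammar ambiguous; LALRPOP
    // flags the shift/reduce conflict at build time, before any input
    // is ever parsed.
    <l:Expr> "-" <r:Expr> => l - r,
    Num,
};

Num: i32 = {
    <s:r"[0-9]+"> => s.parse::<i32>().unwrap(),
};
```

Rewriting the rule as `<l:Expr> "-" <r:Num>` (left-associative) makes the conflict, and the ambiguity, go away.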
I'd rather pay up-front to avoid the stress of having a Sword of Damocles hanging over my head. Getting to the point where I feel satisfied with Rust takes FAR less time than either burning out trying to replicate its type system in unit tests or playing bug whack-a-mole over the lifetime of the project.