I would love to use Rust more, and have enjoyed working with it on some toy projects, but the last time I looked there wasn't a super solid general purpose math library. Is there anything out there for Rust that is of similar scope to C++'s Eigen or Python's NumPy+SciPy? What about SSE/AVX intrinsics from Rust?
I really wanted to use Rust for numerical/scientific computing tasks, but it's kind of miserable for them. I didn't get hung up on the ownership things that all of the Rust zealots talk about (although I think explicit lifetimes are needlessly complicated). I got hung up trying to implement simple things like complex numbers and matrices in a way that was generic and usable. I'm sure some Rust fanboy will argue that Rust has operator overloading through traits, so I'll challenge anyone to make a workable implementation such that `z*A` and `2.0*B` work in the following generic code:
let z = Complex::<f64>::new(1.0, 2.0);
let A = Matrix::<f64>::ident(10, 10);
let B: Matrix<Complex<f64>> = z * A;
let C: Matrix<Complex<f64>> = 2.0 * B;
If Rust can't do scalar multiplication or real-to-complex conversions, it's really not usable for what Eigen or NumPy can do. Try defining the Mul trait generically for those multiplication operators, and you'll see what I mean.
(yes, I know the turbofish `::<f64>` is needed in the declarations above for them to even parse - it's my opinion Rust got that wrong too...)
Supposedly this kind of thing will eventually be possible with the recent "specialization" changes, but I haven't seen anything that allows operators to work as above...
ps: Last I looked, there was fledgling support for SIMD on the horizon... LLVM supports that, so it could happen.
This is essentially a complaint that Rust went with strongly-typed generics like every statically-typed, non-C++, non-D language instead of untyped templates like C++ and D. I think that the difficulty of reasoning about template expansion type errors, the complexity of SFINAE, and the consequences for name resolution like ADL make templates not worth the added expressive power for Rust, especially when the features needed to support your use case can be added in the strongly-typed setting eventually.
I don't think that's quite right... ISTM that it's more about lack of multi-parameter type classes/traits[1] (+ specialization, I guess) and the fact that it went with the traditional non-symmetrical dot-based notation for dispatch... which means that the first "trait-parameter" would always be "privileged" (at least syntactically).
[1] At least I don't think it has those yet, right?
Rust has had multi-parameter traits for a very long time - possibly they were never not multi-parameter. All of the binary ops are multi-parameter. The first parameter is just an implicit Self parameter, which does preference it syntactically, but semantically it is not privileged in any way.
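For example, `std::ops::Mul` takes the right-hand operand as a type parameter (`Rhs`), so the two sides of `*` can be different types. A minimal sketch with a toy `Vec2` type (made up here for illustration):

```rust
use std::ops::Mul;

// A toy 2-vector; a real crate would have a full matrix type.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Vec2 {
    x: f64,
    y: f64,
}

// Mul is effectively a two-parameter trait: Self is the left operand
// and Rhs the right. Here `Vec2 * f64` scales the vector.
impl Mul<f64> for Vec2 {
    type Output = Vec2;
    fn mul(self, s: f64) -> Vec2 {
        Vec2 { x: self.x * s, y: self.y * s }
    }
}

fn main() {
    let v = Vec2 { x: 1.0, y: 2.0 };
    assert_eq!(v * 3.0, Vec2 { x: 3.0, y: 6.0 });
}
```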
The issue that operator-overloading math crates run into is that Rust's polymorphism is not only type safe but also makes stronger coherence guarantees than systems like Haskell's do. You can't implement all of the things you want because Rust guarantees that adding code to your system is a "monotonic increase in information": adding an impl is never a breaking change to your public API, and adding a dependency never breaks your build. Haskell does not make these guarantees.
There's no way to "solve this" entirely because there are a bunch of desirable properties for type class systems and they fundamentally conflict, but I think with specialization and mutual exclusion (the latter hasn't been accepted yet) Rust's system will be as expressive as anyone needs it to be while still maintaining coherence.
Of course in this context Wadler's law should be taken into account, and the grandparent poster could maybe revise their strength of opinion about syntactic sugar and recognize that this is about solving a much more complex problem than how to make matrix multiplication look nice. https://wiki.haskell.org/Wadler's_Law
I stand very much corrected. I must admit I wasn't sure and just skimmed a little documentation and didn't see any examples of MPTCs.
> The first parameter is just an implicit Self parameter, which does preference it syntactically, but semantically it is not privileged in any way.
Yes, but I think the syntax was actually what bothered the OP who was complaining about linear algebra. At least C++ has free functions for operators. (I assume, again without knowing, that Rust doesn't; otherwise the OP's "demands" should be easy to meet, given specialization.)
I mean any kind of "double * matrix" or "vector * matrix" or ... should be easy to support if operators are free functions and there's MPTC and specialization. EDIT: Actually, come to think of it, for this situation (algebra) you don't really need specialization, you just need overloading. (Since none of the types involved are sub-types of each other. It could be argued that a 1xN matrix ~ N-vector, but that's probably not worth supporting.)
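For what it's worth, the concrete "double * matrix" case is expressible in today's Rust without specialization, as long as the matrix type is local to the crate writing the impl - not through a free function, but by implementing the Mul trait for f64 with the matrix on the right. A sketch with a made-up toy `Matrix` type:

```rust
use std::ops::Mul;

// A toy stand-in for a real matrix type.
#[derive(Debug, Clone, PartialEq)]
struct Matrix {
    data: Vec<f64>,
}

// Because Matrix is local to this crate, the orphan rules allow
// implementing the foreign Mul trait with f64 as the left operand.
impl Mul<Matrix> for f64 {
    type Output = Matrix;
    fn mul(self, m: Matrix) -> Matrix {
        Matrix { data: m.data.iter().map(|v| self * v).collect() }
    }
}

fn main() {
    let m = Matrix { data: vec![1.0, 2.0] };
    assert_eq!((2.0 * m).data, vec![2.0, 4.0]);
}
```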
Generally, I just think it's a mistake to even support the magic "dot" notation and thus privileging one operand over any other, but I guess we're getting off topic.
Again, the issue we run into is with coherence. The Rust team decided that adding a dependency should never in itself break your build, and that adding an impl/instance to a library should never be a breaking change. Haskell doesn't make this guarantee. C++ doesn't even typecheck template parameters before instantiation.
Providing this guarantee means establishing what are called orphan rules: you can only `impl Trait for (Type1, Type2, ...)` if a certain portion of those traits and types are defined within the same library as the impl (the exact rules are nuanced, there's an RFC that defines them). The rules that were arrived at as providing the best guarantees unfortunately make it difficult to implement the operator overloading traits in the way a lot of numeric libraries want.
For example, you can't define a trait `Integer` and `impl<T: Integer> Add<T> for T`.
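To make that concrete, here's a sketch: the compiler rejects the blanket impl under the orphan rules (Add is a foreign trait and T is an uncovered type parameter), while a per-type impl for a local type is fine. `MyInt` is a made-up illustration type:

```rust
use std::ops::Add;

trait Integer {
    fn value(&self) -> i64;
}

// Rejected by the orphan/coherence rules: Add comes from std and T
// is an uncovered type parameter, so a downstream crate could write
// a conflicting impl.
//
// impl<T: Integer> Add<T> for T {
//     type Output = T;
//     fn add(self, other: T) -> T { ... }
// }

// What the rules do allow: implementing Add for each concrete local type.
#[derive(Debug, PartialEq)]
struct MyInt(i64);

impl Integer for MyInt {
    fn value(&self) -> i64 { self.0 }
}

impl Add for MyInt {
    type Output = MyInt;
    fn add(self, other: MyInt) -> MyInt { MyInt(self.0 + other.0) }
}

fn main() {
    assert_eq!(MyInt(2) + MyInt(3), MyInt(5));
}
```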
I'm actually not sure what the OP's specific complaint is, but its ultimate cause is something along these lines.
> This is essentially a complaint that Rust went with strongly-typed generics
It could also be a complaint about how operators are implemented, e.g. in Scala they're just methods (no need for a special trait). That's not to say Rust made the wrong choice, but Scala also went with strongly-typed generics, and it can allow implementing the asymmetric multiplication operators.
If operators are just specially named methods, then with strongly-typed generics you can't write generic functions that work on anything that (for example) implements the + operator, because traits (concepts) have to have a specific signature.
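(For contrast, Rust's operator traits can serve directly as bounds, so a generic function can require that `*` exists - a minimal sketch:)

```rust
use std::ops::Mul;

// Works for any type whose `*` returns the same type: i32, f64,
// or a user-defined matrix type with a suitable Mul impl.
fn square<T: Mul<Output = T> + Copy>(x: T) -> T {
    x * x
}

fn main() {
    assert_eq!(square(3), 9);
    assert_eq!(square(1.5), 2.25);
}
```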
Yes, you're right. The only way I can think of to handle it (in Scala, I suppose you can't do it in Rust) would be via implicit parameters. You would basically have a Times trait, and a TimesOp trait.
trait TimesOp[L, R, O] {
  def times(left: L, right: R): O
}

object TimesOps {
  // An implicit def rather than an implicit object, since objects
  // can't take type parameters.
  implicit def scalarMatrixTimesOp[S]: TimesOp[S, Matrix[S], Matrix[S]] =
    new TimesOp[S, Matrix[S], Matrix[S]] {
      override def times(left: S, right: Matrix[S]): Matrix[S] = {
        ...
      }
    }
}

trait Times[L] { self: L =>
  def *[R, O](right: R)(implicit timesOp: TimesOp[L, R, O]): O =
    timesOp.times(this, right)
}
I'm sure I got something wrong here, but I think the basic idea works. In any case, it goes to show that getting this behavior in a language with strongly-typed generics is non-trivial.
No. The complaint is that Rust is not good at expressing numeric algorithms. The poster above asked about a Rust library like Eigen (C++) or Numpy (Python)... It doesn't matter - I suspect you would dismiss any criticism regardless.