How does it help? I don't mean "strict typing" in general, but specifically how does making us decorate numeric literals help with "math done at runtime with some kind of input"?
> specifically how does making us decorate numeric literals help with "math done at runtime with some kind of input"
It forces you to be specific about rounding, minimum/maximum, and floating-point arithmetic.
Your compiler doesn't/can't know expected extremes of a value. If it defaults to, let's say, int64, then you're potentially wasting enormous amounts of memory (depending on the size of your data).
Similarly, the programmer needs to be specific about precision. If you know you're dealing with integers, then an integer type is great. If you know you need N digits of precision, you can select a numeric type that fits.
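To make the size/precision trade-off concrete, here's a minimal Rust sketch (my illustration, not from the original comment) showing that integer widths differ in storage cost and that `f32` carries fewer significant digits than `f64`:

```rust
fn main() {
    // Storage cost depends on the chosen width: an i64 is four times an i16.
    assert_eq!(std::mem::size_of::<i64>(), 8);
    assert_eq!(std::mem::size_of::<i16>(), 2);

    // f32 keeps roughly 7 decimal digits, f64 roughly 15-16, so the same
    // written constant rounds to different values in the two types.
    let x: f32 = 0.1234567890123456789;
    let y: f64 = 0.1234567890123456789;
    assert_ne!(x as f64, y); // the f32 already lost digits
}
```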
The type system becomes useful if/when you start to mix these numbers together. It can warn you that you're losing precision (or adding artificial precision, by casting an integer to a double, for example).
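For example, Rust refuses to mix numeric types implicitly; you have to spell out whether a conversion is lossless (`From`) or potentially lossy (`as`). A small sketch of that behavior:

```rust
fn main() {
    let n: i32 = 3;
    let x: f64 = 2.5;
    // let y = n * x; // compile error: cannot multiply `i32` by `f64`

    // Lossless widening must still be written explicitly:
    let y = f64::from(n) * x;
    assert_eq!(y, 7.5);

    // Lossy narrowing is explicit too, via `as`, and it truncates:
    let big: i64 = 300;
    let small = big as u8; // 300 wraps to 44 (300 mod 256)
    assert_eq!(small, 44);
}
```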
And that isn't even getting into questions of whether you want the number stored on the stack or the heap, which I believe Rust gives you more control over than most languages do.
> Your compiler doesn't/can't know expected extremes of a value.
Yes, it can! For one thing, it's a literal; it has one value; trivially, that's both extremes. But even leaving aside possibilities for anything new and smart, Rust has type inference so it knows what type a given literal has to be (or it doesn't; I have no objection to making the programmer be specific in that case).
I'm asking what problem you see arising from a policy like "`2u32` means 2 as a 32 bit unsigned integer, but `2` means 2 as whatever type is inferred, no defaulting, and we catch it at compile time when the literal can't be represented exactly in the type." (Ignoring simple path dependence - it would be a breaking change because it would make some expressions ambiguous where they relied on a lack of suffix meaning i32 or f64.)
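For what it's worth, Rust today is already most of the way to that policy; a short sketch of current behavior (only the final `i32` fallback is the defaulting I'm proposing to drop):

```rust
fn main() {
    let a = 2u32;       // suffixed literal: unambiguously u32
    let b: u32 = 2;     // unsuffixed literal: takes the inferred type
    assert_eq!(a + b, 4);

    // A literal that can't be represented in the inferred type is
    // already a compile-time error:
    // let c: u8 = 300; // error: literal out of range for `u8`

    // Only when nothing constrains the type does the i32 default kick in:
    let d = if true { 2 } else { 3 };
    assert_eq!(std::mem::size_of_val(&d), 4); // the i32 fallback
}
```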
> The type system becomes useful if/when you start to mix these numbers together.
As mentioned, I'm not objecting to the type system, or asking for any implicit conversions except a lossless(!) implicit conversion from the string the programmer typed to the datatype inferred by the type checker.
This is the situation I was explicitly excluding from my original comment. I was talking about runtime input, which is by far the more common use-case for numbers in code.
A smart compiler will just optimize operations on literals into their result at compile time anyway.
I think that makes your original comment a non sequitur?
Haskell does not allow Integer * Double (or even Integer * Int32), but it does allow `2 * 3.14`, and you seemed to be saying that what Haskell does is somehow dangerously weakly typed.