Haskell's number system has its good points, but it also has some hidden gotchas. For example, every integer literal is interpreted by starting from the most general type (arbitrary-precision `Integer`) and narrowing it with `fromInteger`, a method of the `Num` typeclass, which offers no way to handle failure (e.g. `321483209423 :: Word8`) other than a runtime error or silent overflow. The system can also be rather verbose, since all other conversions must be explicit, even those which cannot possibly fail.
IMHO Rust's `Into` and `TryInto` traits offer a better solution than Haskell's `fromInteger`, distinguishing conversions which cannot fail from ones which may. Rust also infers the correct width for unsuffixed integer literals from context, but it draws a sharp distinction between integer and floating-point literals: `2 * 3.14` (integer times float) is a type error, while `2 * 314u16` is accepted without issue. The downside of the Rust approach is that the type-inference rules for integer literals are hard-coded into the language and can't easily be extended to cover user-defined types, whereas Haskell accepts an integer literal wherever any type with a `Num` instance is expected. One alternative, combining the best of both worlds, would be to infer the narrowest type which can hold the literal's value and insert an implicit `.into()` for lossless conversion to any compatible type.
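To make the `Into`/`TryInto` distinction concrete, here is a small sketch (the variable names are illustrative, not from any particular codebase):

```rust
use std::convert::TryInto;

fn main() {
    // Unsuffixed literal: the compiler infers u16 from the annotation.
    let n: u16 = 314;

    // Widening u16 -> u32 cannot lose information, so `Into` applies.
    let wide: u32 = n.into();

    // Narrowing u32 -> u8 can fail, so only `TryInto` is available,
    // and the failure is an ordinary Result rather than a silent wrap.
    let narrow: Result<u8, _> = wide.try_into();
    assert!(narrow.is_err()); // 314 does not fit in a u8

    let small: u32 = 200;
    let ok: u8 = small.try_into().expect("200 fits in a u8");
    println!("{wide} {ok}");
}
```

Note that the failure case is surfaced at the call site as a `Result`, which is exactly the distinction `fromInteger` cannot express: a narrowing conversion and a widening one look identical in Haskell's `Num`-based desugaring.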