Hacker News
The curse of strong typing (fasterthanli.me)
82 points by jacobwg on June 1, 2022 | 117 comments



It's a delicate balance. Swing too far one way, and you make it easy to introduce mistakes that are very difficult to detect. Swing the other way, and you now have a system that makes it very hard to just get lost in the process of writing, instead forcing you to constantly remember basic low-level details when you should be focusing on the problem.

I generally like C++'s approach. Most things that should work together (like doubles, floats, ints, etc) can work together. If you want to have stronger types, you can make your own classes or use enum classes. If you want, the compiler can warn you about different types, or just let it slide.


> Most things that should work together (like doubles, floats, ints, etc) can work together.

Because integer promotion goes every which way, not just strict ext/sext, it's been a regular source of security issues, which is why most low-ish-level languages have swung so hard against it.


In the case of arithmetic, strong typing is kind of nice, because math is different dependent on what types you are using. 3/2 = 1 for integers, 1.5 for floats, etc... You really don't want to mix those up in mission critical sections of code.
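In Rust, for instance, the two divisions are distinct operations that never mix implicitly; a minimal sketch:

```rust
fn main() {
    // Integer "/" truncates toward zero.
    let int_div = 3 / 2;          // i32 / i32
    // Floating-point "/" keeps the fractional part.
    let float_div = 3.0 / 2.0;    // f64 / f64
    assert_eq!(int_div, 1);
    assert_eq!(float_div, 1.5);
}
```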


> In the case of arithmetic, strong typing is kind of nice, because math is different dependent on what types you are using. 3/2 = 1 for integers, 1.5 for floats, etc... You really don't want to mix those up in mission critical sections of code.

While I agree that you don't want to mix up actual division and floor division, I also rather strongly prefer “/” to be actual division, not floor division, irrespective of the operands. int/int -> rational is the most correct behavior. Scheme’s numeric tower is the poster child for getting this right (not just for division but for things involving numbers generally, including decimal literals specifying exact numbers and not approximate binary floats by default.)

If I want an operation that is not actual division, then the fact that something else is happening should be visually distinct in the code.


I quite like what Haskell has here. You type in number literals, but they get a type based on context. So 2*3.14 will work there.


Haskell's number system has its good points, but there are some hidden gotchas there as well. For example, all integer literals are interpreted by starting from the most general type (arbitrary-precision integers) and narrowing them with `fromInteger`, a member of the `Num` typeclass—which doesn't offer any means of handling failure (e.g. `321483209423 :: Word8`) other than a runtime error or overflow. It can also be rather verbose since all other conversions must be explicit, even ones which cannot possibly fail.

IMHO Rust's `Into` and `TryInto` traits offer a better solution than Haskell's `fromInteger`, distinguishing between conversions which cannot fail and ones which may. It also infers the correct width for unsuffixed integer literals from the context—but it draws a sharp distinction between integer and floating-point literals, which is why `2 * 3.14` (integer * float) is a type error while `2 * 314u16` would be accepted without issue. The downside of the Rust approach is that the type inference rules for integer literals are hard-coded into the language and can't easily be extended to cover user-defined types, whereas Haskell's approach can accept integer literals where any type with an instance of the `Num` typeclass is expected. One alternative, combining the best of both worlds, would be to infer the narrowest type which can hold the literal value and add an implicit `.into()` for non-lossy conversion to any compatible type.
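A minimal Rust sketch of the `Into`/`TryInto` distinction described above:

```rust
use std::convert::TryInto;

fn main() {
    // u8 -> u32 can never fail, so `Into` applies.
    let small: u8 = 200;
    let widened: u32 = small.into();
    assert_eq!(widened, 200);

    // u64 -> u8 can fail at runtime, so only `TryInto` is offered:
    // 321483209423 doesn't fit in a u8, and we get an Err rather
    // than a silent wrap-around or a runtime panic.
    let big: u64 = 321_483_209_423;
    let narrowed: Result<u8, _> = big.try_into();
    assert!(narrowed.is_err());
}
```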


That is nice and common in high-level languages, but most math is done at runtime with some kind of input. That's when strict typing is helpful.


> That's when strict typing is helpful.

How does it help? I don't mean "strict typing" in general, but specifically how does making us decorate numeric literals help with "math done at runtime with some kind of input"?


> specifically how does making us decorate numeric literals help with "math done at runtime with some kind of input"

It forces you to be specific about rounding, minimum/maximum, and floating-point arithmetic.

Your compiler doesn't/can't know expected extremes of a value. If it defaults to, let's say, int64, then you're potentially wasting enormous amounts of memory (depending on the size of your data).

Similarly, the programmer needs to be specific about precision. If you know you're dealing with integers, then an integer type is great. If you know you need N digits of precision, you can select a numeric type that fits.

The type system becomes useful if/when you start to mix these numbers together. It can warn you that you're losing precision (or adding artificial precision, by casting an integer to a double, for example).
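The precision-loss case is easy to hit once integers exceed 53 bits; a small Rust illustration:

```rust
fn main() {
    // f64 has a 53-bit significand, so 2^53 + 1 cannot be represented
    // exactly; the explicit cast silently rounds down to 2^53.
    let big: i64 = (1i64 << 53) + 1;
    let as_float = big as f64;
    // The round trip does not return the original value.
    assert_ne!(as_float as i64, big);
}
```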

And that isn't even getting into questions of whether you want the number stored on the stack or the heap, which I believe Rust gives you more control over than most languages do.


> Your compiler doesn't/can't know expected extremes of a value.

Yes, it can! For one thing, it's a literal; it has one value; trivially, that's both extremes. But even leaving aside possibilities for anything new and smart, Rust has type inference so it knows what type a given literal has to be (or it doesn't; I have no objection to making the programmer be specific in that case).

I'm asking what problem you see arising from a policy like "`2u32` means 2 as a 32 bit unsigned integer, but `2` means 2 as whatever type is inferred, no defaulting, and we catch it at compile time when the literal can't be represented exactly in the type." (Ignoring simple path dependence - it would be a breaking change because it would make some expressions ambiguous where they relied on a lack of suffix meaning i32 or f64.)
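For what it's worth, Rust's inference already works roughly this way for integer literals; a sketch:

```rust
fn main() {
    // The unsuffixed literal takes its type from context: here, u8.
    let a: u8 = 2;
    // A suffix pins the type explicitly.
    let b = 2u32;
    // `let c: u8 = 300;` is rejected at compile time, because 300
    // cannot be represented exactly in a u8.
    assert_eq!(u32::from(a) + b, 4);
}
```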

> The type system becomes useful if/when you start to mix these numbers together.

As mentioned, I'm not objecting to the type system, or asking for any implicit conversions except a lossless(!) implicit conversion from the string the programmer typed to the datatype inferred by the type checker.


Actually, it seems to already do this, although it requires the type be integer!

https://play.rust-lang.org/?version=stable&mode=debug&editio...


> it's a literal

This is the situation I was explicitly excluding from my original comment. I was talking about runtime input, which is by far the more common use-case for numbers in code.

A smart compiler will just optimize operations on literals into their result at compile time anyway.


I think that makes your original comment a non sequitur?

Haskell does not allow Integer * Double (or even Integer * Int32), but it does allow `2 * 3.14`, and you seemed to be saying that what Haskell does is somehow dangerously weakly typed.


Go does this too.


Yeah, seriously. I'm surprised by the other comments here. It seems like people want loosey-goosey typing that magically inserts lossy casts with convoluted semantics, as if they've been brainwashed by JavaScript and C.


This comment section is the most absolute proof I've ever seen that absolutely nobody reads the article before commenting.

Hint: it's not about integer types at all


Then please enlighten us and tell us what you feel the article is about rather than a snarky "rtfm" comment.

Because I started reading. I got pretty far, and it was _still_ talking about integer types. The article is very long, and I gave up, not having any idea why I should keep reading. So saying the article isn't about them "at all" is wrong. Maybe the int type thing is an intro to some other thing? But that's a long intro. So, tl;dr, at least not all the way. Why don't you tell us the point so we have a desire to actually read it?


Conveniently, there's a table of contents at the start that hints at the later topics:

  Different kinds of numbers
  Conversions and type inference
  Generics and enums
  Implementing traits
  Return position
  Dynamically-sized types
  Storing stuff in structs
  Lifetimes and ownership
  Slices and arrays
  Boxed trait objects
  Reading type signatures
  Closures
  Async stuff
  Async trait methods
  The Connect trait from hyper
  Higher-ranked trait bounds
  Afterword

So 17 sections, 16 technical (excluding the Afterword), and only the first two focus on the int/float stuff. So to answer your question:

> Maybe, the int type thing is an intro to some other thing?

No, it's not an intro, it's just the first two sections.


> I hope I was able to show, too, that I don't consider Rust the perfect, be-all-end-all programming language. There's still a bunch of situations where, without the requisite years of messing around, you'll be stuck. Because I'm so often the person of reference to help solve these, at work and otherwise, I just thought I'd put a little something together. [emphasis mine]

As you can also see from the table of contents at the top of the article page, it goes through a great breadth of topics. And you can use that table of contents to jump to the afterword.


Since you don't seem to have commented on the article without reading it, my comment was not addressed to you. I'm not the article police; if you find something too long to read, decide not to read it, and move on with your life, we have no quarrel.

Honestly, I don't even mind the folks who do comment without reading. Comments are free, do what you like. I just find it mildly hilarious that 100% of the comments are about (the first) 3% of the article.


Reading all of it is a big ask for a blog post that is this long


Then why bother commenting?

I don't understand the urge to comment on something one hasn't read.


I did read the beginning of the article and I wasn't too impressed. So I went through the comments, looking for bits of wisdom that might hint reading the whole article was worth it. Still, I wasn't impressed.


I see multiple people commenting on why they didn't read the whole thing, which is a perfectly valid topic of conversation.


It is categorically about integer/numeric types in part. Why do you feel that "at all" is an appropriate qualifier?


I'm sure this is good, like, really really good, but I'm struck by the feeling that it wants to baby talk me the same way Godel, Escher, Bach did.

I mean, it reads like a Homestuck pesterlog.


Yeah the pacing on this one is a bit slow, especially if you already know these things.


Perhaps it's written by GPT-3..


Wouldn't any person with even a passing familiarity with the Pascal family of languages immediately recognize the issue?

I think this post is longer than the entirety of the Oberon-2 language specification, which is about 24 pages long (not including details of certain modules etc.) : https://link.springer.com/content/pdf/bbm%3A978-3-642-97479-...


This seems like an article for a beginner programming forum not Hacker News? These concepts should all be understood before I'd consider hiring you as an SE 1.


Who do you think frequents this site? Only the most experienced programmers who all happen to know Rust? I see comments several times a week here from people who self-declare themselves to be high school or college students (or aged at least), I would expect few of them know these things or are experts at them. And then there are all the non-programmers (or non-professional programmers). And for the programmers, there are all the ones who haven't looked at Rust before or tried to but hit a wall for some reason (and quite likely related to the topics discussed in this article).


I think of this as a site for tech professionals. And like I said, I'd expect you to understand the concepts discussed here before I'd hire you as an SE 1. So like I said, it doesn't fit this site in my mind.


You can be a tech professional without knowing how to program, actually.


Jump to the end of the articles. I'd really hope SE 1 employees would know all of that stuff.


I enjoyed the explainer on higher-ranked trait bounds, personally, even as someone already familiar with the topic.


I know how numbers work in JS but not python or Rust. It was interesting to me.


Floating-point numbers, existing on finite computers, are just erroneous approximations of real numbers, so the same operation on floating-point values and on ints means different things and yields different results. Thus, allowing the programmer to magically, unconsciously convert between floating-point and ints is a recipe for disaster arising from hidden/misunderstood floating-point error.

The safe way is to require that the programmer's intent about which type to use be clear in the code, and to force the programmer to be mindful about whether or not he is using floating-point. This is just what some languages do, including Rust.

In the first example, the author is essentially complaining that he has to type '2.0' instead of '2'. I don't think that's a lot of extra typing.

Perhaps it is good and convenient to have automatic int-to-float conversion, but such conversion involves error if the int values are large. Conversion in the opposite direction is similarly lossy/erroneous, even if the floating-point type can represent every real number perfectly, since not every floating-point value is an int value. Even with a perfect floating-point type, supporting automatic conversion only in one direction is weird/confusing from a user-interface POV, so the normal thing to do is to just not do automatic conversion.

To drive home the point about floating-point error: some people complaining about the lack of automatic conversion may be surprised by how the following round-trip conversion between int64 and float64 accumulates error, as demonstrated in a Python interpreter:

  >>> int(float(0xffffffffffffff)) - 0xffffffffffffff
  1

If you drop one digit, there is no error. A language forcing you to do the conversion explicitly forces you to be mindful of the potential for floating-point error. A language that does automatic conversion promotes blissful thinking and eventually big surprise.


I’ve certainly spent way too much time tracking down and fixing these automatic conversions to bat an eye at any extra verbosity required to make the intent clear.


My best story of this is JS integer parsing. We were passing IDs to the frontend through C# as ints, parsed by JS on the frontend from the JSON HTTP response. We spent at least a day trying to figure out why our links weren't working, and it was because our numbers were too big for JS.

In a browser console it drops off after 16 digits and right-pads with zeros:

  > 11111111111111111111
  11111111111111110000

Our solution was to pass all IDs as strings because they aren't ever going to be manipulated like a number will
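The cutoff is 2^53, the largest contiguous integer range a 64-bit float (JSON's number type) can hold exactly. A Rust sketch of the same rounding (the ID value here is illustrative):

```rust
fn main() {
    // JSON numbers are IEEE 754 doubles; IDs above 2^53 get rounded
    // when parsed as numbers instead of strings.
    let id: u64 = 11_111_111_111_111_111_111;
    let as_double = id as f64;
    // The round trip lands on a nearby representable value, not the ID.
    assert_ne!(as_double as u64, id);
}
```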


While implicitly converting floats to ints is bad, performing a float operation (especially multiplication or division) with an int is meaningful and should not be banned.

I can only have discrete buckets of things. These things have a floating point weight. What is the total weight? Forcing explicit conversion of your int to a float to do the multiplication is semantically wrong.

I could live with banning int + float or int - float.

It's almost as bad as go requiring you to convert a number to a time to figure out the integer multiple of a time interval.
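In Rust the bucket example needs the conversion written out; a minimal sketch with made-up names:

```rust
fn main() {
    let buckets: u32 = 7;        // a discrete count
    let weight: f64 = 2.5;       // per-bucket weight
    // `buckets * weight` does not compile; the widening must be explicit.
    // u32 -> f64 is lossless, so `From` is available.
    let total = f64::from(buckets) * weight;
    assert_eq!(total, 17.5);
}
```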


Int-to-float conversion is lossy, as my Python example demonstrates, and thus should not be automatic.

Performing a float op with an int, which you said should be allowed, entails automatic int-to-float conversion before the op, does it not?


Lossy doesn't make sense as a criterion with floats. Float to float addition can be lossy.

Whether there is a conversion is architecture dependent. I believe x86 has an opcode for direct multiplication.


Float divided by int or int divided by float is also an issue. Does 4.25 / 2 equal 2.125 or 2? Does 16 / 3.25 equal 4 or 5 or some floating point number in between?
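Spelling out the two readings makes the ambiguity concrete (Rust, as a sketch):

```rust
fn main() {
    // Read as float division, the fractional part survives:
    assert_eq!(4.25 / 2.0, 2.125);
    // Truncate to an integer first and divide, and it doesn't:
    assert_eq!((4.25 as i64) / 2, 2);
}
```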


You do what FIMUL does. The other options are silly.


I don't understand. You want to convert everything to floats all the time? Then just start and stay in float-land and never use ints.


fair warning: "Say, bear, did we just accidentally write a book's worth of material about the Rust type system?"


I admit to being Rust-curious, but the syntactic gymnastics were out there between C++ and Perl.

Think I'll keep giving Python the squeeze.


Strong typing is not a mistake or a curse. Use a more advanced language that supports stuff like generics, or model your problem with the appropriate type. If you need real numbers you need real numbers, not some weird mix. I don't think it is good to just let it arbitrarily switch between int and float, that can hide bugs or mistakes.

Computers are much stricter about what is what. In mathematics people are on the surface less strict, a lot of maths is done up to some sort of isomorphisms/forgetful functor, but we are smart enough to fix up potential type errors. Doing maths, as a mathematician is using a much more expressive language that takes care of some details for you.

For 2 \pi: \pi is a program, an infinite list, a monad, and we implement it as a lazy calculation; sometimes we can extract numbers up to a certain accuracy. 2 \pi is just 2 \pi, or, if you know more, you can write it as another sequence, or pull the 2 inside.

As for integer * float, well what is this supposed to actually mean to a computer?

Integers are not subsets of the reals. Integers are not sets, the reals are not sets. There is a representation of the integers in the reals represented as a set however. There is however a way to act Z on R. It even seems like a very natural operation. However we are doing maths, using our powerful reasoning abilities. Now it is your job to explain all these differences to a computer. Pretty tricky, so how about we just make things more restrictive. If you need real(integer) * real how about you just use that.


Ooh, a new fasterthanli.me article

> 69 minute read

Heh.


10/10


It's our job to explain to the computer what it needs to do. Being explicit is always helpful since our minds aren't computers. I want to be 100% explicit and when I'm not I want the compiler to ask. Not assume. Writing a cast or a clarification of some sort is a small price to pay for a potentially undetectable bug.


I've run into another one of these articles on HN before and both times I was very confused. After about 10-15 minutes of reading I'm still wondering whether I'm reading a tutorial or a discussion about strong typing (or whatever programming concept is being discussed)? Is this article meant to be a tutorial or is it some sort of commentary I'm missing? I'm fine with either one, it just helps me get in the right mindset if that makes sense :)


I experienced the same confusion.

Looking for an opinion about Rust's type system, I am presented with a tutorial on the difference between basic number types. I even think this is a great tutorial, but I wasn't looking for a tutorial. I suppose what I've learned from this is that the type of programmer represented by the protagonist doesn't know about this, and may think it's overkill for their use of numbers. I got tired of reading the tutorial before I became smarter about how to close the gap between programmers who care about types and programmers who don't.


Yeah, it could do with a preface like "this is a tour of various Rust type features". The title makes it sound like it gets to some kind of point, which it doesn't. But I had to spend 10 minutes skimming it to figure that out.


Amos does deep dives on subjects. This is one such deep dive on the Rust type system, its limitations, and why they exist.


This takes an agonizing amount of vertical space to discuss the fundamental concept of different numeric types and how literals are mapped to them (assuming they do get to the part about literals - I didn't get that far). It's not because the author is long-winded, but because they are pushing the "cute bantering dialogue" style of writing to its logical extreme.


Programmers, much like compilers, read things until they reach a point they disagree with and then stop there to write a comment.


Please don't imply that I stopped reading because I merely disagreed with something the author said or did. As I implied in my post, I stopped because I found the reading itself to be difficult and unenjoyable. Framing that as "a point [I] disagree with" is uncharitable to me and makes it sound like I'm making an unreasonable or pompous criticism of the author's opinion on the topic about which he is writing.


Yet unlike compilers, they don't handle obfuscation well.


This criticism comes up every time one of Amos’ posts shows up here. If you don’t like the style that’s fine but it’s not like you have some novel criticism here.


I haven't noticed, but I also don't assume anything I say is novel. However, being novel isn't necessary for adding information. If a hundred people have the same criticism vs if three people have the same criticism, those two scenarios say different things, and therefore each represent different pieces of information, so the extra 97 people in the former case are adding information on the whole despite not individually saying something novel.


Fortunately it's novel to plenty of other people here. Crisis averted!


I normally don't mind Amos' style but this one was actually a little too much. So I'd consider that novel criticism. The quantity will add into the feedback and provide context.


It is in tldr territory. If I am going to read a book I need to have a careful think and read some reviews first to see if that is worth my time.


In general people post their criticisms here because it’s assumed, perhaps arrogantly but often justifiably, that authors read HN and will see them.


This guy is weird.


I've never wanted to use Rust less than after encountering this article.


I mean, this is a super detailed introduction in a weird writing style. It does not represent the experience of writing Rust any more than a very, very long article about channel behaviour would represent how it feels to write Go.


I got a little way in and assumed by the dumb gopher and code examples that the author was pushing Go.



Wow, Go is worse than I thought! Sum types aren’t particularly ergonomic in C++ but at least it has them. And it has operator overloading, and generics, and `const`, and deterministic lifetime. To me complex code can be made way more understandable by using the type system to say “this takes a const ref to a IPv4 or IPv6 IP address and returns a future of either a duration in nanoseconds or an error code”. Just like in mechanical engineering, where the units of an equation tell you a lot, the signature of a function in a language with a rich type system tells you that that’s probably a `ping` function. No wonder I have such a visceral averse reaction to Go.


Yep, same here. I was seriously considering diving into it, but any language that can't figure out how to multiply a float and an integer without further guidance from me is not worth the time. Life is too short for that sort of nonsense.


I personally love that Rust doesn't automatically convert between different types of numbers, with different precision and rounding/overflow behaviors. If I'm multiplying numbers of different types, I want the compiler to force me to convert them to a common type. I've been bitten too many times by implicit and lossy numeric conversions in languages like C.


But that's just because C's automatic conversions are broken. C's automatic conversions are not the only possible design. Common Lisp (for example) has a vastly better design. But regardless, I don't see how a reasonable result of multiplying (say) 3.14 by 2 can be anything other than 6.28.


So when do we find out why Rust can't multiply pi by 2?

I mean, what is the point of making float x int an unsupported operation?

I understand they're different types, but that doesn't seem like a good enough reason. Isn't it obvious that the output type should be float? So why not just cast the int to a float and multiply them and output a float, like other languages do?
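For the record, the explicit forms Rust accepts today look like this (a sketch):

```rust
fn main() {
    // `let x = 2 * 3.14;` is a type error: {integer} * {float}.
    // Making both operands f64 works, either via the literal...
    let x = 2.0 * 3.14;
    // ...or via an explicit, lossless conversion of the integer.
    let n: i32 = 2;
    let y = f64::from(n) * 3.14;
    assert_eq!(x, y);
}
```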


> "Isn't it obvious that the output type should be float?"

To you that might be obvious. The compiler can't infer what your intentions were from your code.

> "So why not just cast the int to a float and multiply them and output a float, like other languages do?"

What if the result is being stored in an integer type, how should this be handled? In order to compile the code in the ways your post specifies the compiler has to make many assumptions. Choosing to minimise the amount of assumptions made by the compiler about types is part of 'strong typing'. Being a systems programming language, it needs to be concerned with the underlying representation of the data. What if the result variable only has 8-bits, and can't reasonably represent a floating point type at all?

I do lots of coding in Ada, which has a strong type system. It allows you to specify arbitrary ranged numeric types. This allows you to better semantically describe your problem domain. You could have a specific numeric type representing the number of slices in a pizza (an integer), and one specifying the number of kilometres between two points on a map (a float). Multiplying one by the other might not make sense in your problem domain. The compiler would prevent you making what it assumes would be an erroneous calculation.


In mathematics integers are a subset of the reals (which floats try to imitate), and when multiplying an int and a real you would expect the result to be a real number.

So in a static typing context float * int has to be float. Otherwise it is not *, it is some other operation (e.g. * composed with floor).

In dynamic typing you could decide the type at runtime. A JS runtime could in theory do that under the hood although semantically it should behave as a float.


But in software, integers are _not_ a subset of floats. There are numbers that can be represented using an n-bit integer that cannot be represented using an n-bit float, and vice versa. Converting between the two automatically can lose information.


That's a bad argument because almost every floating point operation loses information of some sort or another. Should we ban all floating point operations on those grounds?

Moreover in many cases int * float is an intrinsic, so on the hardware level there is no conversion operation interposed into the operator.


> Moreover in many cases int * float is an intrinsic, so on the hardware level there is no conversion operation interposed into the operator.

Out of curiosity, what instructions support mixed integer/floating-point operands? I can think of integer-/floating-point-only instructions off the top of my head, and I know I've seen conversion instructions, but not mixed ones. I'm hardly an assembly expert, though, so I wouldn't be surprised if I just wasn't aware of such a thing.


FIMUL


Huh, first time I've seen that instruction. Thanks!

Does seem that it converts its arguments first, though?

> The FIMUL instructions convert an integer source operand to double extended-precision floating-point format before performing the multiplication.

(Via https://www.felixcloutier.com/x86/fmul:fmulp:fimul)

In addition, I thought compilers tended to avoid the x87 FPU registers nowadays?


But in the case of multiplying by a float, you will have to lose information by either appending a .0 to your number to floatify it, or lose information by letting the compiler do it.

At the end of the day you still need the float multiplied.


Or I can realize "shit, I've got a float and I'm dealing with currency, let me make that an integer of cents instead". Or "wait, all of these should be a ratio type". The ways of resolving this situation are endless. Float is far from the only option.


Adding the .0 expresses your intent that it is a float. Expressing intent in the code is a form of documentation and makes the code easier to maintain; someone coming later can tell you meant it to be one way, vs it accidentally being that way. It's no different than adding parens around elements in a math formula when they're not needed; if the formula is complex enough, adding some parens to group things can make it clearer to the person reading it, even if it doesn't actually change the behavior.


IIRC, ints up to 24 bits are a subset of float32, and ints up to 53 bits are a subset of float64, so on most architectures, int (32) is a subset of double (64).


This is the reason that i32/u32/i16/u16/i8/u8 can be infallibly converted to f64 using the From/Into traits[1].

[1]: https://play.rust-lang.org/?version=stable&mode=debug&editio...
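A small sketch of the infallible case (every i32 value fits exactly in an f64):

```rust
fn main() {
    // i32 -> f64 is lossless: f64's 53-bit significand covers all 32-bit
    // ints, so `From`/`Into` are implemented and no `try_` variant is needed.
    let n: i32 = i32::MAX;
    let f: f64 = n.into();
    assert_eq!(f as i32, i32::MAX);
}
```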


Wait hold on, why are you jumping straight to "what the compiler can infer"? Compilers (and type systems) are written by and for humans, so first tell me the use case for multiplying a float by an int that does not lead to a float and justify why we should optimize for that case.


Sure. Multiplying a float by an int that leads to AN ERROR because you mixed up your types and you are now losing precision somewhere or other.

Avoiding that sort of an error at compile-time is right there at the heart of the core premise of Rust, so Rust should optimize for it.


Except that in the real world people want to multiply floats by ints all the time, and they don't want to go through a lot of type-wankery to do it.


In order to make _some_ things safer, every type system must make _other_ things impossible. There is, as far as I'm aware, no real way around that.


It’s like one cast, yo


It's "one cast" times the total number of times you need to multiply ints by floats for the rest of your career.


You can also make errors when you multiply two floats together. Why then should Rust allow any operations at all?


You simultaneously make a great and stupid point. The stupid part is obvious: operations are allowed because writing programs is useful. Now for the great, nuanced and complicated point: How do we detect errors when manipulating floats while allowing us to write the useful programs?

This boils down to two things:

1. What do we think is correct at any given time for any given program? The compiler can't know, so we have to tell it, which brings us to 2.

2. How do we write down in our programming language what is supposed to be correct?

The problem with floats is that sometimes you care about precision, sometimes you don't; sometimes you care about overflows, sometimes you don't; sometimes you care about inverse operations, sometimes you don't. Commercial programming languages which only expose a "float" type are usually unable to deal with the greater complexity of _ensuring_ that some property you care about isn't broken. That is why compilers let you operate on floats and shoot yourself in the foot when you divide and then multiply back.

On the other hand, one could imagine a future programming language (and some academic experiments already exist) where you can tell the computer "in this part of the code, it's important that I never overflow" or "in this function, I expect multiplication and division to be inverses of each other". In which case the compiler will display diagnostic information if you do something that break those properties.

It's not clear yet what the most convenient user interface is for writing down and checking those properties. Many believe it will come from even more advanced type systems than Rust's. Many others believe that we can add static analysis atop existing programming languages to obtain the same result.

TLDR: Rust allows it for now because 50-100 years in the future we'll have the tools to tell when it's OK and when it's not OK to multiply floats together. Right now we're still smacking rocks together to make fire.


Most of the work I do in systems programming languages is bare-metal code where semantics for representing and manipulating data at the bit-level are required. I need to have very explicit control over the shape of the data in memory. For starters, many of the systems I work with don't even have the ability to do floating-point calculations at all.

> "Compilers (and type systems) are written by and for humans..."

The abstraction that is the type system might be designed for humans, but the underlying physical layers are totally non-abstract, tangible things with their own constraints. If I'm sending data to a DAC over the i2c protocol, the protocol has very real constraints at the electrical level. In order to interact with such protocols in a meaningful way we need low-level semantics to control the shape of data at the level of individual bits.

Consider the following:

  uint32_t a = 1234567;
  float b = 0.34;
  uint8_t c = a * b;
  fwrite(&c, 1, 1, stream);
What value gets written to the stream here?

> "first tell me the use case for multiplying a float by an int that does not lead to a float"

Should the Rust compiler forbid all multiplication between the two types that doesn't get stored in a float?


One can manipulate arbitrary precision integers at the bit level. Nonnegative integers correspond to infinite sequences of bits all but a finite number of which are 0; negative numbers to infinite sequences in which all but a finite number are 1. There's no need in most programs to insist that variables have some finite (and fairly small) range beyond which integer operations stop acting like their mathematical definition.
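
The "infinite sequence of bits" view lines up with two's complement at any fixed width; a quick Rust illustration (just a sketch):

```rust
fn main() {
    // -1 is "all ones" at every width: sign extension just prepends
    // more 1 bits, matching the infinite-sequence view of negatives.
    assert_eq!((-1i8) as u8, 0xFF);
    assert_eq!((-1i8) as i32 as u32, 0xFFFF_FFFF);
    // Nonnegative numbers extend with 0 bits instead.
    assert_eq!(5i8 as i32 as u32, 5);
}
```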


There are very real use cases where integer types with specific fixed widths are necessary, on account of very real constraints. A serial interface for instance, which is a very common peripheral outside of PCs, often sends information out over the wire 8 bits at a time. 'Systems programming languages' need to specify how arithmetic using these fixed-width integers should work.
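
For what it's worth, Rust's answer here is to make the overflow behavior of fixed-width arithmetic an explicit choice at the call site (a sketch, not from the comment above):

```rust
fn main() {
    let b: u8 = 200;
    assert_eq!(b.wrapping_add(100), 44);    // 300 mod 256, like wire protocols expect
    assert_eq!(b.checked_add(100), None);   // overflow reported instead of silent
    assert_eq!(b.saturating_add(100), 255); // clamp to u8::MAX
}
```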


Sure, if you want to use arbitrary-precision integers everywhere. At a significant loss in performance.


I would say the issue is which float you want. If the int is larger than 2^24 it can't be represented exactly as a float32 so you may have intended to widen to float64 which covers int32 and float32. Or maybe not; maybe you wanted performance instead.

That being said, multiplying by 2 not working is pretty funny, because that's always exact until you overflow the float. It's pretty annoying that the compiler didn't just figure out there was no precision loss anywhere here and do it instead of complaining.
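
The "multiplying by 2 is exact" claim is easy to check in Rust (a sketch):

```rust
fn main() {
    // Multiplying by 2.0 only bumps the exponent, so it is exact for
    // every finite f32... until the result overflows to infinity.
    let x = 0.1_f32;
    assert_eq!(x * 2.0, 0.2_f32);
    assert!((f32::MAX * 2.0).is_infinite());
}
```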


> I would say the issue is which float you want.

You're getting warmer. The reality is that compilers' default floating point is wrong for most uses. You should be able to represent things like pi and 1/3 exactly by default. That that's not the case in JS is criminal.


So symbolic math? That sounds like an interesting choice of defaults. Is it something that would produce tangibly better results in a significant number of cases?


You could go there. But just decimal floating point would be better in most cases.


You can't represent π or 1/3 exactly in decimal floating point either…


Generally speaking your argument holds true for addition and subtraction, but for multiplication and division there are almost no cases where integer units * (or /) float units are unreasonable: for example "man-hours" (discrete * float), "pizza slices/minute", or averaging people's heights (centimeters / a raw count).


It's not that I don't think there's a valid case for this kind of arithmetic, it's just that the compiler doesn't really know what you're trying to accomplish, and in what context the result will be used. All of this is a consequence of a non-perfect world where the compiler can't contextually evaluate what the programmer is doing.

All type systems are just abstractions designed to make it easier to express your intent while minimising mistakes. Storing the result of dividing an integer by a float, or vice-versa, in an integer is totally possible in C, but very likely not the behaviour the programmer intended. Should the compiler just forbid this? I guess what Rust has done is just one possible tradeoff.
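
As a concrete illustration of that tradeoff (my sketch, not from the comment): where C's `int c = 7 / 2.0;` silently truncates 3.5 to 3, Rust makes every step explicit:

```rust
fn main() {
    let total: i32 = 7;
    // Each conversion is spelled out; the final `as i32` truncates toward zero.
    let c = (total as f64 / 2.0) as i32;
    assert_eq!(c, 3);
}
```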

For what it's worth, Ada has actually introduced semantics for dealing with arithmetic using units of measurement[1].

1: https://www.adacore.com/gems/gem-136-how-tall-is-a-kilogram


> To you that might be obvious. The compiler can't infer what your intentions were from your code.

How does the compiler know what my intentions were when I multiplied two floats? Maybe I intended to receive a partridge in a pear tree.

Intentions are not relevant here. What's relevant is what makes sense for a multiplication operator to do.


Yes, and it makes sense for a multiplication operator to support float * int or float / int. Suppose you take the average height of 10 people. Converting the 10 to a float before dividing seems wrong, because the compiler is implicitly suggesting "I am not letting you take this average until you consider the possibility of a non-integral human count."
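
In today's Rust that conversion is indeed mandatory; an averaging sketch (hypothetical example, my own):

```rust
fn main() {
    let heights_cm = [170.0_f64, 165.0, 180.0];
    // `len()` returns a usize; Rust refuses to divide an f64 by a usize,
    // so the conversion has to be written out by hand.
    let avg = heights_cm.iter().sum::<f64>() / heights_cm.len() as f64;
    assert!((avg - 171.666_666_666).abs() < 1e-6);
}
```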


For what it's worth, Rust isn't the only language that does this. OCaml also doesn't allow mixing ints and floats, and actually has entirely different operators for int and float operations [0]. I think Haskell might do something similar [1]?

I don't know for sure why Rust chose this route, but I would guess it's a mix of type inference/type system considerations and a desire to avoid potential implicit loss of data.

[0]: https://stackoverflow.com/questions/64244351/why-does-ocaml-...

[1]: https://stackoverflow.com/questions/19019093/haskell-multipl...


Rust just takes "explicit is better than implicit" further than most. No hidden type casts.

Floating point operations aren't quite the same as integer operations. Float operations aren't associative, for one. So it's nice to be sure you really intended to use a float operation on an integer by requiring an explicit cast, so you can get the order correct.
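
An example of why the placement of the cast matters (a sketch; 2^53 + 1 is the first integer f64 cannot represent):

```rust
fn main() {
    let a: i64 = 9_007_199_254_740_993; // 2^53 + 1

    let subtract_then_cast = (a - 1) as f64; // exact: 2^53 is representable
    let cast_then_subtract = a as f64 - 1.0; // the cast rounds a down to 2^53 first
    assert_ne!(subtract_then_cast, cast_then_subtract);
}
```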


In Haskell you use the same operator for multiplying two Ints as you use for multiplying two Floats (you can't multiply an Int by a Float without converting one to the other).


There's also magic for numeric literals.


And considering the influence of ML in Rust's roots, it's not surprising.


It's not obvious at all to me that the result should be a float. I think the times I'm going to accidentally multiply a float by an int outweigh the times I want to lazily multiply a float by an int. And some percentage of that time I would want an int anyway!


> I'm not sure where they got that idea from. Maybe they've been reading propaganda. Maybe they fell prey to some confident asshole, and convinced themselves that Rust was the answer to their problems.

That's a very strong statement and I don't see any back up on this claim.

> At any rate, I now find myself in a beautiful house, with a beautiful wife, and a lot of compile errors.

Sounds like a layer 8 problem to me.

I'm already questioning everything in this blog post that comes after.


I didn't downvote you, but other people are because it seems you're misunderstanding what the author wrote.

For example, the "confident asshole" link goes to the author's own site. He's referring to himself as the "confident asshole". He's being tongue-in-cheek, funny, joking.

The "propaganda" is also a joke because, well, it's true that memory safety issues lead to security problems.


thanks a lot, that makes much more sense now! I'm just so used to "rage culture" and uninformed complaining that I tend to overreact.



