
You can even take this a step further: using `float` instead of `kilogram` is leaking an implementation detail. It's (probably) too low-level of information to actually be useful.

Luckily, more and more programming languages are making creating these simple types easier. Haskell:

    newtype Kilogram = Integer
Rust:

    struct Kilogram(i64);
In fact, this blog post, while a bit outdated, shows an application of this idea to solving string-encoding issues for HTML templating: http://bluishcoder.co.nz/2013/08/15/phantom_types_in_rust.ht...



nitpick - the Haskell version should be

    newtype Kilogram = Kilogram Integer
The issue with doing this, of course, is that it's mostly useless for computation. You lose the ability to use mathematical operators because the newtype is no longer an instance of Num, and even if you create the Num instance, or use -XGeneralizedNewtypeDeriving, you can't multiply a Kilogram by a Meter/Second, for example, since both arguments to (*) must be of the same type. One would need a generic "Measure" type instead, where the unit is metadata attached to the value, and the Num instance implements the checking on units.
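For comparison, Rust's Mul trait is parameterized over the right-hand operand's type, so cross-unit multiplication can be expressed with distinct output types. A minimal sketch, with illustrative type names and f64 assumed as the representation:

```rust
use std::ops::Mul;

// Hypothetical newtype wrappers around a bare f64.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Kilogram(f64);
#[derive(Debug, Clone, Copy, PartialEq)]
struct MeterPerSecond(f64);
#[derive(Debug, Clone, Copy, PartialEq)]
struct KilogramMeterPerSecond(f64); // momentum, kg*m/s

// Unlike Haskell's (*) :: Num a => a -> a -> a, Rust's Mul lets the
// right-hand side and the result be different types entirely.
impl Mul<MeterPerSecond> for Kilogram {
    type Output = KilogramMeterPerSecond;
    fn mul(self, rhs: MeterPerSecond) -> KilogramMeterPerSecond {
        KilogramMeterPerSecond(self.0 * rhs.0)
    }
}

fn main() {
    let p = Kilogram(2.0) * MeterPerSecond(3.0);
    println!("{:?}", p); // KilogramMeterPerSecond(6.0)
}
```

This still requires writing one impl per unit combination; a fully general dimensional library derives the output unit at the type level instead.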


You're quite right about the limitations imposed when you make your own dimensional types like that. It turns out the problem isn't trivial. Which is why I prefer to use a library like Dimensional. With Dimensional, you get plenty of units out-of-the-box, you can define your own if needed, and you can perform arithmetic in a fairly natural way.


And unfortunately even with a "generic Measure type" you can't actually enforce anything at compile time. The "Num" typeclass was not very well designed (given the current facilities of the language, some of which didn't exist at the time Num was designed, so no aspersions cast at the designers!).


Whoops, thanks for the correction!


I wouldn't ever say that using floats compared to, say, arbitrary-precision decimals in any language is leaking an implementation detail.

As a user of a library/API I need to know if I can rely on it being precise or not.


Float is an implementation detail. Precision is an API contract.


Right. An interface exposed with integers may wind up using floats internally, or vice versa; numeric algorithms may wind up doing arbitrary things to precision and accuracy; &c. "You are passing around a float" puts some bounds on what can be delivered but that isn't really sufficient information when it matters and is useless fluff when it doesn't.


Floating point has dramatic differences in semantics compared to exact numbers, not an invisible implementation detail at all.
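Those semantic differences are easy to demonstrate. A tiny illustration in plain Rust, using f64:

```rust
fn main() {
    // 0.3 is not exactly representable in binary floating point,
    // so the sum of 0.1 and 0.2 lands on a neighbouring value.
    println!("{}", 0.1 + 0.2 == 0.3); // false

    // At large magnitudes the spacing between representable doubles
    // exceeds 1, so small addends vanish entirely.
    println!("{}", 1e16 + 1.0 == 1e16); // true
}
```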


Unless you're counting money or doing long-running simulations, it's often more than good enough, especially Double.


Or doing discrete math, though you wouldn't be with a continuous quantity like Kilograms.


or good ol' typedef, no?


With a straight typedef, you can add 3 feet to 5 meters and get 8. Probably not desirable. So you've got to typedef single-field structs instead.
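Rust's type aliases behave the same way as C typedefs here: they're transparent, so units still mix silently, while a single-field struct rejects the mix. A sketch with illustrative names:

```rust
// A type alias is transparent, just like a C typedef:
// Feet and Meters are both just f64 to the compiler.
type Feet = f64;
type Meters = f64;

fn main() {
    let height: Feet = 3.0;
    let depth: Meters = 5.0;
    println!("{}", height + depth); // compiles fine: 8, units silently mixed

    // A single-field (tuple) struct rejects the mix at compile time:
    struct FeetT(f64);
    struct MetersT(f64);
    let _h = FeetT(3.0);
    let _d = MetersT(5.0);
    // _h + _d  // error[E0369]: cannot add `MetersT` to `FeetT`
}
```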


"So you've got to typedef single-field structs instead."

Which, thankfully, don't add any runtime cost (as should be expected).

As an aside, you don't actually need to typedef them at that point, but it saves you having to write "struct ..." everywhere so it's typically worthwhile.
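The zero-runtime-cost claim is easy to check directly; in Rust the equivalent single-field struct can be verified with size_of (illustrative type name):

```rust
use std::mem::size_of;

// Single-field wrapper: a new type for the checker, nothing extra at runtime.
struct Kilogram(f64);

fn main() {
    // The wrapper occupies exactly the space of the bare float it holds.
    assert_eq!(size_of::<Kilogram>(), size_of::<f64>());
    println!("no overhead: {} bytes", size_of::<Kilogram>());
}
```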


Sure, but typedefs aren't as strong a guarantee. They only create an alias for the existing type, not a new type the compiler will distinguish.



