
For context, Haskell's story for orphan instances is currently as follows:

- orphan instances are allowed and emit a warning

- duplicate instances are not allowed

- overlapping instances, where one is more specific than the other, are allowed (opted into per instance with {-# OVERLAPPING #-} / {-# OVERLAPPABLE #-})

- incoherent instance use sites are not allowed (where 2+ instances match and neither is more specific than the other)

- but you can allow this by adding {-# INCOHERENT #-} to an instance. You shouldn't do this though unless you really know why you need it (and perhaps even then there is a better way)

- a typical library enables all warnings with -Wall (and often promotes them to errors with -Werror), so you'll notice when you're adding orphans

- exceptions can be made for specific files by adding {-# OPTIONS_GHC -Wno-orphans #-} to the file (see the sketch after this list)

- defining orphan instances in executables is not a problem as the only user of them will be the program itself

- defining orphan instances is also what you do if you are writing a package whose only purpose is to provide instances: both the data types and the type classes are implemented elsewhere and you have no other choice. Such libraries should not be used in other libraries, only in executables and tests

- a different instance can also be defined by wrapping the original type in a newtype (the instance is then defined for that new type, so it is not an orphan)

- newtypes have no runtime overhead, and with DerivingVia the syntactic overhead is quite low as well. This is "the way" to override already-defined instances (see the sketch below)
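
To make a few of the points above concrete, here is a minimal sketch (module, class, and type names are made up for illustration):

    -- Pretty.hs: the class lives here
    module Pretty where

    class Pretty a where
      pretty :: a -> String

    -- Config.hs: the type lives here
    module Config where

    newtype Config = Config [String]

    -- Orphans.hs: neither the class nor the type is defined here, so the
    -- instance below is an orphan; the pragma silences the warning for
    -- this file only
    {-# OPTIONS_GHC -Wno-orphans #-}
    module Orphans where

    import Config (Config (..))
    import Pretty (Pretty (..))

    instance Pretty Config where
      pretty (Config xs) = unwords xs

    -- MyConfig.hs: the orphan-free alternative. Wrap the type in a local
    -- newtype, give the instance to the wrapper, and let DerivingVia reuse
    -- that implementation wherever the representation matches
    {-# LANGUAGE DerivingVia #-}
    module MyConfig where

    import Config (Config (..))
    import Pretty (Pretty (..))

    newtype PrettyConfig = PrettyConfig Config

    instance Pretty PrettyConfig where
      pretty (PrettyConfig (Config xs)) = unwords xs

    newtype AppConfig = AppConfig Config
      deriving Pretty via PrettyConfig
AppConfig picks up PrettyConfig's implementation through a coercion, so there is no runtime cost, and the boilerplate is a single deriving clause.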

IMO, all the above makes sense when you prefer correctness over flexibility. From the post, this appears to be Rust's choice as well.






The newtype pattern is a special case of type composition which is incredibly useful, has low complexity, and, if done right, almost no boilerplate overhead. It's much dumber and easier to reason about than type-acrobatics with generics, imo.

Do you mean `Generically`[1]? I've only ever vaguely seen its use - perhaps it can do something a `newtype` can't (or can, but with more boilerplate)? But I don't have any first-hand experience to comment on it.

[1] https://hackage.haskell.org/package/base/docs/GHC-Generics.h...
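
From skimming the docs, typical use looks something like this (needs a recent base; the record is made up). The instances are built field-by-field from the type's Generic representation, so it seems to be less about overriding an existing instance and more about getting one generated for you:

    {-# LANGUAGE DeriveGeneric, DerivingVia #-}

    import GHC.Generics (Generic, Generically (..))

    -- a made-up record whose fields all have Semigroup/Monoid instances
    data Options = Options
      { includeDirs :: [FilePath]
      , defines     :: [String]
      } deriving (Generic)
        deriving (Semigroup, Monoid) via Generically Options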


I meant in a general context (I know Go and Rust allow it in a decent way); I'm not familiar enough with Haskell...

It's the same in Haskell as in Rust. Using the example from the article:

    struct A2(A);
    impl BTrait for A2 {
      fn random_number(&self) -> usize {
        4 // chosen by fair dice roll, still!
      }
    }
In Haskell that translates to:

    newtype A2 = A2 A
    instance BTrait A2 where
      random_number _ = 4
Unfortunately, I also share GP's confusion. Can you share an example of what you mean by "type acrobatics with generics"?

If it's the same, what is the blog post's problem then?

It kinda sucks in both! If you want to interact with your newtype, you need to either unwrap it or reimplement each typeclass/trait. Haskell does make this a bit nicer with deriving strategies (sketch below the quotes), and Rust with macros, but it's a lot of boilerplate. The article had this to say about the example:

> I’m sure it won’t take much to convince you; this is unsatisfying. It’s straightforward in our contrived example. In real world code, it is not always so straightforward to wrap a type. Even if it is, are we supposed to wrap every type for every trait implementation we might need? People love traits. That would be a stampede of new types.

> Wrapper types aren’t free either. a_crate has no idea A2 exists. We’ll have to unwrap our A2 back into an A anytime we want to pass it to code in a_crate. Now we have to maintain all this boilerplate just to add our innocent implementation.
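
On the Haskell side, the deriving-strategies escape hatch looks roughly like this (illustrative names): `deriving newtype` reuses the wrapped type's instances, so only the instance you actually want to change is written by hand.

    {-# LANGUAGE DerivingStrategies, GeneralizedNewtypeDeriving #-}

    -- reuse Double's instances wholesale...
    newtype Meters = Meters Double
      deriving newtype (Eq, Ord, Num, Fractional)

    -- ...and hand-write only the one that should behave differently
    instance Show Meters where
      show (Meters d) = show d ++ " m"
It's still one deriving line per class you want to pass through, though, which is exactly the boilerplate complaint.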


Does the Rust wrap/unwrap come with any runtime cost?

I don't think it sucks at all. When implementing any type class (or trait, or interface): if your new implementation is better (more efficient in time or memory), then you should propose swapping the old one for the new at its original source location (i.e., create a merge request somewhere). If your implementation has a different output, then you should consider whether this thing should actually be a type class at all (as it seems to be arbitrary). And if your implementation is for a more specific case of the type, then making it a newtype is not only the practical thing to do; it should actually be a new type.


Wrap/unwrap is free, and methods on the newtype are typically free as well.

I totally agree with your analysis, but in practice it's not always possible to merge an implementation upstream, and that's exactly what the article is about. Say you're working with a small scientific library and you want to serialize one of its data structures, but the authors haven't provided a Serde implementation. It'd be nice if you could upstream it, but if the authors aren't responsive you're forced to use a newtype. It sounds like this differs from Haskell, which (if I understand your comment) would allow you to implement it directly on the base type (with a warning).


> If you want to interact with your newtypes, you need to either unwrap it or reimplement each typeclass/trait

...or you could just e.g. implement Deref in Rust? In my experience that solves almost all use cases (the edge case being when something wants to take ownership of the wrapped value, at which point I don't see the problem with unwrapping).


That gets us halfway there. It makes unwrapping easy, but you still need to remember to rewrap if you've implemented anything.

    use std::ops::Deref;
    
    trait Test {
        fn test(&self);
    }
    
    #[derive(Debug)]
    struct Wrap<T>(T);
    
    impl<T> Test for Wrap<T> {
        fn test(&self) {
            ()
        }
    }
    
    impl<T> Deref for Wrap<T> {
        type Target = T;
        fn deref(&self) -> &Self::Target {
            &self.0
        }
    }
    
    fn main() {
        let thing1 = Wrap(3_i32);
        let thing2 = Wrap(5_i32);
        let sum = *thing1 + *thing2;
        thing1.test();
        thing2.test();
        sum.test(); // error[E0599]: no method named `test` found for type `i32` in the current scope
    }
Also, combining Deref with a newtype that changes how the base type behaves is frowned upon. I believe that this is why #[derive(Deref)] isn't included in the standard library. See below (emphasis mine):

> So, as a simple, first-order takeaway: if the wrapper is a trivial marker, then it can implement Deref. If the wrapper's entire purpose is to manage its inner type, without modifying the extant semantics of that type, it should implement Deref. If T behaves differently than Target when Target would compile with that usage, it shouldn't implement Deref.

https://users.rust-lang.org/t/should-you-implement-deref-for...



