
> Likewise, it's not an audit trail it's a "history" or "undo".

Depends on the industry. In the one I work in, "audit trail" is a well-defined and mandatory business concern.


> forever being assaulted by Result<>. Without .unwrap() your progress would be destroyed as you'd have to immediately deal with every unhappy path.

I don't know much about Rust, but in Haskell you run a Result-returning computation in its monadic context, where continuing on success and propagating failure are taken care of by the underlying machinery (ie. the implementation of bind), eg.:

  f :: Either Error Stuff
  f = do
    foo <- getFoo ...
    bar <- getBar ...
    ...
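For example, here's a fuller, self-contained sketch (Error, getFoo and getBar are made up here) showing how the first Left short-circuits the rest of the block:

  data Error = NoFoo | NoBar deriving Show
  getFoo, getBar :: Int -> Either Error Int
  getFoo n = if n > 0  then Right n else Left NoFoo
  getBar n = if even n then Right n else Left NoBar
  g :: Either Error Int
  g = do
    foo <- getFoo 1    -- Right 1
    bar <- getBar foo  -- Left NoBar, as 1 is odd; the block stops here
    pure (foo + bar)   -- never reached, g == Left NoBar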
Does Rust have something similar?


Rust has std::Result<>, concise operators that comprehend that type, and various popular Result<> wrapper libraries that make the simple cases concise and pleasant to deal with. I love Rust error handling; it is among the best features of the language.

BUT! When dealing with nontrivial cases, particularly with asynchronous code, Result<> can get complicated. Not everything is Copy. Sometimes you have to type erase errors. That chore tends to be a bit of a hairball, as the many StackOverflow cries for help will attest. Other Result<> hairballs exist as well. For one thing, it's a bit anemic in the 'original cause' department, and sometimes you need to elaborate on it to capture more information about failures.

Coping with these cases is most comfortably done after you've nailed down your intentions and shown yourself that your code and whatever mass of dependencies you're reusing work as intended. Later, when dealing with the failures you papered over with .unwrap(), you might end up doing some refactoring. At that point, however, you are confident that your effort will ultimately work.

The brilliance of .unwrap() is that your technical debt is visible. You can spot it in the dark, as can everyone else. In my mind that's a killer feature of Rust. I can't make you write good code, but at least I stand a chance of spotting it when you don't.


That would be the ? operator, I think. It either returns the error early or continues execution with the unwrapped value.


Not OP but they meant https://hackage.haskell.org/package/base-4.16.1.0/docs/Data-...

Just as you have the normal map function for Functors (using Haskell):

  > :t fmap
  fmap :: Functor f => (a -> b) -> f a -> f b
you can have bimap for Bifunctors:

  > :t bimap
  bimap :: Bifunctor p => (a -> b) -> (c -> d) -> p a c -> p b d
which specialised to pairs is:

  > :t bimap @(,)
  bimap @(,) :: (a -> b) -> (c -> d) -> (a, c) -> (b, d)
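Either is also a Bifunctor, so the same specialisation lets you map over the failure and the success side at once:

  > :t bimap @Either
  bimap @Either :: (a -> b) -> (c -> d) -> Either a c -> Either b d
  > bimap show (+1) (Left True :: Either Bool Int)
  Left "True"
  > bimap show (+1) (Right 2 :: Either Bool Int)
  Right 3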


It's just semantics.

    data Bool = False | True
    data Maybe a = Nothing | Just a
Bool is a nullary type constructor, or simply a type. False and True are nullary data constructors, or simply constants. Maybe is a unary type constructor taking one parameter to construct a fully saturated type (eg. Maybe Int), but calling it a "Maybe type" is ok, no crazy ambiguity. Nothing is a constant and Just is a unary data constructor taking one parameter to construct fully saturated data (eg. Just 1).
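You can see all this in GHCi; note that Just is literally a function from a to Maybe a:

    λ> :t True
    True :: Bool
    λ> :t Nothing
    Nothing :: Maybe a
    λ> :t Just
    Just :: a -> Maybe a
    λ> :t Just 1
    Just 1 :: Num a => Maybe a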


And in the context of Haskell, non-nullary type constructors are still types. They don't happen to be the types of any values, but they can still parameterize types, be constrained by type equality (eg. a ~ Maybe), etc.
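A contrived but legal sketch of the latter (the ~ syntax needs the GADTs or TypeFamilies extension):

    -- the equality constraint pins f, of kind Type -> Type, to Maybe
    toList :: (f ~ Maybe) => f a -> [a]
    toList = maybe [] (: [])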


So in essence I could think of types as functions (possibly nullary) from types to types?

Bool = sum(False, True)

Maybe = λ a . sum(Nothing, Just(a))


> So in essence I could think of types as functions (possibly nullary) from types to types?

Type constructors, yes. In Haskell the types of type constructors are called kinds:

    λ> :kind Bool
    Bool :: Type
    λ> :kind Maybe
    Maybe :: Type -> Type
    λ> :kind Maybe Int
    Maybe Int :: Type
    λ> :kind Either
    Either :: Type -> Type -> Type
    λ> :kind Eq
    Eq :: Type -> Constraint
    λ> :kind Functor
    Functor :: (Type -> Type) -> Constraint
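Partial application works at the type level too; eg. Either String has kind Type -> Type, so it can be the f in Functor f:

    λ> :kind Either String
    Either String :: Type -> Type
    λ> fmap (+1) (Right 2 :: Either String Int)
    Right 3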


Sum types are one of those things missing from most mainstream languages; people don't even realise they are deprived of something fundamental. They naturally complement product types, which are ubiquitous in most/all languages.


Specifically, closed sum types. You can usually get open sum types by way of inheritance.
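A minimal Haskell example of the closed variety: the compiler knows every constructor, so it can check pattern matches for exhaustiveness, and no third case can be added from outside the module:

    data Shape = Circle Double | Rect Double Double
    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h
    -- omitting the Rect case would trigger -Wincomplete-patterns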


True that! It was eye-opening once I understood it! Here's a basic type system I made in Python for a small smart contract language compiler I'm working on. It works with mypy pretty flawlessly! [1] I still haven't figured out how to prevent subclassing with methods that don't comply with a specific type signature, but that's a story for another day...

[1] https://yourlabs.io/pyratzlabs/pymich/-/blob/master/pymich/m...


C++17 has std::variant<> and it's ubiquitous in my code. Catching your type errors at compile time is no longer something you have to give up after doing your prototyping in Haskell.


It is very clunky though. There is no pattern matching support: you need to write non-trivial visitors to do any meaningful work, and it takes the fun out of it. IMO, Rust gets it right and borrows a lot of Haskell's ergonomics.


It's easy to combine a list of per-type visitors, perhaps with an "auto" catch-all, into a single visitor for use with visit(), though. Not as fun as Haskell but a real improvement over C++14 and good enough for sum types in production C++.

See struct visitors in https://github.com/llvm/llvm-project/blob/main/flang/include...


FreePascal supports them, via variant records. A nice gain in developer-friendliness over C.


In the context of Haskell "ad-hoc" means ad-hoc polymorphism.

See https://wiki.haskell.org/Polymorphism for details on parametric (ie. unconstrained) vs. ad-hoc (ie. constrained) polymorphism. In short the difference is that ad-hoc is parametric + one or more type class constraints. Eg. in:

  λ> :t fmap
  fmap :: Functor f => (a -> b) -> f a -> f b
the type variables a and b are unconstrained while f is constrained by the Functor type class.
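To make the contrast concrete (function names made up):

  -- parametric: one uniform implementation for every a and b
  swap :: (a, b) -> (b, a)
  swap (x, y) = (y, x)
  -- ad-hoc: only for types with an Eq instance, and the
  -- behaviour depends on which instance is picked
  allEqual :: Eq a => [a] -> Bool
  allEqual xs = and (zipWith (==) xs (drop 1 xs))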


For readers wondering why this is, there are a few things at play here. I preface my explanation by saying that the benefits of type classes vastly outweigh issues like the above example. Also, if I put the above example in my work project I immediately get a warning about the exact issue, so it's not like it is a footgun or anything like that. Anyhow:

0. Integer is arbitrary precision while Int is bounded and machine-dependent (eg. 32 or 64 bit). They are both instances of the Num type class as well as the Integral type class, as we'll see later.

1. Numbers without explicit type signatures are overloaded (aka. constrained polymorphic):

  λ> :t 1
  1 :: Num p => p
where p can be any type with a Num instance, like Integer or Int.
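ie. the very same literal can land at different concrete types depending on context:

  λ> (1 :: Int, 1 :: Integer, 1 :: Double)
  (1,1,1.0)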

2. As per https://www.haskell.org/onlinereport/decls.html#sect4.3.4 we have

  default (Integer, Double)
as concrete types for numbers to default to in expressions without explicit type signatures to guide inference.

3. The type of the list index operator is:

  λ> :t (!!)
  (!!) :: [a] -> Int -> a
where the index is a concrete Int type.
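so an index literal is pinned to Int by unification and defaulting never enters the picture:

  λ> [False,True] !! 1  -- this 1 is inferred as Int, not defaulted
  True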

Right, so in the above example if we check the type of 7^7^7`mod`5`mod`2

  λ> :t 7^7^7`mod`5`mod`2
  (7^7^7`mod`5`mod`2) :: Integral a => a
it is still overloaded (Integral), ie. it can be either Integer or Int. Now in the first case there's nothing to concretise the type, so we default to Integer as per the defaulting rule. In the second case the usage of (!!) concretises the type to Int. As 7^7^7 is big, it does not fit in an Int (it overflows). Compare:

  λ> 7^7^7`mod`5 :: Integer
  3
  λ> 7^7^7`mod`5 :: Int
  2
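For scale, on a 64-bit machine:

  λ> maxBound :: Int
  9223372036854775807
  λ> 7^7^7 > toInteger (maxBound :: Int)
  True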
The mystery is now solved. Side note: if we do

  default ()
to prevent GHC defaulting we'll get a type error and will be forced to specify a type. We can also say:

  λ> ( (7^7^7`mod`5`mod`2)==1, [False,True]!!fromInteger((7^7^7`mod`5`mod`2)) )
  (True,True)


Just to preserve the full message :)

"Something’s gone awry and we’re having trouble loading your workspace. Sorry we can’t be more specific - this is one of those cases where we don’t know what’s gone wrong either. A restart of Slack might help, and you can always contact us."



> We do experience reality directly: our nerve endings do.

Depending on how you define "reality", we only experience a tiny subset of it. We can only see/hear/touch/smell/taste a very restricted universe, only what our "antennas" can detect. In fact I would argue that we experience close to 0% of all of reality. On top of this extremely restricted input, our processing of it is imperfect, which means some of what we experience has nothing to do with reality.


Restricted as it may be, we experience reality. Experiencing reality completely is impossible anyway.

And in the context of hallucination, what we can't experience doesn't matter.


Sure, I don't disagree, but (to me anyway) it's important to point out the difference between reality and our potential intake of it. The blind men and the elephant parable applies.

