
Haskell is absolutely fantastic. In C, when I'm given only the prototype of a function like:

> int foo(int, int);

Even if you tell me it is referentially transparent, I have no idea what that bloody function really does.

But with HASKELL, with just the type signature you can write a program that does what you want even though you don't know what the components you used do! Mind/Blown.




I love Haskell, but I think people sometimes give it more credit than it deserves. So you say

> int foo(int, int);

Doesn't tell you anything. Ok, you are right! But that code translated to Haskell would be:

> foo :: Int -> Int -> Int

How does that give you more information about foo than the C code above?

Now if you say you use type synonyms in Haskell, I would agree, but you can use typedefs in C as well.
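
To illustrate (a small sketch; Celsius is a made-up name): a type synonym, like a typedef, documents intent but adds no checking:

    type Celsius = Int

    -- The compiler treats Celsius as plain Int, so this signature still
    -- tells the reader nothing checked about what foo computes:
    foo :: Celsius -> Celsius -> Celsius
    foo x y = x + y   -- hypothetical body; any Int arithmetic typechecks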


Well, it does tell you that there are no IO computations with side effects. So you really are restricted to the Int type and the operations that can be performed on it.

You get much stronger statements by using type variables (not type synonyms). This is laid out rather clearly in Theorems for Free! by Wadler [1]. The essence is that the more generic a type signature gets, the more you can reason about its behavior: if you want to perform (+), (-), (*), or (/), for example, you'll need to include a Num constraint.

[1] http://ttic.uchicago.edu/~dreyer/course/papers/wadler.pdf
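
A minimal sketch of that (hypothetical names):

    -- No constraints: parametricity says this must return its argument.
    same :: a -> a
    same x = x

    -- A Num constraint is required before any arithmetic is possible,
    -- so the constraint itself hints at what the function may do:
    combine :: Num a => a -> a -> a
    combine x y = x * y + 1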


> So you really are restricted to the Int type and the operations that can be performed on it.

That's so vague it's almost meaningless, and doesn't support the original assertion, which was "But with HASKELL, with just the type signature you can write a program that does what you want even though you don't know what the components you used do!"

Even given a simple thing like foo :: Int -> Int -> Int, there's really no way to tell what it's doing. It could be multiplying the arguments together, dividing, adding, computing pseudo-random numbers using two seed values, ...

Not only that, but I'm not sure it's true that it's limited to the Int type because there are things like System.IO.Unsafe.


> Even given a simple thing like foo :: Int -> Int -> Int, there's really no way to tell what it's doing.

Ahh, this is an important point. The reason you can't tell what it's doing is that the type is quite concrete. One really cool thing about this kind of type system is that the more generic a function's type, the more you can tell about what it does. Consider this function:

    foo :: (a, b) -> a

There's only one possible thing this function can do! And there's only one possible implementation: the correct one. This is the case because the types involved are completely general. Type signatures can communicate a lot more than one might think.
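
Concretely, that one implementation is just the first projection (i.e. the standard fst), modulo bottom:

    foo :: (a, b) -> a
    foo (x, _) = x   -- nothing else typechecks for every a and b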


That's all well and good with a trivial function like foo :: (a,b) -> a, but for most non-trivial functions it won't be so obvious, and that's exactly when it's most helpful to know what's going on.


a.) Many functions in Haskell are this trivial: map, fold, etc.

b.) Functions that are non-trivial will carry additional constraints, often hinting at exactly what the function does.
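
For instance (made-up names, but representative signatures):

    import Data.List (nub, sort)

    dedupe :: Eq a => [a] -> [a]    -- Eq: can only compare for equality
    dedupe = nub

    ordered :: Ord a => [a] -> [a]  -- Ord: can only reorder by comparison
    ordered = sort

    total :: Num a => [a] -> a      -- Num: can only combine arithmetically
    total = sum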

What matters most is not that you can write poor Haskell code which doesn't exhibit this self-documenting behavior, but that if you choose to exploit the type system, it has the expressiveness to allow you to do so (and to enforce it)!

If you are working with a Haskell library and see a function with the signature (IO Int) you may start to question your choice of library :)

The only problem is, once you get a taste of this power it is easy to yearn for even more, such as dependent types (e.g. Agda, Idris, ...).


> foo :: (a, b) -> a

> There's only one possible thing this function can do!

Ah, so it scales a vector by a scalar returning a new vector, right?


No, it is impossible to write the function you described at that type. There is no function "scale" that works polymorphically over every type 'a'.

The parent you are responding to is quoting a proven result of computer science.


But what percentage of "real world" functions work on completely generic types? In reality most useful functions take more specific types, and those types allow many more operations to be performed on them. In general, it's not really possible to tell what a function does just by looking at the type signature.


You aren't forced to go straight from fully generic types to specific types. That is the entire point of type class constraints, which give you fine-grained specification of behavior.


Define "real world functions" :)


foo = undefined

And all its brothers :)


> There's only one possible thing this function can do!

That's not at all true... it can do all sorts of transformations on the first value of the tuple. I suppose without type restrictions on the parameters it can't do too much because there are relatively few functions defined on all types, but then, how often are you going to see a function signature that vague?

More likely, you'd see

`foo :: MyClass a => (a, b) -> a`

or

`foo :: (MyType, b) -> MyType`

which opens up the possibilities considerably, because you don't know which of MyType's behaviors is being invoked.
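
A sketch of what that looks like (MyClass and tweak are invented here):

    class MyClass a where
      tweak :: a -> a

    -- Now there are many possible implementations, because foo may
    -- apply tweak to the first component any number of times:
    foo :: MyClass a => (a, b) -> a
    foo (x, _) = tweak (tweak x)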


That's exactly it: (a) there are no functions at all defined on all types except id (of type a -> a), and (b) you end up seeing signatures close to that vague all the time. This is precisely the power of parametricity.


> [...] you end up seeing signatures close to that vague all the time.

That's not an accident. Factoring out little helper functions like this is considered a virtue in Haskell (and according to Paul Graham, also in Lisp).

Even if you only use them once, because they make reasoning easier.


Agreed! I think the tendency toward highly parametric functions comes from several sources. First, good programming practice calls for decomposing entities, which leads to more flexible functions. Second, once you have higher-kinded types, you realize that 90% of programming is wiring between contexts and is more parametric than people think. Finally, HM typing auto-generalizes all of your let bindings, which means your compiler can inform you of opportunities for genericity that you weren't even aware of... so long as you ask.
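
For example, a hypothetical GHCi session, where the inferred type turns out fully generic:

    ghci> let swap (x, y) = (y, x)
    ghci> :t swap
    swap :: (a, b) -> (b, a)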


In my mind what I wrote was obviously sarcastic, but apparently it wasn't. Poe's law strikes again I guess.


My C functions look more like

    thing_t *find_thing(thing_index_t, thing_t *things);

where my indices are wrapped in a single-element struct so that I get some type safety. You can write bad code in any language.

The Haskell things that I miss most in C are the parametric polymorphism and the structural pattern matching. Even so, I've been able to encode some surprising invariants in my C types (occasionally with a little bit of manual help)...
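
For anyone who hasn't seen it, the structural pattern matching in question looks something like this toy Haskell example:

    -- the shape of the data drives the code:
    describe :: Maybe (Int, Int) -> String
    describe Nothing        = "nothing there"
    describe (Just (0, y))  = "starts at zero, ends at " ++ show y
    describe (Just (x, y))  = "spans " ++ show (y - x)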


It really is awesome! If you see a value with this type:

  (bool -> bool)

You know it has only four possible implementations, only four things it could possibly do: constantly false, constantly true, not, or id. (Ignoring infinite recursion and exceptions.)
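
Spelled out in Haskell notation (a quick sketch):

    -- the four total functions of type Bool -> Bool:
    alwaysFalse, alwaysTrue, negation, same :: Bool -> Bool
    alwaysFalse _ = False
    alwaysTrue  _ = True
    negation    b = not b
    same        b = b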

The list functions are a rich place to explain this to people.

  (a list -> (a -> b) -> b list)

If a declared function has this signature, there's only one reasonable thing it is: map. (It could apply the function to the last element only, or return an empty b list, but those aren't so reasonable.)
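
In Haskell notation, with the arguments in that order, a sketch of the reasonable reading:

    -- apply the function to every element
    mapish :: [a] -> (a -> b) -> [b]
    mapish xs f = map f xs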

What about this?

  (a list -> (a -> bool) -> a)

It'd seem it returns the first item in the list that matches the predicate. Possibly the last such item. (And raises an exception if the list is empty or no item is found.)
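
A sketch of that reading (findFirst is a made-up name):

    -- first item matching the predicate; partial, as noted above:
    -- head raises an exception on an empty or match-free list
    findFirst :: [a] -> (a -> Bool) -> a
    findFirst xs p = head (filter p xs)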

Other people aren't as impressed by this as I am, though.


Yes, but those are trivial examples. What about a math library? The type signature isn't going to tell you which trig function to apply.

I'm a fan of static typing, but this sort of guessing based on type signatures seems like a bad thing, actually. It's like when people use autocomplete in their IDE and pick anything that looks like it might work, instead of reading the documentation (and maybe the source) to understand what they're using. It's understandable with beginners, but for people who should know better it's willful blindness.


It reminds me of dimensional analysis in physics. That works really well when the number of possibilities is low and the system as a whole is contained enough and well-constructed enough that there aren't many pitfalls.

But everyone knows physicists can't manage factors of 2 and pi. :-)


I'm not suggesting that you decide "which function to use" based on a type signature, but that the type signature provides a lot of useful constraints to help you reason about things. And those examples show how, in some cases, the type signature alone determines exactly what the function can do.

The examples show the difference from non-pure languages or functions, where "someList.Bla()" returns "void" and might have done anything to the list, from removing all items, to reversing them, to calling Bla() on each element inside (which in turn might do who-knows-what).

True though: without pure functions, the benefits aren't all there.
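
For comparison, hypothetical signatures on both sides of that line:

    -- a pure signature promises no hidden effects:
    bla :: [Int] -> [Int]
    bla = reverse                  -- whatever it does, it can only build a new list

    -- an effectful signature promises nothing at all:
    blaIO :: [Int] -> IO ()
    blaIO xs = print (length xs)   -- could just as well be deleting files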


It actually can, using generic type signatures with constraints. Once you use a concrete type like Int, you severely limit the expressiveness of the types, which is fine in some scenarios.



