
> You really don't sacrifice anything by using it

What's "it" - Haskell, or referential transparency? Referential transparency definitely has its victims, and debugging is one of them. Debug.Trace is quite useful, and also violates referential transparency. That Haskell provides it is an admission that strict R.T. is unworkable.

> If I'm writing pure code, then order of execution is irrelevant. It simply does not matter. If I'm writing impure code, then I encode order of execution by writing imperative (looking) code using do-notation, and it looks and works just like it would in any imperative language.

Baloney! Haskell's laziness makes the order of execution highly counter-intuitive. Consider:

    import Data.Time.Clock

    main = do
        start <- getCurrentTime
        fact  <- return $ product [1..50000]
        end   <- getCurrentTime
        putStrLn $ "Computed product " ++ show fact ++
                   " in " ++ show (diffUTCTime end start) ++ " seconds"

This program appears to time a computation of 50000 factorial, but in fact it will always output some absurdly short time. This is because the true order of execution diverges greatly from what the program specifies in the do-notation. This has nothing to do with purity; it's a consequence of laziness.

> Turns out that laziness, immutability, and referential transparency really helped this particular case

I don't buy it. In particular, laziness is almost always a performance loss, which is why a big part of optimizing Haskell programs is defeating laziness by inserting strictness annotations.
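
The usual shape of that fix, as a sketch (the function name is mine):

    {-# LANGUAGE BangPatterns #-}

    -- Without the bangs, the accumulator builds up a chain of unevaluated
    -- thunks; forcing it at each step keeps the loop in constant space.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go !acc []     = acc
        go !acc (x:xs) = go (acc + x) xs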

> Laziness meant that a naively written algorithm was able to stream the data from disk and process it concurrently without blocking

This would seem to imply that Haskell will "read ahead" from a file. Haskell does not do that.

> Immutability meant that the Haskell version could extract sections of the buffer for processing just by returning a new ByteString pointer. Other versions needed to copy the entire section into a new buffer

Haskell returns a new pointer to a buffer, while other versions need to copy into a new buffer? This is nonsense.

Like laziness, immutability is almost always a performance loss. This is why GHC attempts to extract mutable values from immutable expressions, e.g. transforming a recursive algorithm into an iterative one that modifies an accumulator. This is also why tail recursive functions are faster than non-tail-recursive functions!
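
Roughly the transformation I mean, as a sketch (function names are mine):

    -- Not tail recursive: the multiplication happens after the recursive
    -- call returns, so the pending work grows with n.
    factNaive :: Integer -> Integer
    factNaive 0 = 1
    factNaive n = n * factNaive (n - 1)

    -- Tail recursive with an accumulator: the recursive call is the entire
    -- result, so it compiles to a loop; seq keeps the accumulator evaluated.
    factIter :: Integer -> Integer
    factIter = go 1
      where
        go acc 0 = acc
        go acc n = let acc' = acc * n in acc' `seq` go acc' (n - 1)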

> Referential transparency meant that we could trivially run this over multiple cores without additional work

It is not especially difficult to write a referentially transparent function in C. Haskell gives you more confidence that you have done it right, but that measures correctness, not performance.

Standard C knows nothing of threads, while Haskell has some nice tools to take advantage of multiple threads. So this is definitely a point for Haskell, compared to standard C. But introduce any modern threading support (like GCD, Intel's TBB, etc.), and the comparison becomes more even.

When it comes to parallelization, it's all about tuning. Haskell gets you part of the way there, but you need more control to achieve the maximum performance that your hardware is capable of. In that sense, Haskell is something like Matlab: a powerful prototyping tool, but you'll run into its limits.




"This is because the true order of execution diverges greatly from what the program specifies in the do-notation. This has nothing to do with purity; it's a consequence of laziness."

Of course, that's not what the do notation specifies, but I agree that's somewhat subtle. As you say, it's a consequence of laziness. Replacing "return" with "evaluate" fixes this particular example.
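
Concretely, keeping the shape of the example upthread (evaluate comes from Control.Exception):

    import Control.Exception (evaluate)
    import Data.Time.Clock

    main = do
        start <- getCurrentTime
        -- evaluate forces the product to WHNF here, inside the IO sequence,
        -- so the two timestamps actually bracket the computation
        fact  <- evaluate $ product [1..50000]
        end   <- getCurrentTime
        putStrLn $ "Computed product " ++ show fact ++
                   " in " ++ show (diffUTCTime end start)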

In general, if you care about when some particular thing is evaluated - and for non-IO values you usually don't - an IO action that you're sequencing needs to depend upon it. That can either be because looking at the thing determines which IO action is used, or the dependency can be added artificially by means of seq (or deepseq, if WHNF isn't enough).
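
For instance, the artificial dependency can be as small as this (a sketch of the same timing example):

    import Data.Time.Clock

    main = do
        start <- getCurrentTime
        let fact = product [1..50000]
        -- the IO action now depends on fact: seq forces it (to WHNF)
        -- before the second timestamp is read
        end <- fact `seq` getCurrentTime
        putStrLn $ show fact ++ " took " ++ show (diffUTCTime end start)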


>That Haskell provides it is an admission that strict R.T. is unworkable.

Perhaps it is, but that doesn't mean it's not immensely valuable as a default. And it's worth noting that in the case of Debug.Trace, the actual program is still referentially transparent, it's just the debugging tools that break the rules, as they often do.

>Haskell's laziness makes the order of execution highly counter-intuitive.

Yes, there are some use cases where do-notation doesn't capture all the side effects (e.g. time and memory use) and so a completely naive imperative perspective breaks down. But these cases are rare, and it's not that hard to learn to deal with them.


First up - I'll preface my reply below with a big disclaimer that I'm a relative novice with Haskell, so these are purely my opinions at this point in my learning curve.

> What's "it" - Haskell, or referential transparency? Referential transparency definitely has its victims, and debugging is one of them. Debug.Trace is quite useful, and also violates referential transparency. That Haskell provides it is an admission that strict R.T. is unworkable.

I'd disagree that this is any real attack on the merits of referential transparency, since Debug.Trace is not part of application code. It violates referential transparency in the same way an external debugger would: it's an out-of-band debugging tool that doesn't make it into production.

> Baloney! Haskell's laziness makes the order of execution highly counter-intuitive. Consider

I wouldn't say it makes order of execution highly counter-intuitive, and your above example is pretty intuitive to me. But expanding on it: time and space complexity can be very difficult to reason about - so I'll concede that as a broader version of your point.

> Haskell returns a new pointer to a buffer, while other versions need to copy into a new buffer? This is nonsense.

C uses null-terminated strings, so in order to extract a substring it must be copied. It also has mutable strings, so standard library functions would need to copy even if the string were properly bounded.

Java uses bounded strings, but still doesn't share characters: if you extract a substring, you get another copy in memory.

Haskell, using the default ByteString implementation, can do a 'substring' in O(1) time. This alone was probably a large part of the reason Haskell came out ahead - it wasn't computing faster, it was doing less.

Obviously in Java and C you could write logic around byte arrays directly, but this point was for a naive implementation, not a tuned version.
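
To make the sharing point concrete, a minimal sketch (the slice sizes are made up):

    import qualified Data.ByteString as BS

    -- take and drop on a strict ByteString are O(1): they return new
    -- offset/length views onto the same underlying buffer, with no copying.
    -- That is only safe because the bytes are immutable.
    middle :: BS.ByteString -> BS.ByteString
    middle bs = BS.take 100 (BS.drop 50 bs)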

> This would seem to imply that Haskell will "read ahead" from a file. Haskell does not do that

It would seem counter-intuitive that the standard library would read one byte at a time. I would put money on the standard file operations buffering more data than needed - and if they didn't, the OS absolutely would.
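
For what it's worth, a sketch of the kind of chunked reading I mean (the file name is just a placeholder):

    import qualified Data.ByteString.Lazy as BL

    main :: IO ()
    main = do
        -- lazy ByteString IO reads the file in chunks on demand (the default
        -- chunk size is on the order of 32 KB), not one byte at a time; the
        -- OS will typically read ahead underneath that as well
        contents <- BL.readFile "input.dat"
        print (BL.length contents)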

> Like laziness, immutability is almost always a performance loss.

On immutability -

In a write-heavy algorithm, absolutely. Even Haskell provides mutable data structures for this very reason.

But in a read-heavy algorithm (such as my example above), immutability allows us to make assumptions about the data - such as the fact that it'll never change. This means that the standard platform library can, for example, implement substring in O(1) time instead of having to make a defensive copy of the relevant data (lest something else modify it).

On Laziness -

I'm still relatively fresh to getting my head around laziness, so take this with a grain of salt. But my understanding, from what I've been told and from some personal experience:

In completely CPU-bound code, laziness is likely going to be a slowdown. But laziness can also make it easier to write code in ways that would be difficult in strict languages, which can lead to faster algorithms with the same effort. In this particular example, it was much easier to write this code using streaming, non-blocking IO than it would be in C.

> It is not especially difficult to write a referentially transparent function in C. Haskell gives you more confidence that you have done it right, but that measures correctness, not performance.

Except that GHC can do some clever optimizations with referential transparency that a C compiler (probably) wouldn't - such as running naively written code over multiple cores.
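
For example, with the parallel package it's roughly a one-line annotation (a sketch; the worker function and inputs are placeholders):

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- because expensive is pure, evaluating the elements in parallel cannot
    -- change the result; compile with -threaded and run with +RTS -N
    expensive :: Int -> Int
    expensive n = sum [1 .. n]

    main :: IO ()
    main = print (parMap rdeepseq expensive [1000000, 2000000, 3000000])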

> When it comes to parallelization, it's all about tuning. Haskell gets you part of the way there, but you need more control to achieve the maximum performance that your hardware is capable of. In that sense, Haskell is something like Matlab: a powerful prototyping tool, but you'll run into its limits.

I completely agree. If you need bare-metal performance, then carefully crafted C is likely to still be the king of the hill for a very long time. Haskell won't even come close.

But in day to day code, we tend to not micro-optimize everything. We tend to just write the most straightforward code and leave it at that. Haskell, from my experience so far, for the kinds of workloads I'm giving it (IO-bound CRUD apps, mostly), tends to provide surprisingly performant code under these conditions. I'm under no illusion that it would come close to C if it came down to finely tuning something, however.


A really great rebuttal of his points. I like Haskell, I really do - but I can never get any useful work done with it. (Note: I am a hobbyist and not a professional programmer)


It's not a great rebuttal; it just shows that people with imperative mindsets still don't really understand Haskell. The rebuttal to the rebuttal is good.

Do-notation is not meant to be a line-by-line imperative construct, and complaining that it isn't one doesn't make it bad. Obviously, the goal in Haskell isn't to do imperative coding per se. It remains true that you can effectively write imperative-style code in Haskell in various ways.



