
Haskell's rhetoric has always seemed a bit Pharisaical to me: "Though the runtime is impure, Our Code is uncontaminated by the taint of side effects".



Perhaps the best way to think of it is that a Haskell program is a pure function from FFI outputs to FFI inputs. Something like

    H out in = out -> (FFIOperation, in)
The `IO` monad is nothing more than what operating "inside of this function" feels like. The runtime then runs the pure Haskell program, evaluating its FFI demands and returning their results.

The AST POV espoused by this article is quite good as well, but it's a little less obvious how to "step" things forward or operate nicely in parallel contexts.

Also, in reality the above perspective is a good way to embed Haskell into other contexts.

http://comonad.com/reader/2011/free-monads-for-less-3/
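A minimal sketch of that "program as a pure value, runtime as interpreter" view (my own illustration, not taken from the linked post; the Program type and the run interpreter are invented names):

    -- A pure Haskell value describing FFI requests; an impure interpreter runs it.
    data Program a
      = Done a
      | GetLine (String -> Program a)   -- ask the runtime for a line of input
      | PutLine String (Program a)      -- ask the runtime to print a line

    echo :: Program ()
    echo = GetLine (\s -> PutLine s (Done ()))

    -- The "runtime": the only place real side effects happen.
    run :: Program a -> IO a
    run (Done a)      = pure a
    run (GetLine k)   = getLine >>= run . k
    run (PutLine s p) = putStrLn s >> run p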


The way I think of it is that an IO value is a program where all the Haskell code runs in callbacks. This is a bit like a JavaScript program where everything runs in a callback. The major difference is that callbacks written in Haskell are constrained to be pure functions.

The IO type is rather rigid in that there is always a next callback, which implicitly contains the entire program state as part of the function closure. In a reactive programming language like Elm you can have many functions that may be the next callback depending on what the next event is, along with all the callbacks that run as part of the signal graph. Purity is about how constrained the callbacks are, not about the overall structure of the program.
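A rough model of that (the World token and IO' type below are illustrative only; GHC's real IO threads an unboxed State# RealWorld token instead):

    data World = World   -- opaque stand-in for "the state of the outside world"

    newtype IO' a = IO' (World -> (a, World))

    bindIO' :: IO' a -> (a -> IO' b) -> IO' b
    bindIO' (IO' m) k = IO' $ \w ->
      let (a, w') = m w          -- run the first action against the world token
          IO' next = k a         -- the "next callback" receives the result...
      in next w'                 -- ...and the threaded-through world state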


Not really. The point of Haskell is not to avoid having side effects. The point of Haskell is to allow code to be referentially transparent - this makes it both easier to reason about as a developer, and easier for the runtime to optimise.


Yes, that is the altar upon which you sacrifice the ability to write print statements to do debugging, and lose the ability to reason about order of execution. But does it really result in more performant code? In every benchmark I've ever seen, more practical languages like OCaml have come out on top.


Haskell has a slight, but consistent, edge over OCaml in most of the "benchmarks game" tests, so it's certainly not true to say that OCaml beats Haskell in all benchmarks (although there could certainly be other benchmarks where it does). In either case, Haskell is very performant and competitive with any other mainstream language.

You can write print statements to do debugging (with Debug.Trace), and in practice it's not very hard to work IO into your code when you need it (even if only for temporary debugging or development). Crucially, however, it's much harder to accidentally work IO into your code. The few cases where I really miss print statements "for free" are vastly outweighed by the many cases in impure languages where I'm accidentally mismanaging my mutable state.

Whether it results in more performant code? In some cases yes (the restrictions make it much easier to prove certain compiler optimizations), but that's not really the point. Referential transparency is about making your code more expressive, and easier to reason about, to design, and to safely tweak.


Haskell performance is very good when written by people who know how the compiler works, and know the bytecode they want generated. I.e., if you rewrite a recursive function in a slightly unintuitive way and apply the right strictness annotations, it will compile down to the same bytecode as a for-loop in C.

Idiomatic Haskell is not generally as fast as mutable C/Java/etc. Creating/evaluating thunks is not fast and immutable data structures often result in excess object creation. When you need them, there is no real substitute for unboxed mutable arrays, something Haskell does NOT make easy.

Haskell is one of my favorite languages, the performance story just isn't quite what I want it to be. I do, however, think that there is plenty of room for improvement, i.e. there is no principled reason Haskell can't compete.


This is exactly where I'm at. My biggest problem is that wrapping non-persistent data structures written in C/C++ never seems to come out right in Haskell. You often have to write them in the IO monad, which is the absolute last thing you want for an otherwise general purpose data structure. I think there may be some solution here using linear types, which enforce at compile time that a value is referenced only once. This would let you avoid being forced to guarantee persistence when all you care about is speed.

This argument may seem more abstract than what you mention, but in fact it gets to the very heart of why there aren't good unboxed mutable arrays in Haskell. In truth, there are: you can convert immutable Vectors (which are arrays with O(1) indexing but no mutation) into mutable Vectors in constant time using unsafeThaw. The problem is that your code is no longer persistent, and you've risked introducing subtle errors. My biggest problem is that the Haskell community seems to look at non-persistent data structures as sacrilegious. As a scientific programmer, that makes me feel like maybe learning Haskell wasn't such a good investment after all. But on the bright side, functional programming is on the rise, and I'm confident that all my experience with Haskell will transfer well in the future.
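For what it's worth, a small sketch of the thaw/mutate/freeze pattern with the vector package (using the safe, copying thaw; unsafeThaw skips the copy at the cost of the aliasing risk described above):

    import Control.Monad.ST (runST)
    import qualified Data.Vector.Unboxed as V
    import qualified Data.Vector.Unboxed.Mutable as MV

    -- square every element via in-place mutation, then return an immutable vector
    squareAll :: V.Vector Double -> V.Vector Double
    squareAll v = runST (do
        mv <- V.thaw v    -- O(n) defensive copy; V.unsafeThaw is O(1) but aliases v
        mapM_ (MV.modify mv (^ 2)) [0 .. MV.length mv - 1]
        V.unsafeFreeze mv)    -- safe here: mv is never touched again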


Depends a lot on the libraries, too. I had to scrape a bunch of HTML recently, which I prefer to use XPath for; the library I used -- HXT, if I remember correctly, it was the horrible one that uses arrows -- made my program perform on par with Ruby, and when I benchmarked it, I found it was allocating about 2GB of data throughout the program, while parsing a document that was probably around 100KB.


I believe HXT uses the default representation of strings as lists of chars, instead of more efficient packed representations. This likely contributes to the excessive memory usage.


Sure. As a decidedly unseasoned Haskell user, however, it's hard to sympathize with inefficient libraries for something as established as XML.

There may be other, faster libs that I don't know about, but I couldn't find them. I tried HaXml first (from which HXT is apparently derived), but the parser choked on my document and the author didn't come forward with a fix when I reported the problem (by email, the project isn't on Github). There is one called HXML, but I think it's dead. The TagSoup library might have worked, but I don't think so. It's not easy jumping into a new language and then coming up against library issues that prevent you from finishing your first project.


The "String problem" is definitely one of the most unfortunate parts of Haskell. Using a linked list of chars for a string is just laughable from a performance and resources standpoint. The good news is that the problem should be solved now: we have Data.Text for unicode strings, and Data.ByteString for binary/ASCII/UTF-8 strings. Both are very efficient and implement a robust API for common string operations. The bad news is that there are still far too many libraries that use the old crummy data type for strings, including much of the Prelude. And, I guess in the interest of simplicity, many beginner tutorials tend to use String as well. This is quite unfortunate, but it does seem to be changing: Aeson uses Data.Text, the ClassyPrelude ditches String almost entirely (keeping it for Show only), and in general most modern libraries avoid String.

Hopefully HXT will be updated to use modern string types soon. In the meantime, I believe that xml-conduit (http://hackage.haskell.org/package/xml-conduit) might be what is desired.


Why don't you think TagSoup would have worked? I've used it for quite a few use cases.

edit: Then to make things dead simple, add on dom-selector:

http://hackage.haskell.org/package/dom-selector

It enables using css selectors like so:

    queryT [jq| h2 span.titletext |] root


The project wasn't that recent, so I don't quite remember, but I would have wanted something like dom-selector, and that one didn't come up in my searches for solutions.

It's interesting that XML libs have to invent operators and obnoxious syntax (like HXT's arrow usage, or coincidentally the fact that HXT's parser uses the IO type, which is just crazy talk). dom-selector seems to have the same problem. I prefer readable functions, not DSLs where my code suddenly descends into this magic bizarro-world of operator soup for a moment.

Lenses would make tree-based extraction easier, I think, although lenses aren't easy to understand or that easy to read. Tree traversal with lenses and zippers seems unnecessarily complicated to me.

In a scraper you just want to collect items recursively, and return empty/Nothing values for anything that fails a match: Collect every item that contains a <div class="h-sku productinfo">, map its h2 to a title and its <div class="price"> to a price, and then combine those two fields into a record. It's something that should result in eminently readable code, not just because it's a conceptually trivial task, but also because someday you need to go back to the code and remember how it works.
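A rough sketch of what that could look like with TagSoup (the class names are simplified and the helper is made up, so treat it as an illustration rather than working scraper code):

    import Text.HTML.TagSoup

    data Item = Item { title :: String, price :: String } deriving Show

    -- collect each product block, pulling out a title and a price;
    -- anything that fails to match just comes back as ""
    items :: String -> [Item]
    items html =
      [ Item (textAfter "<h2>" block) (textAfter "<div class=price>" block)
      | block <- sections (~== "<div class=h-sku>") (parseTags html) ]
      where
        -- text of the tag immediately following the match, or "" if absent
        textAfter tag = innerText . take 2 . dropWhile (~/= tag)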


> I prefer readable functions, not DSLs where my code suddenly descends into this magic bizarro-world of operator soup for a moment.

Bizarro world of operator soup? I don't really follow you. That dom selector code just compiles down into functions itself. I don't see how anything could be any clearer than a css selector for selecting an html element.


> performance is very good when written by people who know how the compiler works, and know the bytecode they want generated.

Yes.


The old situation with list processing, in which the decision to fold from the left or from the right can make a big performance difference, might be the fundamental example of this kind of problem. It is enough to make me think twice about the wisdom of defining lists recursively. It definitely doesn't feel "declarative", an attribute that is surely more important than elegant simplicity of implementation.
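For instance (a standard illustration, my own example):

    import Data.List (foldl')

    -- foldl builds a long chain of (+) thunks before anything is added;
    -- foldl' forces the accumulator at each step and runs in constant space
    sumLazy, sumStrict :: [Int] -> Int
    sumLazy   = foldl  (+) 0
    sumStrict = foldl' (+) 0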


>Haskell performance is very good when written by people who know how the compiler works

I know nothing about how the compiler works, and my haskell code still easily outperforms my clojure code. The only optimizations I do are the same as anywhere else: profile and look at functions taking up too much time.

>and know the bytecode they want generated.

Bytecode is not involved. Machine code is, but I don't even know ASM to know what I want generated or if it is being generated that way.

>When you need them, there is no real substitute for unboxed mutable arrays, something Haskell does NOT make easy.

This is simply nonsense. Unboxed mutable vectors are trivial in haskell: https://hackage.haskell.org/package/vector-0.10.11.0/docs/Da... No, there is no substitute for using the right data types. Why do you think haskell or haskellers suggest using the wrong data types?


My goal is "as fast as C (TM)". Clojure is not known for being a speed demon.

I didn't say you couldn't do arrays with Haskell, I said Haskell doesn't make it easy. Here are the actual array docs, BTW: http://www.haskell.org/haskellwiki/Arrays


Out of curiosity, what's difficult about that mutable array implementation?

I'm a relative Haskell novice, but was able to write some mutable array code with only a cursory read through the documentation.

Granted it's extremely verbose compared to most imperative languages.


>My goal is "as fast as C (TM)".

Enjoy using C then. You suggested that haskell was bad because it was not fast enough. If "not as fast as C" is not fast enough, then virtually every language is not just bad, but much worse than haskell.

>I said Haskell doesn't make it easy

And I showed you that it is in fact trivially easy.

>Here are the actual array docs, BTW

That is a random, user-edited wiki page. I linked to the actual docs.


If "not as fast as C" is not fast enough, then virtually every language is not just bad, but much worse than haskell.

I agree. The only languages I've used that are remotely competitive for my purposes are static JVM languages (Java and Scala), Ocaml, and Julia for array ops. Haskell comes closer than many others, but just isn't there yet.

The docs you linked to are a 3rd party package marked "experimental". I'll also suggest that you are glossing over most of the difficulties in using them. It's trivially easy to call `unsafeRead`. It's not so easy to wrap your operations in the appropriate monad, apply all the necessary strictness annotations to avoid thunks, and properly weave this monad with all the others you've got floating around.

(That last bit is fairly important if you plan to write methods like `objectiveGradient dataPoint workArray`.)


>(Java and Scala), Ocaml,

Except scala and ocaml are both slower than haskell.

>The docs you linked to are a 3rd party package marked "experimental".

No it is not. What is the point of just outright lying?

>I'll also suggest that you are glossing over most of the difficulties in using them

I'll suggest that if you want people to believe your claim, then you should back it up. Show me the difficulty. Because my week 1 students have no trouble with it at all.

>It's not so easy to wrap your operations in the appropriate monad

You are literally saying "it is not easy to write code". That is like saying "printf" is hard in C because you have to write code. It makes absolutely no sense. Have you actually ever tried learning haskell much less using it?

>apply all the necessary strictness annotations to avoid thunks

All one of them? Which goes in the exact same place it always does? And which is not necessary at all?

>and properly weave this monad with all the others you've got floating around.

Ah, trolled me softly. Well done.


I don't know why you are responding so angrily. The page you linked to explicitly says "Stability experimental" in the top right corner.

I also don't know why you are behaving as if I dislike Haskell. I enjoy Haskell a lot, I just find getting very good performance to be difficult. You can browse my comment history to see a generally favorable opinion towards Haskell if you don't believe me.

I also gave you a concrete example of a reasonable and necessary task I found difficult: specifically, numerical functions which need to mutate existing arrays rather than allocating new ones, e.g. gradient descent. Every time I've attempted to implement such things in Haskell, it takes me quite a bit of work to get the same performance that Scala/Java/Julia/C gives me out of the box (or Python after using Numba).


> "Stability experimental" in the top right corner.

This is a bit of a strange convention in the Haskell world. Libraries tend to be marked "experimental" even when they are completely stable and the correct choice for production use. Note that Data.Text[1] is also marked "experimental", and it is perfectly stable and the correct choice for Unicode in Haskell.

> 3rd party package

Data.Vector is 3rd party in the sense that it is not part of the GHC base package, but so what? It is now considered the correct library for using arrays in Haskell.

[1] http://hackage.haskell.org/package/text-1.1.1.3/docs/Data-Te...


I stand corrected. My mistake.


An easy mistake to make. The docs could be clearer.


I would like to help you with any Haskell performance problems you had, contact me via the email in my profile if interested.


>I don't know why you are responding so angrily

I'm not. Given that you can't tell someone's emotional state via text, it doesn't make much sense to assume an emotional state for someone else simply because it will make you feel better.

>The page you linked to explicitly says "Stability experimental"

So does every library. It is the default state of a seldom used feature that still hasn't been removed.

>I also don't know why you are behaving as if I dislike Haskell

I am responding to what you say. You said using a mutable unboxed array is hard. That is not a simple misunderstanding, that is either a complete lack of having ever tried to learn haskell, or a deliberate lie. There's literally no other options. I teach people haskell. They do not use lists for anything other than control. They have absolutely no problem using arrays.

>I also gave you a concrete example of a reasonable and necessary task I found difficult

But you didn't say what made it difficult. So a reader is left to assume you are trolling since that task is trivial.


Steady on. I think all relevant points have already been made in this thread, and there's not much more to add.

The Haskell community has historically had a reputation as a welcoming and friendly community. Let's work on preserving that.


Actually, Haskell does let you write print statements for debugging.

If we have the following function:

    foo :: Int -> Int
    foo x = x `div` 0
and we want to add debugging, we can do:

    import Debug.Trace

    foo :: Int -> Int
    foo x
      | trace (show x) False = undefined
      | otherwise = x `div` 0
The above will print the value of x before throwing an error due to division by zero. You don't have to make foo return an IO Int or change any other aspect of your program.


Something I like to do is:

    import Debug.Trace
    wtf v x = trace (show x) v

    someFunction x = anotherFunction x `wtf` x
This allows me to tack `wtf` onto the end of otherwise unchanged expressions to do print debugging


> that is altar upon which you sacrifice the abilitity

I see statements like this all the time from people who fundamentally misunderstand Haskell, and I used to have the same misunderstandings myself. You really don't sacrifice anything by using it.

> the abilitity to write print statements to do debugging

I can slap a `trace` statement wherever the fuck I want inside my Haskell code for debugging. Even inside a pure function, no IO monad required. If I want to add a logger to my code, a 'Writer' monad is almost completely transparent, or I can cheat and use unsafePerformIO.

> and lose the ability to reason about order of execution.

If I'm writing pure code, then order of execution is irrelevant. It simply does not matter. If I'm writing impure code, then I encode order of execution by writing imperative (looking) code using do-notation, and it looks and works just like it would in any imperative language.

> But does it really result in more performant code

Haskell has really surprised me with its performance. I've only really been using it for a short time, having been on the Java bandwagon for a long time.

One example I had recently involved loading some data from disk, doing some transforms, and spitting out a summary. For shits and giggles, we wrote a few different implementations to compare.

Haskell won, even beating the reference 'C' implementation that we thought would have been the benchmark with which to measure everything else, and the Java version we thought we'd be using in production.

Turns out that laziness, immutability, and referential transparency really helped this particular case.

- Laziness meant that a naively written algorithm was able to stream the data from disk and process it concurrently without blocking. Other implementations had separate buffer and process steps (Even if hidden behind BufferedInputStream) that blocked the CPU while loading the next batch of data

- Immutability meant that the Haskell version could extract sections of the buffer for processing just by returning a new ByteString pointer. Other versions needed to copy the entire section into a new buffer, wasting CPU cycles, memory bandwidth, and cache locality.

- Referential transparency meant that we could trivially run this over multiple cores without additional work.

Naturally, a hand-crafted C version would almost certainly be faster than this - but it would have required a lot more effort and a more complex algorithm to do the same thing. (Explicit multi-threading, a non-standard string library, and a lot of juggling to keep the CPU fed with just the right amount of buffer).

On a per-effort basis, Haskell (From my minimal experience) seems to be one of the more performant languages I've ever used. (That is to say, for a given amount of time and effort, Haskell seems to punch well above its weight. At least for the few things I've used it for so far).

I'm still of the impression that well written C (or Java) will thoroughly trounce Haskell overall, but GHC will really surprise you sometimes.

I haven't used OCaml much - but my understanding is that the GIL makes it quite difficult to write performant multi-threaded code, something that Haskell makes almost effortless.


> Haskell won, even beating the reference 'C' implementation

This has always interested me. I have never gotten an answer, and I suppose I can't seriously expect one now, but I am still compelled to ask:

Why did you put C in quotes up there? Why isn't Haskell in quotes? You didn't put C in quotes in other parts, but that isn't what I'm talking about.


No specific reason really. I didn't think about it at the time, that's just how I typed it.

Probably because C is a single letter, and thus potentially needs some differentiation from the surrounding sentence, whereas Haskell is an actual word. But no idea really.


Thanks for your answer.


because 'C' is a char, while Haskell is a string


But then there should be double quotes.


> You really don't sacrifice anything by using it

What's "it" - Haskell, or referential transparency? Referential transparency definitely has its victims, and debugging is one of them. Debug.Trace is quite useful, and also violates referential transparency. That Haskell provides it is an admission that strict R.T. is unworkable.

> If I'm writing pure code, then order of execution is irrelevant. It simply does not matter. If I'm writing impure code, then I encode order of execution by writing imperative (looking) code using do-notation, and it looks and works just like it would in any imperative language.

Baloney! Haskell's laziness makes the order of execution highly counter-intuitive. Consider:

    import Data.Time.Clock

    main = do
        start <- getCurrentTime
        fact <- return $ product [1..50000]
        end <- getCurrentTime
        putStrLn $ "Computed product " ++ show fact ++
                   " in " ++ show (diffUTCTime end start)
This program appears to time a computation of 50000 factorial, but in fact it will always output some absurdly short time. This is because the true order of execution diverges greatly from what the program specifies in the do-notation. This has nothing to do with purity; it's a consequence of laziness.

> Turns out that laziness, immutability, and referential transparency really helped this particular case

I don't buy it. In particular, laziness is almost always a performance loss, which is why a big part of optimizing Haskell programs is defeating laziness by inserting strictness annotations.

> Laziness meant that a naively written algorithm was able to stream the data from disk and process it concurrently without blocking

This would seem to imply that Haskell will "read ahead" from a file. Haskell does not do that.

> Immutability meant that the Haskell version could extract sections of the buffer for processing just by returning a new ByteString pointer. Other versions needed to copy the entire section into a new buffer

Haskell returns a new pointer to a buffer, while other versions need to copy into a new buffer? This is nonsense.

Like laziness, immutability is almost always a performance loss. This is why GHC attempts to extract mutable values from immutable expressions, e.g. transforming a recursive algorithm into an iterative algorithm that modifies an accumulator. This is also why tail-recursive functions are faster than non-tail-recursive functions!
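As a standard illustration of that accumulator style (my own example, not from the comment above):

    {-# LANGUAGE BangPatterns #-}

    -- naive recursion leaves a growing pile of pending additions
    sumNaive :: [Int] -> Int
    sumNaive []     = 0
    sumNaive (x:xs) = x + sumNaive xs

    -- tail recursion with a strict accumulator compiles to a tight loop
    sumTail :: [Int] -> Int
    sumTail = go 0
      where
        go !acc []     = acc
        go !acc (x:xs) = go (acc + x) xs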

> Referential transparency meant that we could trivially run this over multiple cores without additional work

It is not especially difficult to write a referentially transparent function in C. Haskell gives you more confidence that you have done it right, but that measures correctness, not performance.

Standard C knows nothing of threads, while Haskell has some nice tools to take advantage of multiple threads. So this is definitely a point for Haskell, compared to standard C. But introduce any modern threading support (like GCD, Intel's TBB, etc.), and then the comparison would have been more even.

When it comes to parallelization, it's all about tuning. Haskell gets you part of the way there, but you need more control to achieve the maximum performance that your hardware is capable of. In that sense, Haskell is something like Matlab: a powerful prototyping tool, but you'll run into its limits.


"This is because the true order of execution diverges greatly from what the program specifies in the do-notation. This has nothing to do with purity; it's a consequence of laziness."

Of course, that's not what the do notation specifies, but I agree that's somewhat subtle. As you say, it's a consequence of laziness. Replacing "return" with "evaluate" fixes this particular example.

In general, if you care about when some particular thing is evaluated - and for non-IO you usually don't - an IO action that you're sequencing needs to depend upon it. That can either be because looking at the thing determines which IO action is used, or it can be added artificially by means of seq (or conceivably deepSeq, if you don't just need WHNF).
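Concretely (evaluate, from Control.Exception, forces the result to weak head normal form, which for an Integer is the whole number, before the second getCurrentTime runs):

    import Control.Exception (evaluate)
    import Data.Time.Clock

    main :: IO ()
    main = do
        start <- getCurrentTime
        fact <- evaluate (product [1..50000])   -- forced here, not inside putStrLn
        end <- getCurrentTime
        putStrLn $ "Computed product " ++ show fact ++
                   " in " ++ show (diffUTCTime end start)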


>That Haskell provides it is an admission that strict R.T. is unworkable.

Perhaps it is, but that doesn't mean it's not immensely valuable as a default. And it's worth noting that in the case of Debug.Trace, the actual program is still referentially transparent, it's just the debugging tools that break the rules, as they often do.

>Haskell's laziness makes the order of execution highly counter-intuitive.

Yes, there are some use cases where do-notation doesn't capture all the side effects (i.e. time/memory) and so a completely naive imperative perspective breaks down. But these cases are rare, and it's not that hard to learn to deal with them.


First up - I'll preface my reply below with a big disclaimer that I'm a relative novice with Haskell, so these are purely my opinions at this point in my learning curve.

> What's "it" - Haskell, or referential transparency? Referential transparency definitely has its victims, and debugging is one of them. Debug.Trace is quite useful, and also violates referential transparency. That Haskell provides it is an admission that strict R.T. is unworkable.

I'd disagree that this is any real attack on the merits of referential transparency, since Debug.Trace is not part of application code. It violates referential transparency in the same way an external debugger would. It's an out of band debugging tool that doesn't make it into production.

> Baloney! Haskell's laziness makes the order of execution highly counter-intuitive. Consider

I wouldn't say it makes order of execution highly counter-intuitive, and your above example is pretty intuitive to me. But expanding your point, time and space complexity can be very difficult to reason about - so I'll concede that's really a broader version of your point.

> Haskell returns a new pointer to a buffer, while other versions need to copy into a new buffer? This is nonsense.

C uses null-terminated strings, so in order to extract a substring it must be copied. It also has mutable strings, so standard library functions would need to copy even if the string were properly bounded.

Java uses bounded strings, but still doesn't share characters. If you extract a substring, you're getting another copy in memory.

Haskell, using the default ByteString implementation, can do a 'substring' in O(1) time. This alone was probably a large part of the reason Haskell came out ahead - it wasn't computing faster, it was doing less.
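For reference, a tiny sketch of that sharing (take and drop on a strict ByteString just adjust an offset and length over the same underlying buffer):

    import qualified Data.ByteString as BS

    -- O(1): the result is a view into the same bytes, no copying
    slice :: Int -> Int -> BS.ByteString -> BS.ByteString
    slice off len = BS.take len . BS.drop off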

Obviously in Java and C you could write logic around byte arrays directly, but this point was for a naive implementation, not a tuned version.

> This would seem to imply that Haskell will "read ahead" from a file. Haskell does not do that

It would seem counter-intuitive that the standard library would read one byte at a time. I would put money on the standard file operations buffering more data than needed - and if they didn't, the OS absolutely would.

> Like laziness, immutability is almost always a performance loss.

On immutability -

In a write-heavy algorithm, absolutely. Even Haskell provides mutable data structures for this very reason.

But in a read-heavy algorithm (Such as my example above) immutability allows us to make assumptions about the data - such as the fact that it'll never change. This means that the standard platform library can, for example, implement substring in O(1) time complexity instead of having to make a defensive copy of the relevant data (Lest something else modify it).

On Laziness -

I'm still relatively fresh to getting my head around laziness, so take this with a grain of salt. But my understanding, from what I've been told and from some personal experience:

In completely CPU-bound code, laziness is likely going to be a slowdown. But laziness can also make it easier to write code in ways that would be difficult in strict languages, which can lead to faster algorithms with the same effort. In this particular example, it was much easier to write this code using streaming non-blocking IO than it would be in C.

> It is not especially difficult to write a referentially transparent function in C. Haskell gives you more confidence that you have done it right, but that measures correctness, not performance.

Except that GHC can do some clever optimizations with referential transparency that a C compiler (probably) wouldn't - such as running naively written code over multiple cores.

> When it comes to parallelization, it's all about tuning. Haskell gets you part of the way there, but you need more control to achieve the maximum performance that your hardware is capable of. In that sense, Haskell is something like Matlab: a powerful prototyping tool, but you'll run into its limits.

I completely agree. If you need bare to the metal performance, then carefully crafted C is likely to still be the king of the hill for a very long time. Haskell won't even come close.

But in day to day code, we tend not to micro-optimize everything. We tend to just write the most straightforward code and leave it at that. Haskell, from my experience so far, tends to provide surprisingly performant code under these conditions for the kinds of workloads I'm giving it (IO-bound CRUD apps, mostly). I'm under no illusion that it would even come close to C if it came down to finely tuning something, however.


A really great rebuttal of his points. I like Haskell, I really do - but I can never get any useful work done out of it. (Note: I am a hobbyist and not a professional programmer)


It's not a great rebuttal, it's just showing why people with imperative mindsets don't really understand Haskell still. The rebuttal rebuttal is good.

Do notation is not specifically a line-by-line imperative thing, and complaining that it isn't that doesn't make it bad. Obviously, the goal in Haskell isn't precisely to do imperative coding. It remains true that you can hack imperative code into Haskell in various ways effectively.


>sacrifice the abilitity to write print statements to do debugging

No.

>lose the ability to reason about order of execution

No.

>But does it really result in more performant code?

That is not the goal. The goal is being able to reason about the code, and write code that is correct. The fact that it performs very well is due to a high quality compiler, not purity.

> Every benchmark I've ever seen, more practical languages like Ocaml have come out on top.

Doesn't look that way from here: http://benchmarksgame.alioth.debian.org/u32/ocaml.php How exactly is a language that is unable to handle parallelism "more practical" than one that handles it better than virtually any other language?


OCaml is more practical than ML and Haskell because it has objects, for loops, more edge cases in the language, built in mutable keyword, and extensible records.


No it is not. Ocaml's objects make it less practical, not more. That is why they are virtually completely unused. At best, for loops are irrelevant. I'd say they are closer to a negative than irrelevant though. What do you mean "more edge cases?" That the language is less safe? How is that practical? Haskell has mutable references too, with the added benefit of them being type safe. And haskell has extensible records, they are just a library like anything else: http://hackage.haskell.org/package/vinyl


> That the language is less safe?

Not necessarily.

> And haskell has extensible records, they are just a library like anything else:

And OCaml has monads, they are just a library like anything else.


Monads are arguably a library in Haskell, too... though one the standard guarantees is present, exposed by the Prelude, and relied on by a lot of code.


>Not necessarily.

Then what? You made the vague statement, make it not vague.

>And OCaml has monads, they are just a library like else.

And? I did not claim ocaml lacks monads. You claimed haskell lacks extensible records. You do understand that my post was a direct reply to what you said right? Not just some random things I felt like saying for no particular reason.


I don't know what that word means, to me it has always seemed a bit silly. The whole point of software is its side effects, you will always have some. So instead of admitting that, you do a little dance to pretend side effects don't really happen in your program. It's very strange to me.


On the contrary, the whole point of software is its effects. They are only side effects if you're not able to encode the effects in the type of the function. Haskell doesn't treat effects as less important, it treats them as more important by allowing (and, as a consequence, forcing) you to reason about them explicitly in compiler-checked ways. This lets you do great things like have STM transactions with a guarantee that retrying them won't screw anything up. It also lets you define your own kinds of effects that you want to track and plumb them through your system in a coherent way. And it lets you write code that doesn't care what kind of effects it's working with, and yet which can still be used in contexts like STM that need to apply restrictions.
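For a small, concrete example of the STM point (standard Control.Concurrent.STM API; the account names are made up):

    import Control.Concurrent.STM

    -- The runtime can retry this transaction at any point, because an STM
    -- action cannot perform irreversible effects like file or network IO.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)        -- blocks/retries until funds are available
      modifyTVar' from (subtract amount)
      modifyTVar' to   (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      print =<< atomically ((,) <$> readTVar a <*> readTVar b)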


What I'm not liking about this line of reasoning is that in practice Haskell seems to not go far enough[1].

Yes, you have to declare your effects. In practice that means that most of your code returns IO, and isn't constrained anymore. I don't know if this is a library feature, or an essential feature of the language[2], but it would be very interesting for example to put a GUI together by computing events in functions that returned an "Event" monad, widgets in functions that returned a "GUI" monad, database access in functions in a "DB" monad, etc. Instead, all of those operate on IO.

[1] A completely subjective assessment. [2] I've thought for a short while about how to code that, but didn't get any idea I liked.


It's not clear that that's the distinction that makes the most sense. I think typing based on capability makes more sense than typing based on purpose. So you would have a type like FSRead and FSWrite which could only read and write from the filesystem, for example. (Ideally with a nice way to combine the two!)

Of course, you can do this as a library. In fact, this is an example use case[1] for Safe Haskell which also prevents people from circumventing your types with unsafePerformIO and friends.
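A minimal sketch of the library approach (the FSRead name and API below are hypothetical):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}
    module FSRead (FSRead, readFileFS, runFSRead) where

    -- a capability that can only read the filesystem, not write or do other IO
    newtype FSRead a = FSRead (IO a)
      deriving (Functor, Applicative, Monad)

    -- the only primitives exported wrap read-only operations
    readFileFS :: FilePath -> FSRead String
    readFileFS = FSRead . readFile

    -- trusted code (e.g. main) discharges the capability back into IO
    runFSRead :: FSRead a -> IO a
    runFSRead (FSRead io) = io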

Moreover, some existing libraries already take similar approaches. FRP libraries extract reactive systems (like events but also continuously changing signals) into their own types. A button gives you a stream of type Event () rather than an explicit callback system using the IO type. Check out reactive-banana[2] (my favorite FRP library from the current crop) for a nice example.

Similarly, people use custom monads to ensure things get initialized correctly, which has a similar effect to what you're talking about. The Haskell DevIL bindings for Repa[3] come to mind because they have an IL type which lets you load images and ensures the image loader is initialized correctly exactly once.

Sure, in the end, everything will need to be threaded through IO and main to actually run, but you can—and people do—make your intermediate APIs safer by creating additional distinctions between effects.

[1]: http://www.haskell.org/ghc/docs/7.8.3/html/users_guide/safe-...

[2]: http://hackage.haskell.org/package/reactive-banana

[3]: http://hackage.haskell.org/package/repa-devil


That's great to know. Looks like I have a ton of stuff to read now.


I think there's quite a bit more of this going on than you seem to be aware of. In addition to the things mentioned in a few siblings to this comment, there is STM for software transactional memory, several different DB monads provided by persistent, Query and Update in acid-state, Handler and Widget in Yesod, Sh in shelly... In my experience, very little of my code runs in an unadorned IO monad. Some of these have holes punched in them with liftIO to let me run arbitrary IO - whether that's appropriate depends on the particular context.


I'm currently working on a UI library similar to what you describe. When you've got a situation where "everything just ends up in IO" you probably just have an early design iteration on that space of possible libraries.


Generally the IO monad means "FFI". So in that sense it is very distinctive. The reason they are all lumped into IO is because they all involve FFI.

The thing is, IO is generic, so IO Db and IO Gui are different things.


Yes, one thing I would like to see in Haskell is more monads based on IO, but specialized to deal with specific kinds of side effects.

An example of this (not a very good one, I'm afraid) is the X monad.


The IO Sin Bin seems to be about the same thing that you're concerned about.

http://www.haskell.org/pipermail/haskell-cafe/2008-September...


I wrote about this in another comment a bit, but I'll try here to be a bit more florid.

The best way to understand IO is to think about working with pure functions in an impure language. Let's say I've given you a promised-pure function which emits commands (re: the "Command pattern" if that's the way you want to see it) and you operate them using side effects. This is a massive inversion of control issue of course, but you can see how it might work.

Further, you might understand that your job is easier due to the purity of the command-emitting function. You explicitly give it all of the inputs you desire and operate it as needed. For instance, you can perhaps run it forwards and backwards as desired. Or weave it in with another "thread" in parallel, knowing that only you must handle races and shared memory—the threads are pure.

Finally, you might understand that the risk of bad programming is borne on your shoulders primarily—side effects are complex and you're the only one handling them.

In Haskell, "you" are the RTS and the pure threads are Haskell programs. The IO monad is nothing more than what it feels like to be "inside" a useful kind of command pattern. Finally, we compartmentalize all side-effects into the RTS so that we only have to get them right once.


The same way you pretend to have infinite memory by using a garbage collector.


So, speaking as someone who loves functional programming but finds a lot of the flap surrounding the recent popularity of Haskell to be tiresome (just the flap, the language is great), I see it this way:

Purity makes it easier to reason about the semantics of your code. This isn't about parallelism, it's about concurrency, including single-threaded concurrency. Case in point, I recently spent quite a while scratching my head over a bug that happened because someone else had written some code to mutate a piece of shared state when I wasn't expecting it.

But the pure functional programming model is a very high level of abstraction (deep down, every interesting thing in computing is a state machine), and it has a tendency to leak like mad. One such case where it does so is I/O. In fact, you can't even do I/O in a 100% pure language - and that's what the I/O monad is really about; it's punching a hole in the language in order to let the big bad ephemeral outside world in. But in a controlled manner, so that the language's fundamental ethos of purity can be maintained, which in turn makes its laziness manageable. In short, the deeper downer reason why Haskell loves its I/O monad so much is because without it the language would be fairly useless. Anyone who tells you the I/O monad's really about making I/O concurrency headaches less of a hassle has been doing more blog-reading than programming.

So why preserve the illusion? Well, ghettoizing all things stateful lets you take advantage of pure semantics everywhere else in your code, which theoretically makes it easier to reason about and maintain.

As for monads themselves, IMHO they're kind of overblown. It's just another design pattern, akin to Visitor or Strategy or Decorator, only functional instead of object-oriented. Super-useful, applicable in all sorts of circumstances, and easily worth knowing. And, just like object-oriented design patterns, easy to make sound way more complicated than it is if you try to explain the idea to someone else without having fully digested it yourself first.


It is strange to you because it is strange period. It is also a strawman that people like to present for dismissing haskell without learning about it. Haskell does not do anything of the sort. This is why haskellers find the concept every bit as strange as you do. Haskell makes side effects first class. You can pass around side effects and manipulate them just as you would any other value.


Are you sure that's quite what Haskell's rhetoric is? It sounds to me more like the bad rhetoric of people who do a poor but enthusiastic job of trying to explain Haskell.


Physics is pure, god is the runtime.



