I have a few friends who program largely in C++ or in interpreted languages like Python, Ruby, and Perl. They've scoffed at my love of Haskell, claiming it's too academic and overly complex and that they'd rather just get things done.

Just last week I asked them what piece of software was missing from their lives in an attempt to get ideas for a new project to start. One of the first things on all their lists was software that would automatically parallelize an algorithm for them. Inwardly I wanted to put palm to face because the easy parallelization in Haskell is one of the killer features I'd been trying to convince them of.

I think the major barrier to adoption is the lack of real-world results showing people that it's viable. All the benchmarks in the world won't convince them to change their ways, but if they see other people using it to good effect, they'll seriously consider it.

I wasn't aware Haskell had good multi-threaded support.

How is Haskell better at auto-parallelizing an algorithm compared to, say, Java's fork/join or .NET's equivalent?


Haskell's tools mostly have the same sort of power as fork/join (scheduling work to be done on separate threads; creating the right number of threads based on the hardware; providing an easy-to-use set of functions that call for work to be done in parallel).

The major difference is that the tools compose better with the language. Instead of having to keep "I'm working in a multicore environment" in your head all the time while writing your client code (as you would in Java, worrying about concurrent mutation and so on), Haskell enforces immutability throughout the language. Often you get the extremely convenient situation where you observe that something could be parallelized, change map to parMap, and you're now using all your cores.
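A minimal sketch of what that switch looks like, using parMap and rdeepseq from the parallel package's Control.Parallel.Strategies (fib is just a stand-in for any expensive pure function; build with ghc -threaded and run with +RTS -N so the runtime uses all cores):

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Stand-in workload: any expensive pure function will do.
    fib :: Int -> Integer
    fib n
      | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

    main :: IO ()
    main = do
      -- Sequential version would be: print (map fib [28 .. 35])
      -- The parallel version sparks one unit of work per list
      -- element, each evaluated to normal form:
      print (parMap rdeepseq fib [28 .. 35])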


Yes. But immutability is not the whole job. You also have to use the right data structures. Linear structures like cons lists don't parallelize well; you need something that splits into roughly equal parts instead of car and cdr.

Compare Guy Steele's talk "Get rid of Cons" at the International Conference on Functional Programming 2009. For the slides go to http://news.ycombinator.com/item?id=814632; the video should also be available online somewhere.
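To make the contrast concrete, here's a sketch (assuming Data.Vector from the vector package, chosen for its O(1) splitAt): folding a cons list is inherently sequential, while a structure that splits into equal halves yields a balanced divide-and-conquer tree whose branches can run in parallel:

    import Control.Parallel (par, pseq)
    import qualified Data.Vector as V

    -- foldr over a cons list: each step depends on the rest of the
    -- list, so there is no independent work to hand to another core.
    sumList :: [Int] -> Int
    sumList = foldr (+) 0

    -- A vector splits in O(1) into roughly equal halves, so the two
    -- recursive sums are independent and can be evaluated in parallel.
    sumVec :: V.Vector Int -> Int
    sumVec v
      | V.length v < 1000 = V.sum v   -- below a threshold, stay sequential
      | otherwise         = l `par` (r `pseq` (l + r))
      where
        (lv, rv) = V.splitAt (V.length v `div` 2) v
        l = sumVec lv
        r = sumVec rv

    main :: IO ()
    main = print (sumVec (V.enumFromTo 1 1000000))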


A large part of Haskell's strength in this regard comes from its being a pure functional language, so neither the programmer nor the parallelism library has to worry much about side effects. This both makes it easier to parallelize more code and, in some cases, makes the parallelism more efficient (since Haskell isn't operating in an environment designed around mutability).


It's mainly the language that makes the difference. With such a strong type system, the compiler can know for sure that something is safe to parallelize.


You can get by with a much weaker (and/or dynamic) type system and still get a language that's easier to parallelize, as long as you keep all mutation explicit.

Haskell does keep track of mutation in your program, because you have to wrap it in a monad such as IO or ST. Since monads are part of the type system, you are right, but I felt that needed more explanation.
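As a small illustration (the function names here are made up, but IORef is the standard mutable cell from Data.IORef): the mutation shows up in the type, so the compiler, and any parallelism machinery, can see exactly which code is safe to reorder:

    import Data.IORef (IORef, modifyIORef', readIORef)

    -- Pure: the type Int -> Int promises no side effects, so calls
    -- can be evaluated in any order, or in parallel.
    double :: Int -> Int
    double x = 2 * x

    -- Mutating: the IO in the type records that this touches shared
    -- mutable state, so it cannot be freely reordered or sparked.
    bump :: IORef Int -> IO Int
    bump ref = do
      modifyIORef' ref (+ 1)
      readIORef ref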


> [...] interpreted languages like Python, Ruby, and Perl [...]

Just to nitpick: Haskell can be interpreted as well; see Hugs. And Python commonly gets compiled to bytecode. (I have no idea how the Ruby and Perl implementations work.)
