
I am a big believer in FP (reasoning about functional code is much sounder even in the sequential case) but not in parallel programming. Most algorithms just do not parallelize that well, regardless of the programming paradigm used.

Quoting Donald Knuth: '[...] So why should I be so happy about the future that hardware vendors promise? They think a magic bullet will come along to make multicores speed up my kind of work; I think it’s a pipe dream. (No—that’s the wrong metaphor! "Pipelines" actually work for me, but threads don’t. Maybe the word I want is "bubble.")'

'I won’t be surprised at all if the whole multithreading idea turns out to be a flop, worse than the "Itanium" approach that was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write.'




If you look at classes of problems, sure, only a few of them are easily parallelizable.

But if you look at what many programmers are actually doing on a day-to-day basis, it turns out that a huge chunk of it is really basic, straightforward SIMD. Hence the popularity of for-each loops in modern programming languages. That kind of stuff is often quite trivially parallelizable. Microsoft demonstrated that nicely with the introduction of, e.g., Parallel.ForEach() in .NET 4.

There's also something to be said for tasks which are naturally... let's say concurrent. ETL comes to mind: if you do ETL using purely sequential code, you're going to end up with software that's immensely slow compared to code with enough basic time-management skills to realize it could probably be doing something more productive than twiddling its thumbs while waiting on the disk controller.


That's true, but for those tasks, it's not clear you need to abandon imperative languages at all. In cases where you're just doing the same operation on all elements in an array, with no data dependency, you can do that in C without fiddling with pthreads by using OpenMP's "parallel for" construct.

FP promises to make it less error-prone and practical to do more complex parallelization than these trivial kinds of parallelization, but the skeptical take (which I partially share) wonders whether that is a big win. In other words, are the trivial parallelizations just the 10% tip of the iceberg, and there are 90% wins left from more complex parallel programming (in which case perhaps FP is what lets us unlock that future), or is trivially parallelizable stuff more like 90% of the parallelization win?


With the question posed that way, I think I fall on yet another skeptics' side, regardless of whether it's the majority of the win or not. Suppose, for the sake of argument, that the non-trivial parallelization stuff is complicated enough that it's really not feasible for folks in general to be tackling it outside the safety of what's effectively being billed as a linguistic walled garden. If that's the case, then the previous sentence ended about 13 words too late. Difficult, error-prone tasks like that should really be handled by a library or runtime.



