This is exactly the sort of thing languages that wish to become popular need. User-friendliness applies to programming environments too; batteries included should be the default, since they can always be removed later.
I'm sure someone will call that approach "bloat by default", but I think that's a red herring. Good programmers will remove bloat where it matters, and bad programmers will add bloat where it's harmful.
I absolutely agree, but the Haskell guys haven't quite finished the job. I envision mathematicians a-plenty installing Ubuntu because it's "programmy" and then giving up on installing the Haskell Platform because they don't know what compiling is or why they need all those "X-dev" packages.
I for one know a few math guys who absolutely love Haskell when I show it to them (more so when I talk about category theory and types :-), but who would just get distracted if I asked them to run console commands in order to install it. That's a terrible shame, because they totally get it on my system once everything's installed and set up.
I realize that microbenchmarks like those posted at http://shootout.alioth.debian.org don't really tell us much about a language's performance in or suitability for larger applications, but I must admit I'm surprised that the Haskell programs there aren't significantly shorter than the Java versions. After all, Java is roundly criticized for its verbosity and these kinds of problems seem to be particularly good candidates for short, expressive functional solutions.
Are the Haskell versions written in a non-idiomatic style for performance reasons?
Many of those programs were written several years ago in non-idiomatic Haskell for performance reasons. They should be updated now that GHC is smarter.
I tried a few times to build the last version of the Haskell Platform and gave up, but this one seems to have built & installed cleanly on the first try. Looks like a good start.
Yes. Those programs serve as a warning against what you might have to contort your short, sweet, clear programs into in order to get good performance without dropping down to C.
That is assuming you are comparing the speed against C. I rewrote one into short, sweet style and found it was much faster than python and ruby. The only frustration was being unable to get the code shorter than the ruby version.
Note that the benchmarks are still for 6.10.4, they might get a bit faster when run with 6.12.
I'm curious: are you speaking from some particular experience of problems with the speed?
I have experienced cases where simply recompiling with a new GHC speeds up my program 5x. I've also experienced cases where I had to take perfectly readable code and turn it into something ugly--but not to the extent that it is done in the Haskell code on the language shootout.
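(A hypothetical sketch of the kind of rewrite I mean, not code from the shootout: meanNaive is the obvious readable version, meanStrict is the hand-tuned strict loop you end up with when chasing benchmark numbers.)

    {-# LANGUAGE BangPatterns #-}

    -- Readable version: says what it means, but walks the list twice
    -- and leans on the compiler to keep the intermediate work cheap.
    meanNaive :: [Double] -> Double
    meanNaive xs = sum xs / fromIntegral (length xs)

    -- "Ugly" version: a single strict pass with explicit accumulators.
    meanStrict :: [Double] -> Double
    meanStrict = go 0 0
      where
        go :: Double -> Int -> [Double] -> Double
        go !s !n []       = s / fromIntegral n
        go !s !n (x : xs) = go (s + x) (n + 1) xs

    main :: IO ()
    main = do
      let xs = [1 .. 1000000] :: [Double]
      print (meanNaive xs)
      print (meanStrict xs)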
I would prefer to have the language shootout revert back to idiomatic versions of the programs, and then look at how we can optimize those idiomatic programs automatically in the same way that good SQL implementations optimize SQL queries.
But it seems that Haskell, because of static type checking and compilation should compete against C, C++ and Java in terms of speed. Because if it is competing against Python, I'll just always use Python when I need to get things done.
It does compete with Java, as both are garbage collected. C, C++ and Fortran are in their own category because they don't have to deal with the indirection of boxed heap objects.
The mythical "sufficiently smart complier" can remove these indirections, but no current compiler can do it perfectly, as doing so requires generative the copious amounts of object code that C++ is so roundly criticized for (templates). GHC is pretty good at removing these indirections. I understand C# compilers are as well. I've not used Java much but isn't there some issue where you can't store primitive integers in a collection because they don't inherit from the base Object type and/or the JVM doesn't support unboxed objects?* This is the same kind of annoyance you have to deal with in Haskell to get fast-as-possible performance (i.e. using types which are not polymorphism-friendly thus decreasing code reuse and increasing verbosity) so in that way I think they are equal with regards to the performance/brevity trade-off.
* I'm sure this has probably been fixed in some recent version of Java but it's still illustrative of why garbage collection imposes a real performance impact that C and C++ just don't have to deal with.
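(A hypothetical sketch of the Haskell side of that trade-off: a polymorphic Pair keeps pointers to boxed values on the heap, while a specialised IntPair with strict, unpacked fields avoids the indirection but is no longer reusable for other element types.)

    -- Polymorphic and reusable: with GHC, each field is a pointer
    -- to a boxed value on the heap.
    data Pair a b = Pair a b

    -- Specialised and strict: the UNPACK pragma (honoured with -O)
    -- stores the two Ints directly in the constructor, with no
    -- indirection, but the type only works for Int.
    data IntPair = IntPair {-# UNPACK #-} !Int {-# UNPACK #-} !Int

    main :: IO ()
    main = case IntPair 3 4 of
      IntPair x y -> print (x + y)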
Speed is definitely part of the appeal. Perhaps more important though is the degree to which static typing might reduce errors and make code more readable and maintainable. I haven't done enough coding in languages like Haskell yet to decide if static types do in fact help in either respect but I think those things might actually be more valuable to me than raw performance.
That depends on a couple of things. First, it assumes that being as fast as C is the goal; for my project the goal was faster startup than Perl and more safety. Also, I remember hearing that the requirements of the benchmarks had to be changed to prevent Haskell from lazily evaluating its way to first place. I know that in my case lazy evaluation is part of one of my optimizations.
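(A tiny hypothetical example of laziness doing less work: the divisor search below stops as soon as the consumer has the three elements it asked for, even though the comprehension nominally ranges over [2..n].)

    firstDivisors :: Integer -> [Integer]
    firstDivisors n = take 3 [d | d <- [2 .. n], n `mod` d == 0]

    main :: IO ()
    main = print (firstDivisors 3628800)  -- prints [2,3,4] almost instantly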
No kidding. I'm at 5.1 kb/s. This is exactly the [legal] sort of thing that torrents were designed for. Once my download is complete (~8 hours from now), I'll make a torrent of it.
If I understand correctly -- note that this is the new MacPorts version, which is GHC 6.10.
Simultaneously, there is a new Mac installer for the Haskell Platform with GHC 6.12 -- it works with Snow Leopard, and is 32-bit.
I believe that everything is installed under /Library/Frameworks/GHC.framework and then symlinked. (At least, that's what I'm seeing on my system from the last version of the installer.)