Haskell is a significant departure from most other languages commonly used by industrial programmers. It's a relatively shallow learning curve from Java to Python to JavaScript, but making the leap to a pure functional language is very difficult. Path-dependence plays a huge part here. This isn't just a matter of "people being afraid of what's different" as the article suggests; there are very rational reasons for a profit-seeking firm to exploit the fact that their developers (and those available to hire) are already relatively proficient at writing procedural code.
Another, more important reason, is that Haskell is too intellectually demanding for most industrial programmers. I consider myself an enthusiast of functional programming, but achieving anything practical using purely functional code remains extremely difficult for me, even though I regularly dabble in it during my free time. The Haskell IRC channel can be helpful, but it's very difficult to square "Haskell is easy enough for anyone to learn" with the inevitable "you are too stupid/impatient/incompetent to use Haskell effectively" taunts you're likely to hear when asking for help with a simple task. Many Haskell evangelists don't understand that most developers aren't nearly as smart or dedicated as they are.
I'd be curious to know where most Haskell users believe they lie on the distribution of programming ability. I'd estimate most of them lie at the 99th percentile, and that any of them arguing otherwise are doing so out of modesty. (Note that I'd lump dedication and curiosity in with intelligence in this metric.) I believe the most likely explanation for this is that Haskell is a particularly difficult language to use effectively.
Excel is pretty much a functional language (though a somewhat hobbled one), and people do amazing things in it.
The problem isn't that functional languages are hard. They are a bit different, but good code is often fairly functional anyway. The problem is that most of the community seems to be obsessed with showing that functional languages are both better, and more difficult, than mere procedural languages.
Take monads. The only way to "get" monads is to realize that they are just sections of the program that are imperative. But most Haskell programmers introduce them in the most incredibly obscure double-talk, just to avoid admitting that Haskell needs the ability to do imperative things in order to be useful.
Perhaps the reason that Haskellers don't describe monads as "just sections of the program that are imperative" is because that statement is _not_ true.
A monad is a very nice container abstraction - period.
The IO part of Haskell just happens to leverage monads. One of the benefits of which is an explicit marking of impure methods in the type signature, but there are others.
Monads are used in plenty of purely functional parts of Haskell.
Disclosure: I've been programming Haskell for only a few months now, so someone pull me up if I'm wrong. However, I have used a lot of monads so far (and they're not hard, it's just a higher, more convenient level of abstraction).
edit: to address your post further - I honestly don't think that the community is the reason Haskell isn't taking off. If you were to gather stats on the points at which would-be Haskellers give up, I think most people would leave 1) when they can't grok the syntax easily, or 2) when they can't get tools or libraries going easily (it was a problem for me), long before they get to the point of visiting the newsgroups and running up against academic types.
The Maybe and List monads have nothing to do with state, and are very common in programs.
Let's use the List monad to determine someone's roommates.
> import Control.Applicative
> import Data.List
First, the data:
> type Person = String
> type Address = String
> addresses :: [(Person, Address)]
> addresses = [("jrockway", "123 Fake St."), ("jrockway's cat", "123 Fake St.")]
> people :: [(Address, Person)]
> people = uncurry (flip (,)) <$> addresses
And some helper functions around this data, a function to return all
addresses for a person, and a function to return all people that live
at a certain address:
> assocFilter :: Eq a => a -> [(a,b)] -> [b]
> assocFilter p xs = snd <$> filter ((==p) . fst) xs
> addressesForPerson :: Person -> [Address]
> addressesForPerson person = assocFilter person addresses
> peopleAtAddress :: Address -> [Person]
> peopleAtAddress address = assocFilter address people
To find a person's roommate, we have to chain two computations.
First, we have to find zero or more places where a person lives. Then
we need to find who else lives at each of those addresses. With the
List monad, this is not much code!
> roommatesFor :: Person -> [Person]
> roommatesFor person = do
> address <- addressesForPerson person
> peopleAtAddress address
The monadic combinator >>= (which is hidden by do) does all the looping for us, so we don't have to explicitly write it out.
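To see the combinator directly, here is the same example desugared into plain (non-literate) Haskell, with the data repeated and renamed (addressBook, livesAt, residentsOf, roommates' are my names) so the snippet stands alone. On lists, (>>=) is just concatMap:

```haskell
addressBook :: [(String, String)]  -- (person, address)
addressBook = [("jrockway", "123 Fake St."), ("jrockway's cat", "123 Fake St.")]

-- all addresses for a person
livesAt :: String -> [String]
livesAt person = [a | (p, a) <- addressBook, p == person]

-- all people at an address
residentsOf :: String -> [String]
residentsOf address = [p | (p, a) <- addressBook, a == address]

-- the do-block, desugared: on lists, (>>=) is concatMap,
-- so this loops over every address and collects every resident
roommates' :: String -> [String]
roommates' person = livesAt person >>= residentsOf
```

Note that, as in the original, a person counts as their own roommate; filtering out `person` from the result is left as an exercise.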
BTW, you can just cut-n-paste this into a .lhs file and run it, if you want to try it out.
I don't know how familiar you are with Haskell... but anyway:
Say you're working with the Maybe type, which is often used to represent operations that might fail, such as a map lookup, for example.
data Maybe a = Just a | Nothing
We're using two functions defined like so (feeling unimaginative at the moment, forgive me):
x :: String -> Maybe String
y :: String -> Maybe String
We want to use them together. Someone who's not familiar with monads might write something like this:
myFunction text = case x text of
(Just newText) -> y newText
Nothing -> Nothing
(If x works and returns a value, put that value into the y function. If x didn't work, just return Nothing)
It works fine, but imagine you had 3 functions that returned Maybes; it would get tiresome and messy nesting all those case statements endlessly.
[...]
case x text of
Just newText -> case y newText of [...]
Not being satisfied with boilerplate, let's make an operation that will simplify this a bit:
bind :: Maybe a -> (a -> Maybe b) -> Maybe b
bind (Just x) f = f x
bind Nothing _ = Nothing
now we can define myFunction like so:
myFunction text = (x text) `bind` y
if we want to add another operation:
myFunction text = (x text) `bind` y `bind` z
Congratulations, you've mostly made a monad. bind is one of the monad operators (>>=); the other operators are extremely trivial to implement for the Maybe type.
This is why we say a monad is just a container with some handy operators. Maybe is the container, and bind is a way to chain together operations (without explicitly taking the value out of the container) so the operators themselves don't have to know anything about the nature of what they're dealing with.
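As a tiny runnable illustration of that chaining (halve and quarter are made-up example functions, not from any library):

```haskell
-- a Maybe-returning step: halving only succeeds on even numbers
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- chaining with the real bind, (>>=): the first Nothing
-- short-circuits everything after it, no case statements needed
quarter :: Int -> Maybe Int
quarter n = halve n >>= halve
```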
I won't bang on any more; here is a longer, better-written tutorial[1] in the same vein as this short summary.
Oh yeah sure, all the time - Maybe shows up a lot. It becomes more useful and less of a pain once you realise you can deal with it using monad operators. From the standard library, a list function and a map function:
elemIndex :: Eq a => a -> [a] -> Maybe Int
lookup :: Ord k => k -> Map k a -> Maybe a
Say we have a map of lists of string, for some reason. We need to find the index of the element "foobar" in a particular key, so you could write:
lookup key map >>= elemIndex "foobar"
(a simple example for the sake of brevity)
Not very impressive, but monads aren't anything groundbreaking after all, they're just utilities that make our lives easier. There are more complicated operations that come in handy later, but they are quite straightforward once you get a grasp of the container metaphor.
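Here is a self-contained version of that lookup chain, using a made-up table (the name `table` and its contents are mine for illustration):

```haskell
import qualified Data.Map as Map
import Data.List (elemIndex)

table :: Map.Map String [String]
table = Map.fromList [("stuff", ["quux", "foobar", "baz"])]

-- both lookups can fail; (>>=) threads the Maybe through,
-- so a missing key and a missing element both just yield Nothing
indexOfFoobar :: String -> Maybe Int
indexOfFoobar key = Map.lookup key table >>= elemIndex "foobar"
```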
Take monads. The only way to "get" monads is to realize that they are just sections of the program that are imperative.
Not really. Monadic function composition is just like regular composition, except that the programmer is given the ability to make "f of g of x" do something more than just pipe the result of g(x) into f. This can look like imperative programming, but it's still purely functional.
If you're willing to call monads Kleisli arrows instead, then you can even use the same syntax to chain monadic computations as "regular" computations.
For example, write a function to add one to a number, then multiply by 4:
f :: Num a => a -> a
f = (*4) . (+1)
That's a normal function, with the normal function composition operator.
Now write a function to increment the state, in a stateful computation, by a number:
inc :: Int -> State Int ()
inc x = runKleisli (Kleisli put . arr (+ x) . Kleisli (const get)) ()  -- (.) from Control.Category
Even though State is a monad or Kleisli arrow, you can treat the stateful computation as regular function composition, because it is just composition. There is no imperative programming anywhere to be seen. Although operating on some hidden state feels imperative, it's not. (Under the covers, it is a bit different than what you might be used to. Each stateful function is really a function from its arguments to another function from the current state to the result. We compose the "result" functions into one big function from state to result. This involves a bit of plumbing, but the actual implementation is only two lines of code. And it makes for a very useful abstraction in many cases. But I digress...)
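The "bit of plumbing" above can be made concrete. Here is a minimal hand-rolled sketch of State (the names St, get', put', and incSt are mine, to avoid clashing with Control.Monad.State, which is what you'd use in practice); the Monad instance really is about two lines:

```haskell
-- a stateful computation is a function from the current state
-- to a (result, new state) pair
newtype St s a = St { runSt :: s -> (a, s) }

instance Functor (St s) where
  fmap f (St g) = St (\s -> let (a, s') = g s in (f a, s'))

instance Applicative (St s) where
  pure a = St (\s -> (a, s))
  St f <*> St g = St (\s -> let (h, s')  = f s
                                (a, s'') = g s'
                            in (h a, s''))

-- the two lines of plumbing: run the first computation,
-- feed its result and the new state to the second
instance Monad (St s) where
  St g >>= f = St (\s -> let (a, s') = g s in runSt (f a) s')

get' :: St s s
get' = St (\s -> (s, s))

put' :: s -> St s ()
put' s = St (\_ -> ((), s))

-- the inc example from above, written against this St
incSt :: Int -> St Int ()
incSt x = get' >>= put' . (+ x)
```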
Monads are just a way to make similar things, chaining computations with an arbitrary combinator, look similar in your code. We did it with State above, and there are lots of other things that work similarly. Computations that can return zero or more results, computations that can fail, computations that operate on transactional memory, etc. We use "Monad" (or "Arrow") to provide the programmer with a common syntax for interacting with each. That's all a monad is.
(IO is weird, and it is a monad, but it could also be an applicative functor, or comonad, or arrow, or a lazy list, or something else entirely. Don't extrapolate your knowledge or fear of the IO monad onto monads in general.)
I have no idea what a Kleisli arrow is, and a simple Google search points me right back in the direction of the HaskellWiki. They may be a perfectly simple concept, but if they are, then the term is not in widespread use. One of the major problems with explaining monads is that programmers who understand them frequently attempt to explain them by using terms even less well-understood than "monad". You get points for precision, but none for evangelism.
It is of no help or use to those who do not understand monads that they could also be an applicative functor, or comonad, or whatever. You may be perfectly correct, but for someone who is not already part of your community, and conversant in its terms, it does not particularly help.
A Kleisli arrow is just an arrow that works like a monad:
-- | Kleisli arrows of a monad.
newtype Kleisli m a b = Kleisli { runKleisli :: a -> m b }
instance Monad m => Category (Kleisli m) where
id = Kleisli return
(Kleisli f) . (Kleisli g) = Kleisli (\b -> g b >>= f)
I only brought it up to show that we can use the same operator, Control.Category's (.), to compose functions, monads, and arrows, and that they are not all that different from each other.
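To make that "same operator" point concrete, here is a small runnable sketch (safeSqrt and safeRecip are made-up example functions) composing two Maybe-returning functions with Category's (.), via the Kleisli wrapper defined above:

```haskell
import Prelude hiding ((.), id)
import Control.Category ((.), id)
import Control.Arrow (Kleisli(..))

safeSqrt :: Double -> Maybe Double
safeSqrt x = if x < 0 then Nothing else Just (sqrt x)

safeRecip :: Double -> Maybe Double
safeRecip x = if x == 0 then Nothing else Just (1 / x)

-- reads just like safeRecip . safeSqrt on ordinary functions,
-- but a Nothing from either step makes the whole pipeline Nothing
recipSqrt :: Double -> Maybe Double
recipSqrt = runKleisli (Kleisli safeRecip . Kleisli safeSqrt)
```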
(It's also worth noting that every language has their own language-specific terminology. If I Google for "std::string", I am only going to find C++ results, despite the fact that strings are a generic concept. If I search for "IEnumerable", I am only going to get C# results, despite the fact that enumerations are a generic concept.
Similarly, not many languages think of programming in terms of arrows and objects and categories, so you don't see many results for "Kleisli arrow" outside of the Haskell community.
Way to make his point. The only programming language where Arrows ever come up: Haskell. The wiki page on Arrows calls them "generalized monads". So far our definition of a monad is a specialized version of the generalization of monads.
Hear hear. I looked at Haskell years ago, liked the functional aspects - recursion, lists, etc. - but monads for I/O and databases were like hitting a wall. Switched to OCaml, which had enough imperativeness for that, but I got turned off when I got to functors of modules or something like that. Worked in Common Lisp for a while, but now I'm happy in Clojure; I got back the FP, but I don't need a PhD to talk to a database.
Problem is, when you use a programming language without arrows, you end up rewriting generic plumbing every time. If you want to write more code in your app to avoid using the word "monad", that's fine, but probably not rational.
This has nothing to do with the use of monads, but with how they are explained. When you explain monads to people who don't know what monads are, and your speech starts with "It is an arrow...", you are explaining monads to yourself, not to someone who doesn't understand monads (and consequently doesn't have a clue what arrows are).
So far as I can tell, monads are an abstraction of state (though calling them an abstraction of function application is more correct, calling it state is more intuitive to me, and closer to how they are represented in the type system). From there they get used for different things: Maybe == nullable types, List == lists, Either == unions, State == mutable state, IO == hidden I/O. Then a function that is pure, and doesn't know what your monad does, gets invoked by the monad, where the hidden state changes the flow of computation: Maybe performs a null check, List calls concatMap, Either picks the type in the union and then calls the function on it, State lets you read and update the hidden state, IO performs I/O and then shares the result with you.
Now there is an explanation of a monad that doesn't assume you already know what a monad is. It may not be good since it basically says monads are sticky higher order functions masquerading as a data type, but there you have it.
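To illustrate the "hidden state changes the flow of computation" point with Either, here is a small sketch (parseInt, positive, and doubled are made-up helpers): the pure function (*2) never inspects the failure case; Either's bind routes around it.

```haskell
import Text.Read (readMaybe)

parseInt :: String -> Either String Int
parseInt s = maybe (Left ("not a number: " ++ s)) Right (readMaybe s)

positive :: Int -> Either String Int
positive n = if n > 0 then Right n else Left ("not positive: " ++ show n)

-- (* 2) is pure and knows nothing about Either; the first Left
-- anywhere in the chain is what comes out the other end
doubled :: String -> Either String Int
doubled s = fmap (* 2) (parseInt s >>= positive)
```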
Honestly, I'm tired of explaining monads in every HN article about Haskell. I've written detailed explanations for newbies 100s of times. If you want to learn more about something, Google it.
Remember, your understanding Haskell will improve your understanding of the Universe. Your understanding Haskell will do nothing for me. So you can see how the incentives are aligned...
I don't disagree that every technical community has its own terms. But it's very difficult to broaden that community by restricting yourself narrowly to those terms. To me, "an arrow that works like a monad" reads like a recursive definition: if you don't understand how one works, you can't understand the other.
The net effect is that your community is going to be composed of people who made a conscious effort to break through the communication barrier. This isn't necessarily a bad thing; you end up with a small, focused group of very smart people. But it doesn't buy you many converts in the industry.
When a function uses global state to compute a value, it's like the global state is an argument to the function.
When a function modifies global state, it's like it returns both the value it computes and the modified global state.
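Those two sentences can be spelled out in a few lines: a "global" counter threaded explicitly, with each function returning its result alongside the (possibly modified) state. The names bump, peek, and demo are mine:

```haskell
-- "global" state made explicit: each function takes the counter as
-- an extra argument and returns the new counter alongside its result
bump :: Int -> ((), Int)   -- modifies the state
bump c = ((), c + 1)

peek :: Int -> (Int, Int)  -- reads the state
peek c = (c, c)

-- threading the state by hand; this is exactly the plumbing
-- that the State monad hides
demo :: Int -> (Int, Int)
demo c0 =
  let ((), c1) = bump c0
      ((), c2) = bump c1
  in peek c2
```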
If you're interested in more, you can read http://research.microsoft.com/en-us/um/people/simonpj/papers... which I believe is the original paper introducing monads into Haskell. The first 11 pages are very approachable and give you the rationale for why monads are necessary in Haskell and what they're for.
> Monadic function composition is just like regular composition, except that the programmer is given the ability to make "f of g of x" do something more than just pipe the result of g(x) into f. This can look like imperative programming, but it's still purely functional.
> If you're willing to call monads Kleisli arrows instead, then you can even the same syntax to chain monadic computations as "regular" computations.
I have no idea what any of that is meant to mean! Are you just trying to prove wisty's assertion that "Haskell programmers introduce them in the most incredibly obscure double-talk"?
Function composition is obscure? Did you take Algebra 1?
(I used the term "Kleisli arrow" so that you could Google for something if you wanted to learn more. All you really need to know is that you can use . to pass the output of a regular function into the input of another one, and you can use . to pass the output of a monad into the input of another. Saying it that way is very imprecise, though, and the goal of people explaining things is to say them precisely so that you have some hope of eventually gaining an understanding.
You have to bootstrap understanding. Assume you know what a Kleisli arrow is. Then read the rest of the post. Now your assumption is correct!)
I disagree. His point is not merely that Haskell advocates cannot clearly explain simple interfaces like Monad (let's be honest here; this is more about Haskell folks being too researchy, and not being better skilled in the dog-and-pony-show environment of "practical" marketing), but that the lack of skill in clear explanation is due to a reluctance to admit that "Haskell needs the ability to do imperative things in order to be useful".
The thing to "get" about monads is that they are a means to describe imperative things going on, in a way that blends in with not-necessarily-imperative things. Which is part of why it's so tough to explain them - we're all talking past each other regarding what "the point" of using monads is. The point to them in Haskell is mainly twofold:
1) Haskell users can use a Monad to describe net access, DB access, file access, and other "once done, the results are out of your control" effects in the same way as describing using (Writer) computations that keep a log of what's being done, (Error) exceptions, (Reader) environment variables, (List) functions with multiple - or no - results, and more.
It's really easy...and kind of mind-blowingly complex, all at the same time.
Another, more important reason, is that Haskell is too intellectually demanding for most industrial programmers.
While that's often stated as a reason, I don't believe it myself, simply on the grounds that anyone smart enough to use C++ "in anger" is smart enough to learn any language.
The reason that Haskell isn't popular (IMHO) is that a lot of programming isn't clever algorithms, it's forms (interfaces for getting data into databases) and reports (interfaces for getting data out of databases). A lot more is glue (connecting the output of one program to another). Haskell just isn't a good fit here.
I am very excited by F#; here we have an ML dialect that has minimal impedance mismatch with the OO/imperative world (and C# with LINQ has minimal impedance mismatch with the declarative world of relational databases). This is where Haskell types should look for work...
Why is Haskell not a good fit for glue? That's what I use it for, and it works great -- my programs are short, efficient, and easy to write and test.
My first Haskell project for work was initially a C++ project, but it was too hard to use C++ as a glue language, so I switched to Haskell. I would have used Perl, but Haskell works better on Windows and has an easier-to-use FFI.
I actually ended up spending more time trying to get some old version of Visual Studio to link against my super-old-proprietary-C-library that was the core of my project than I did writing the whole Haskell FFI binding to that library and writing the first version of the Haskell-based program.
This turned out to be the most trouble-free program I ever wrote. It had to operate on large datasets, and never died in the middle (like I'm used to with Perl). The only bug the program ever had was that one of my data validation rules was too strict, and rejected some valid data. (The validation rule said that the dates in the time series had to be increasing. But one series only had one point, and hence wasn't strictly increasing. The bug resulted in "Warning: bad data", though, not "Prelude.(!!): index too large", as the implementation was a fold.)
Incidentally, as a result of this project, c2hs supports Windows DLL calling conventions :)
After having written that, I realized it was more nuanced than that. Haskell's good for parsers, for example; I am actually working on one for a proprietary logfile format right now (it will "convert" said logfiles into SQLite .dbs). But if you have preexisting libraries for system A in Java and system B is CORBA, then you would be mad to put Haskell in the middle; the impedance mismatch is too great.
This is a software-engineering-in-general problem, though, not a Haskell problem. If you have a bunch of components that can only communicate in a very specific way, then all the components have to communicate in that very specific way. This limits flexibility and "working with the system" becomes the main engineering problem. I think this is why most software dev teams explode to needing 10 teams of managers to manage a million developers -- because so many "irreversible" decisions were made that the application becomes workarounds on top of workarounds. Haskell is not going to magically eliminate bad planning and the fear of refactoring. And similarly, it's not going to make it easy to keep adding shit on top of the shit pile, like Scala or Clojure would. (Not to dis on Scala or Clojure specifically. I've actually never seen anyone do this; they just add more Java 1.4.2 on top of their existing Java 1.4.2-only mess. It would be me that added the Scala or Clojure :)
I actually have a problem like this right now; I'm starting re-development on an application whose components talk via CORBA over an ultra-expensive and ultra-overengineered proprietary message bus. My plan is to write a bridge between the proprietary message bus and JSON-RPC (or something like that) in Scala, and make the rewritten components only talk JSON-RPC. Then, when everything is rewritten, there is no more super-expensive-message-bus requirements, and we shut it off. Now we have the ability to write components in any language.
(Also, the reason this doesn't count as adding shit on top of shit is because eventually the ugly bridge will go away. That only exists to make the rewrite into a refactor. You have to have a scaffolding, or all the shit will collapse on top of you and make a big mess. :)
Now I know the reply is going to be, "well, not everyone can just replace their proprietary crap with something simpler and more generic", but again, that's not a Haskell problem.
I would argue that most people who use C++ in anger don't fully understand that language, and get by because they do what they know has worked before. The accumulated experience doesn't imply intelligence.
That's not to say that I think it's intelligence that's lacking with respect to Haskell etc. I think the problem there is the mode and style of thinking, and that most programmers are too set in their ways to pick up the different way of approaching problems easily enough for it to become second nature. That, and functional programming's chosen compositional breakdown (fixed common data with an open-ended set of applicable functions) is not always as applicable as object orientation's breakdown of open-ended data with inheritance of fixed sets of functions; you often need to introduce an extra level of indirection (=> abstraction) with functional programming to get access to the dynamism that OO gives you out of the box.
Haskell is not popular because to be popular you must cater to the average Joe. And to cater to average Joe your foremost goal must be not making him uncomfortable about himself. Never forget this.
And since most average Joes just work to pay their bills, they don't give a damn about technical superiority, you know.
Yeah but still being hacker is to make oneself uncomfortable when not being skilled enough, then to overcome this by practicing the skill. So maybe we should say that it depends if someone is a hacker or a coding monkey. I've seen rates of about 20%-80% in the industry.
I'll have to correct you a bit. Judging by the article, Haskell currently isn't usable for work. So why should average Joe choose a half-finished product? And why should even the brightest hacker choose it when he already has all the tools he needs?
You have a point. I've been mulling this over lately, and I'm sharing my current conclusion here to hear HNers' opinions.
Let's consider Common Lisp. Newcomers and outsiders complain that CL doesn't catch on because there aren't free CL environments around with a painless setup (I've been in this camp too). However, if you read comp.lang.lisp, it's immediately obvious that many Lispers are accomplished and sharp programmers. Still, they haven't fixed such "issues" once and for all. So, what's the real deal? I think the explanation is this: such issues are not much of an issue to experienced programmers. That is, if you can't set up a CL system, you simply are not skilled enough as a programmer. Get over it. I think the same applies to other systems like Haskell, GNU/Linux, Emacs and so on.
What do you think? Thanks.
EDIT: Of course, there are other causes to consider... for instance, since most CL developers work on *nix, there is less work done on Windows. I think that installing a CL environment kind of triggers the Law of Leaky Abstractions: you'll have to deal with some nitty-gritty details...
I don't think it works like that. Put enough barriers to adoption (lack of: easy install, good documentation, friendly community, good error messages, bug free implementation, good libraries) in front of your prospective hacker audience and you'll find that adoption will suffer.
Not because they're not experienced enough or because in principle they can't make it work (I'm sure they could) but simply because there is only so much time.
If you haven't reached a certain level of confidence that a path is the right one within a given amount of time the natural thing for an experienced programmer is to abandon the path, after all, there are so many technologies to choose from that you can't afford to invest too much time into something that feels like a dead-end, even if in the longer term it might turn out to be great.
> I don't think it works like that. Put enough barriers to adoption (lack of: easy install, good documentation, friendly community, good error messages, bug free implementation, good libraries) in front of your prospective hacker audience and you'll find that adoption will suffer.
I can see your point, and I agree in principle. I disagree on "good documentation", "friendly community", "bug free implementation", "good libraries"... If a language lacks good documentation, a bug-free implementation, or good libraries, that just means it is either academic or not yet ready for production, so you'd better skip it. As for "friendly community", I'd rather say that there are self-selecting communities. For instance, many people complain about the "social problems" of Lisp, yet I had a very nice experience while asking even controversial questions on comp.lang.lisp.
> If you haven't reached a certain level of confidence that a path is the right one within a given amount of time the natural thing for an experienced programmer is to abandon the path, after all, there are so many technologies to choose from that you can't afford to invest too much time into something that feels like a dead-end, even if in the longer term it might turn out to be great.
That's why I've resolved to put my time only in both time and battle tested languages like Common Lisp and Erlang.
That is, if you can't set up a CL system, you simply are not skilled enough as a programmer. Get over it.
"Can't" is not at issue here - with enough effort, anyone can do just about anything. The question is whether it's actually worth the effort.
Given a choice between languages A and B, both of which have very enthusiastic userbases that are utterly convinced that their language is the One True Programming Language, but neither of which I've actually spent enough time with to know whether it will be worth the investment, which one will I spend the time to learn to work with?
A smart programmer will pick whichever one gets up and running the quickest so that he can actually see what coding is like in that language, as opposed to configuring/installing/compiling sources/tracking down a supported version/etc. Why waste time doing all that stuff when if you just hop to the next language over you can have it all for free? Not to mention that problems setting up a dev environment are loosely predictive of future problems dealing with deployment, hiring extra hands, getting support when things go wrong, etc...
Common Lisp is getting its clock cleaned by Clojure because the Clojure folks know that lowering barriers to adoption is a critically important thing when people are faced with such a glut of choice.
But then again, I've never gotten the sense that the CL folks actually want more people using it, that community seems to be more than happy to remain an elite insider's-only club that is - obviously - smarter than everyone else. Which explains a lot of the hate for Clojure - it's bringing powerful tools to the unwashed masses, and it turns out they wield them quite effectively, shedding some serious doubt on the implicit assertion that you have to be really, really smart to use any sort of Lisp.
> "Can't" is not at issue here - with enough effort, anyone can do just about anything. The question is whether it's actually worth the effort.
Perhaps I should have said "with little effort", then. If you can't set up a system with just a little effort, you are not skilled enough as a programmer. See, Ewjordan, I understand your frustration. I've had my share of setting up "newbie-hostile" systems, and you know what? As a consequence, I now understand better how things work, and I can set up whatever system faster. I have "graduated" from Ubuntu to Debian, I now use Emacs as my only editor, and so on. Little improvements... Still learning. Oh, and I'm faster at tracking down and fixing issues that I or my coworkers stumble upon.
> A smart programmer will pick whichever one gets up and running the quickest so that he can actually see what coding is like in that language, as opposed to configuring/installing/compiling sources/tracking down a supported version/etc.
Would we really call her a smart programmer? A street-wise programmer, maybe. If being such a programmer is your goal, that's fine. Have fun and make a lot of money. I think a really smart programmer will pick the language that will pay more dividends in the long run. Kind of like choosing a Vim clone over a regular editor.
> Why waste time doing all that stuff when if you just hop to the next language over you can have it all for free?
Because not all languages are created equal, and you sometimes have to choose between fighting the system in the beginning and fighting the language in the long run.
Of course, you'll have an ear for practical considerations too. For instance, I'm currently learning Common Lisp over Scheme because, even if I like Scheme better, CL has a wider user base and a proven track record.
> But then again, I've never gotten the sense that the CL folks actually want more people using it
I agree. Most of them don't seem interested. And I think I understand why. CLers are an old community, and a lot of newbies jumped in, whined about things not working, and left. Their ears are full. However, if you ask for help and ask for it smartly, you'll get answers; you'll - you won't believe this! - even get code written for you.
P.S.: Sorry, but I don't have time to review this post. Bye.
That is, if you can't set up a CL system, you simply are not skilled enough as a programmer
But these things are unrelated! Your ability as a Lisp programmer is entirely unrelated to your ability as a Windows/Linux/whatever sysadmin/build engineer/whatever. If anything it's the old "it works on my computer" excuse that (bad) tech support trots out. That's the thing that sets the Clojure guys apart, they actually are interested in making it possible to take it for a spin.
I'm reminded of the difference between PADI and BSAC, two competing dive schools. PADI says let's get you in the water as soon as we can, don't worry, it's only a swimming pool and we have instructors and lifeguards on hand, and when you're ready we'll go out to sea and continue learning there. BSAC says study in a classroom for 6 months, then let's go dive a wreck at 50m...
> Your ability as a Lisp programmer is entirely unrelated to your ability as a Windows/Linux/whatever sysadmin/build engineer/whatever.
Are we sure? Isn't this a case of the Law of Leaky Abstractions? We may think we can and should ignore issues related to the underlying hardware, but the reality is that we can't and we shouldn't. As programmers we need a bit of knowledge about system administration too, even when we are working on a virtual machine.
Kudos to Clojurers! Maybe there is more enthusiasm for, and understanding of, newcomers in Clojure because it is a new language. That is, they are not under the Curse of Knowledge.
We should acknowledge that Haskellers are trying to meet the needs of newcomers too, by releasing the Haskell Platform.
Yes and no. If you are an experienced programmer, then you'll have no problems with such "issues". On the other hand, why should you waste your time on a new language?
Each year there is a new programming language or framework that "will change computing". Oh, come on. We all know that Windows will become a lousy implementation of Unix. We also know that all programming languages will eventually have half-assed implementations of Lisp features. Why should you learn Haskell when Lisp is superior?
I'd like to add that I've not said "average Joe" in a dismissive tone. I just meant that most people are not that interested in living their lives to the fullest, and are just not interested in asking deep questions in whatever domain.
Moreover, hating to feel uncomfortable about yourself is just a human trait and most people just like following the path of least pain (another human trait, I think).
> I just meant that most people are not that interested in living their lives to their fullest, and are just not interested in asking deep questions in whatever domain.
Now if that isn't an insult, I don't know what is. You don't have to grok Haskell to live your life to the fullest (especially if your domain isn't programming languages).
Why an insult? Actions speak louder than words. How many people say health is one's most important asset, yet don't take any action to really take care of it? Would you still think they care that much about their health? How many people would like to do this or that, yet never take any action toward it? OTOH, if those same people were starved to near death, don't you think they would resort to extreme actions, because then their foremost interest would be getting food?
Again, I don't mean to use a dismissive tone. If it feels like that, it's just because I'm not a native English speaker, and I already struggle to say what I think.
I do think that the many people taking it easy are sorely needed by society. Innovative minds push our culture forward, yet armies of "average Joes", who just take every day as it comes, keep up with the day-to-day tasks, are loving and unselfish people, and so on.
And of course you don't need to grok Haskell or even programming to live your life to the fullest.
Not being interested in some smart subject does not mean you're stupid.
Speaking as a very experienced average Joe, all I need to see is a use case that shows how learning all of the weirdness discussed in the other comments in this thread will help me do my job.
I can't remember the last time I had trouble getting something done in C because it didn't support Kleisli arrows, or whatever.
Indeed Haskell lacks a proof of concept of its superiority. And you can't provide such a proof with a few lines of code.
When I studied Erlang, I was flabbergasted by the code size reduction and expressiveness thanks to pattern matching, by its fault-tolerance capabilities, by its hot-code swapping... and then by reading how Yaws stayed alive and kicking long after Apache had died under heavy load.
I think the main issue is that lazy evaluation makes it more difficult to reason about the performance of your code, both in terms of what actually gets computed in which thread / on which core, and in terms of space requirements (lazy evaluation can force the runtime to keep things referenced longer than you might expect offhand). I'm not saying a good Haskell programmer can't overcome these obstacles, just that this is definitely a perceived barrier to commercial Haskell development.
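A minimal, hypothetical sketch of the space-requirements point (the function names are made up): the standard lazy left fold defers every addition as an unevaluated thunk, while its strict cousin keeps the accumulator in constant space.

```haskell
import Data.List (foldl')

lazySum, strictSum :: [Int] -> Int

-- foldl defers each (+) as an unevaluated thunk, so the accumulator
-- grows into a chain of n pending additions before anything is
-- computed: a classic space leak on large inputs.
lazySum = foldl (+) 0

-- foldl' forces the accumulator at every step, so it runs in
-- constant space.
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 1000000])  -- 500000500000
```

Both produce the same answer; only their memory behavior differs, which is exactly the kind of thing that is hard to see from the source text alone.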
Check out this paper for a discussion of these issues in a real world Haskell application:
Failing to work on Windows is a much bigger problem than I think most people realize. You can write Java on Windows and (mostly) have it run on Linux or Solaris with no problem. And Windows has a 95%+ share in the corporate OS market. Ruby on Rails has a similar problem (though it's got better on Windows recently), and so does Django, which is a nightmare to get working in a Windows environment.
I'm not sure that these are really the same market share segments. Django, Rails and all that server side stuff is best run on servers, which more often than not, are going to be running a *nix.
Windows has C# and all that, and it works nicely for them.
Perhaps this leads to a wider question of who the Haskell audience really is and what problems they think they solve. Lots of languages have pigeon-holes, Ruby has its super dynamism, Python is an excellent scripting environment, C is fast, etc etc. Haskell is... mathsy. Mathsy applications already have Matlab and such.
I think at the end of the day, Haskell will probably remain a language that other languages learn from, and that's OK; it doesn't mean it's unimportant (I think even SPJ has said something similar).
I think the issue here is buy in from management. If I work at a Windows-centric shop, it's much easier to sell a different platform if it doesn't require building new servers for it. It's much harder to pitch a project when your requirements include new OSes and servers.
For example, I work at a Windows-centric shop, but if I could make a compelling case to build a tool/product on a different platform, I could get my boss to approve it if it didn't require a new piece of hardware. We have written tools in other languages besides C#, but we stick to languages that are easy to deploy on our current servers.
Haskell works fine on Windows. Some of the lead GHC devs use Windows machines. GHC even comes with a Win32 binding that lets you do GUI programming (it doesn't ship a standard GUI binding for any other platform).
Haskell is fiddly to use on OSX too. All the devs seem to have moved to OSX 10.6 and it isn't being fully tested on 10.5.8. Now you might say "just upgrade" but that's not the point.
I am running a Debian VM in VirtualBox for my Haskell work now... That's great when it's just me playing, but not so much for production code.
As someone who has been learning Haskell recently, I think I can bring up a point that no one else has: record syntax in Haskell sucks. Most business programming revolves around a form of DDD, where each of your business entities is fairly well defined. Creating records that match these business entities is fairly simple; working with them is not. None of the web frameworks I've seen follows a model-based approach where you bind your form representation directly to a model. I've been trying to figure out a way to do this in Haskell, but it looks like I have to learn Template Haskell and Generics on top of Haskell's already steep learning curve, for something that would be simple in almost any other language. So far I haven't given up, but I don't see mainstream programmers getting into Haskell until they can describe their processes in a model-focused fashion (and no, I don't mean with OOP).
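A small sketch of the record pain being described (the Customer/Supplier entities are made up): in standard Haskell records, field accessors share one flat namespace per module, so two entities cannot both have a plain "name" field, and "updating" a field rebuilds the whole record.

```haskell
-- Field names become top-level accessor functions, so two business
-- entities cannot both use a bare "name" field; everything gets a
-- prefix instead.
data Customer = Customer
  { customerName  :: String
  , customerEmail :: String
  } deriving Show

data Supplier = Supplier
  { supplierName  :: String
  , supplierEmail :: String
  } deriving Show

-- "Updating" a field really constructs a new record with one field
-- changed; the original is untouched.
renameCustomer :: String -> Customer -> Customer
renameCustomer newName c = c { customerName = newName }

main :: IO ()
main = print (renameCustomer "Bob" (Customer "Alice" "alice@example.com"))
```

Later GHC extensions relax some of this, but the flat-namespace behavior above is what a newcomer meets first.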
Why doesn't Haskell support objects? Seems like that would be quite useful. They don't need to be mutable or anything. They could be like records, that scale.
Typeclasses don't solve the same problems as objects do. Typeclasses are a way of getting access to more operations in polymorphic code, which in functional languages is a compile time feature; they're also a way to implement overloading.
Objects are a way of implementing protocols without needing to know the details of the object that implements the protocol, even at runtime. Polymorphic code in OO systems is a runtime feature, not a compile time one; objects may be loaded dynamically and almost always are at some level in any large OO system, for configurability and testing if nothing else.
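To make the contrast concrete, a small hypothetical sketch (the Describable class and Dog/Cat types are made up): typeclass dispatch is resolved per concrete type, and recovering an object-style heterogeneous collection takes an extra step, such as an existential wrapper.

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A typeclass provides overloaded operations, resolved per type.
class Describable a where
  describe :: a -> String

data Dog = Dog
data Cat = Cat

instance Describable Dog where describe _ = "dog"
instance Describable Cat where describe _ = "cat"

-- Object-style "any value implementing the protocol" requires an
-- existential wrapper so values of different types can share a list.
data AnyDescribable = forall a. Describable a => AnyDescribable a

describeAll :: [AnyDescribable] -> [String]
describeAll = map (\(AnyDescribable x) -> describe x)

main :: IO ()
main = print (describeAll [AnyDescribable Dog, AnyDescribable Cat])
```

The wrapper is doing roughly what an OO vtable does at runtime, which illustrates why the two features overlap but are not the same.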
That two features are different doesn't mean they don't solve the same problem. Here, most of the time, class polymorphism is used to solve problems best solved with parametric polymorphism (sometimes called genericity).
class Base
Base(int i_init) -- constructor
virtual int f(int param) -- a method
class Inherit1 (extends Base)
int f(int param) -- reimplementation 1
class Inherit2 (extends Base)
int f(int param) -- reimplementation 2
class Inherit3 … (ad nauseam)
We don't need class polymorphism with inheritance here. We can do something simpler:
class Base
Base(int i_init, int f_init(int)) { f := f_init; … }
int f(int)
Or even simpler:
int f(int i_init, int f_init(int)) // C-like syntax
f: int -> (int -> int) -> int -- Haskell syntax
(And don't tell me that passing functions as parameters is weird, or complicated. Functions are typically way simpler than "Objects".)
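The pseudocode above, sketched in Haskell (all names are made up for illustration): the behaviour that the Inherit1/Inherit2 subclasses would override is simply passed in as a function argument, and no class hierarchy is needed.

```haskell
-- The "Base with a virtual f" collapses to a function that takes
-- the varying behaviour as a parameter.
apply :: Int -> (Int -> Int) -> Int
apply i f = f i

-- Two "reimplementations" are just two ordinary functions.
doubleIt, squareIt :: Int -> Int
doubleIt x = 2 * x
squareIt x = x * x

main :: IO ()
main = print (apply 3 doubleIt, apply 3 squareIt)  -- (6,9)
```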
Typeclasses are a fine substitute for polymorphism in behavior (methods). But does Haskell throw out the baby with the bath water? Haskell seems to lack an ability to express a taxonomy of data models. Records are not even close to a substitute.
Why can't Haskell make it easier to express data structures? Something "turtles all the way down" introspective, like a MOF (meta-object facility, http://en.wikipedia.org/wiki/Meta-Object_Facility) would be nice.
Every language has a catalyst that pushes it from obscurity into mainstream use. Whether it be a project (Ruby on Rails), a programmer (Linus Torvalds -> C), a company (Google -> Python), or a library (Boost -> C++), there is always a force behind adoption.
Most languages undergo a "fad period", where it's hip and cool to write in it and people just do it because other people do it. Clojure is going through this right now, as are Scala and (arguably) Haskell.
We'll just have to see. I honestly hope Haskell becomes popular. Judging by http://shootout.alioth.debian.org/u32/haskell.php, performance really isn't an issue. It's more of just an issue of programmer adoption and a willingness to throw yourself out there to spend some time to learn how to think outside of the imperative programming box.
Writing a quick directory traversal function should be a no-brainer in any reasonably general-purpose language. The fact that the above reddit thread is jam-packed with the ins-and-outs of doing this simple task in Haskell tells me that Haskell may be great for some things, but it's probably not general-purpose, or a great fit for anything close to the heavy I/O, streaming, processing sorts of code I write.
I actually own Real World Haskell, and have tried off and on to get into it, but I found that Clojure 'took' in my brain 1000x more easily than Haskell. It's a real bitch, because I'd love to be able to say I can write Haskell, but gosh-darn it, I just can't. Grrr. Sigh.
quick edit: to answer your point about a 'killer app' for Haskell, the Parsec parser-combinator library looks bloody brilliant. If I run across a parsing need, maybe it'll help me get something useful out of Haskell, but I suspect my brain just doesn't get on with the Haskelly/ML-y languages.
To illustrate this point, a choice snippet from that Reddit thread, explaining part of what seems to be considered 'the proper solution'. Please note that my intention is not to make fun of this, but to illustrate that the terminology in which people in the Haskell community communicate makes it impenetrable for someone just trying to pick up the language and do stuff with it.
it's a straightforward lifting of the non-monadic version. You can almost get
it from g_hylo by using the identity comonad, it's distributivity law, the
identity natural transformation, and using T.sequence as the monad's
distributivity law[1]. But this doesn't quite get us there because it requires
that we can refactor the monadic parts of the coalgebra into the algebra.
I think you are being misled by the seemingly simple headline "how would you write du...". The question was not straightforwardly practical but deep in the territory of so-called 'recursion schemes':
The natural way to do this is to use an unfold to
generate a list or tree of all the files in the
directory tree, map over them to get the sizes,
and do a fold to get the total.
-- And similarly elsewhere, more explicitly, as he makes clear why he isn't interested in the 'obvious' solutions, e.g.:
Yep, this is a good practical approach if I just want
to "du". I'm looking specifically for the fold . map .
unfold approach ...
The interventions of doliorules and then winterkoninkje (whom you quote) were in fact the ones that spoke to his condition.
What most other languages do, though, is exactly equivalent to Haskell's unsafePerformIO. If you want to interact with your environment in possibly-destructive ways (and sending a message out of the runtime, whether to the network or the OS, always has the possibility of being destructive, because you can never guarantee what a system not under your control will do in response to your message), just accept the reminder that it's not functional/idempotent/memoizable/whatever else, and do it.
This is going to sound like an odd analogy, but saying "I won't use Haskell because I want to do IO, but don't want to unsafely do IO", makes you sound like a person playing a fighting game who blames their loss on the game not being "fair." If you're playing to win[1], and there's an "unfair" tactic, you use it yourself. If "cheating" at Haskell lets you do a higher-quality job, faster, than either using a different language or "playing Haskell as it was meant to be played" (i.e. being a scrub) then why not?
But "not necessary" has never been the problem here. People are complaining because the idiomatic way to explore a directory structure in Haskell isn't familiar to them—it's functional and mathy and strange (because exploring a directory structure in a pure-functional way is mathy and strange.) They want the language to be "as easy to use" as, say, Python, for directory-diving—but that basically means that they want to do it the same way they'd do it in Python. The only part that's hard is figuring out how to do it "the Haskell way"—a.k.a., the unfamiliar, math-filled way.
What people need to learn is that you don't have to do that for everything. You can have all the advantages Haskell brings while not trying to make everything in sight into arrows and co-monoids; if it's easier to do it the non-idiomatic way (i.e. the way that doesn't involve turning your easily-expressible business domain into difficult-to-express Mathematics), then do that part that way, and get back to work.
I think you've just nailed it down: Haskell is for mathematicians, or those mathematically oriented.
Most programmers are not mathematicians (especially the self-taught ones), since at the end of the day common programming tasks don't require you to know advanced math (I'm not implying the knowledge wouldn't be very beneficial).
Learning Haskell was a side project of mine two summers ago, when there were somewhat fewer tools to help newbies. Not an issue for me, since I've grown used to running code in my mind while reading it, because I first learned C without a computer to test code on. I need fewer "reality checks" than more "hands-on" developers (though I still throw snippets at the compiler to understand the gory details).
I think Haskell is for the mathematically oriented, both because only mathematicians would have cared to wrestle with monads to achieve purity (Clean is as pure as Haskell, with no monads in sight) and because of its notation, whose succinctness matches that of maths.
I confess I've been scared of Haskell's heavy leaning on operators and many levels of precedence among them. I prefer more verbose languages. I'm not mathematically oriented, I think.
The problem with that Reddit post is that it would take a long time to write du, and Reddit users don't have that long of an attention span. Some people are being nice and trying to answer, but nobody took enough time (say, a week of work) to generate a really good answer.
If you asked any question in the form "how do I write <non-trivial program> in <any language>", and you aren't paying for a week of someone's time to get a really good answer, you aren't going to get a really good answer. It has nothing to do with Haskell. The reality is that programming involves trial, error, and iteration. People on Reddit can give you good first drafts, but the final product is up to you. (If you asked "how can I write du in C", then someone could just paste the source code. But du isn't necessarily the best implementation of du.)
I've written some directory traversal code in Haskell for a work project, and I didn't find it to be particularly mind-bending or difficult. I did it the same way I would have in Perl or any other language; visit directory, run my computation, collect the answer, recurse.
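That plain visit/compute/recurse approach might look roughly like the sketch below (using the directory package; listFilesRecursively is a made-up name, and real code would also want error handling and symlink cycle checks):

```haskell
import System.Directory (getDirectoryContents, doesDirectoryExist)
import System.FilePath ((</>))

-- Visit a directory, recurse into subdirectories, collect file
-- paths: no recursion schemes, just the obvious traversal.
listFilesRecursively :: FilePath -> IO [FilePath]
listFilesRecursively dir = do
  entries <- getDirectoryContents dir
  -- getDirectoryContents includes "." and "..", so filter them out.
  let paths = [dir </> e | e <- entries, e /= ".", e /= ".."]
  fmap concat (mapM classify paths)
  where
    classify p = do
      isDir <- doesDirectoryExist p
      if isDir then listFilesRecursively p else return [p]
```

Summing file sizes on top of this gives a perfectly serviceable du, even if it is not the fold-over-unfold the Reddit poster was after.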
I guess the original poster had some performance requirements that the standard lazy IO approaches weren't satisfying. I think he may well have gotten a few good answers there, but I really wish he had tested them out and reported back on which (if any) worked for him!
maybe I'll test them out and do a little blog post on the subject
I started using Haskell five years ago (before the age of dons), and in 2007 wrote a very intensive (parser generator)^3 project in it, inspired by some half-finished postdoc papers I found from researchers at Utrecht University. But I haven't really used it in the last several years.
When people in that Reddit thread started debating zygohistomorphic prepromorphisms, even I assumed it was reddittard parody. Apparently not!
The desiderata were not 'writing a quick directory traversal function'; these were and are a dime a dozen. The writer explicitly demanded something you could only contemplate in the context of a few programming languages, namely an unfold - fold over the directory system. This is why he rejects all of the no-brainer solutions; he repeats this again and again.
I am similarly puzzled at the implication that C++ was obscure prior to Boost, given that C++ was pretty well entrenched in both industry and academia by the time work on Boost began in 1998.
I want to add that he was just the first guy who popped into my head when I thought of C. I questioned it at first too, but the more I thought about it, Linux itself could just as well have been written in Pascal or Lisp. If it had been, perhaps a lot of the tools, drivers, etc. that interact with Linux would have been written in that language as well.
C was never obscure in the way that Haskell is now and C++ once was. It was popularized by the project for which it was invented: UNIX, of which Linux is a two-decades-late clone. C's prevalence has always been closely tied to that of UNIX.
Not sure about that - I seem to remember reading some stuff from the mid/late seventies that was pretty skeptical about the whole idea of using a comparatively high level language for OS development.
Similarly, I don't think C++ was ever obscure in that it was widely publicized as the "next" C and people were pretty keen to use it - it was more that the early tools for C++ failed to live up to the initial promise.
Let's not fixate on the examples, folks. Perhaps they're arguable, but the original point is still valid: languages gain traction through prominent projects.
Yeah, let's take a position in a discussion, defend it with examples (a questionable tactic, but let's roll with it for now), and then, when someone picks apart the examples, say 'let's not fixate on the examples, folks'?
(Not attacking the OP; I'm not convinced one way or the other on the topic yet. I'm just saying that when someone is called on his arguments and methodology, the refutation in an intellectually honest discussion shouldn't be vigorous hand-waving.)
To expand on this, and on the OP, one thing that you need after the force is momentum.
I strongly feel that if Ruby was just Rails, it would have petered out by now (maybe people might have even gone back to Perl and CPAN). I think that GitHub may well be the driving force behind Ruby at this point. GitHub gems are pretty decent, but like the OP points out, Haskell packages aren't.
Well, I've read the comments so far and did not see anyone mention this.
I program both on the desktop and on the web.
On the desktop I write C#, which is mediocre, and recently I have started using C++ with Qt. Qt has huge advantages over C# (Qt Creator is an IDE that will in time be better than Visual Studio; it currently lacks a few features, but I still find it usable). In both languages I have my own set of libraries, which you can't find anywhere else and which make my life easier.
On the web I use PHP and sometimes Rails. Each has its advantages; I use PHP because I've used it since PHP 3, so there's a lot of historical baggage. For each I have the set of code and libraries I use the most.
So lately there is a whole wave of new languages and frameworks coming out. Why should I spend my time porting my libraries to Haskell? Do I get a significant advantage coding in it? No, because I mostly use my own libraries.
For a variety of reasons — including critical third-party libraries — anything that doesn't run on the JVM and interoperate with legacy Java code is just a non-starter for my company. I suspect many other organizations are in a similar situation. It looks like someone is working on a JVM port now, so hopefully we'll see something usable in a few years.
I appreciate the fact that Haskell's designers don't look down on "average" programmers like the Java designers did but I think it's sad that they don't look at all.
Only to the extent that Firefox is a JavaScript library. I think the major idea is that there is some application whose operation is very broadly directed by the embedded language, in a sandbox-like environment designed for the embedded language. For instance, you could write a video game whose graphics and collision detection and whatnot are all C++, but have Haskell bindings so that all the high-level game code would be Haskell. This isn't all that different from the usage of the much more C-like UnrealScript, which is used for exactly this purpose and is both statically typed and compiled, if I recall correctly.
Not to answer the question, but I can provide some reasons why I am not going to learn Haskell. I must say up front that I know next to nothing about the language, and my reasons may sound very irrational, superficial and plain silly. However:
It just looks ugly. That's it. I cannot imagine myself sitting all day staring at (or writing) something that looks like an explosion in a regexp factory with the ruins of Perl fallen through it.
True, beauty is in the eye of the beholder. But even though I believe there are beautiful things that need considerable effort and understanding before their true elegance can be appreciated, I cannot imagine such a thing being ugly at first sight.
For me it just looks like a lot of effort went into making it look different. Maybe it makes perfect sense once you learn it, but it just does not look elegant, and that kills all the motivation to try.
This all of course is IMVHO.
> True, beauty is in the eye of the beholder. But even though I believe there are beautiful things that need considerable effort and understanding before their true elegance can be appreciated, I cannot imagine such a thing being ugly at first sight.
It's always the same reason people dislike new languages. People don't realize that they can't look at a new language with an open mind if they already know a language, because they will always compare the new one to the one they already know.
Most people are only open-minded when they learn their first language; after that, everything else just looks ugly compared to the first one.
You just can't decide whether something is ugly until you have learned it, understood the meaning of the syntax, and seen how all the parts of the language fit together.
Haskell is nothing like Perl. It's one of the most beautiful and well-thought-out languages I've seen so far. Absolutely nothing like the automagic of Perl.
Really? How beautiful it is is exactly what keeps drawing me to Haskell even though I have more invested in the dynamic language camp.
>max = head . sort
Due to laziness, the above will find the max entry in O(n) time, just like your hand-written loop would. How can you not find that beautiful?
Now I do agree that they often seem to use too many symbols that look like other symbols but the few times I've investigated it actually ended up making sense (e.g. Arrows).
I love Haskell and understand your point. Laziness is beautiful; it lets you express things simply and also improves the reusability of code, etc.
However I think that you didn't pick the best example.
I tried out:
>head $ sort [1 .. 10000000]
6 secs, using ~2 GB of heap! (Actually it should be a reverse sort.)
and the "hand coded loop":
>let mx (x:xs) m = if x > m then mx xs x else mx xs m; mx [] m = m
>mx [1 .. 10000000] 0
3 secs; heap usage stays negligibly low and constant.
Something is clearly not behaving as you depicted.
(Of course, my 'loop' code is not the exact equivalent of the last . sort composition, since it requires a 'minimum' parameter to be passed in advance, which not all types have. On the other hand, it also works for the empty list.)
I also have the feeling that, barring a special compiler optimization (a very 'specific' one, I fear), simply applying the 'head' function to a sorted list would stop when the first result element is produced, which is not after only O(n) comparisons. Granted, you don't have to wait for a full sort, since the sorting algorithm could guarantee that the rest of the list contains only lesser/greater elements and stop there. But keeping track of all this should be space-consuming compared with a simple linear scan.
Anyway the space problem of the "lazy" solution is a bigger issue than the number of comparisons.
I'm not a Haskell master, but if I've got it right, one of the major problems of lazy programming is that in some situations it can degenerate into a huge amount of "unevaluated thunks": frozen computations yet to be performed, which require some state to be held in memory (like function arguments, I guess).
Or, in this case, the space usage is caused simply by the list having to be materialized in memory instead of being traversed and generated on the fly (though 2 GB seems a bit much).
Anyway, the point here is that the two methods are not equivalent.
I think understanding the impact of laziness on space is an issue that certainly steepens the learning curve, as it takes time to master this and other optimization techniques if you want predictable performance from Haskell.
(BTW, I actually use Haskell for work, perhaps in a slightly counterintuitive way: for quickly prototyping ideas and solutions. Sometimes I get inspired by the solution I end up with in Haskell and translate it into Clojure or Java (a work requirement); at other times I have to rewrite it completely. But being able to prototype quickly in Haskell really helps me a lot. I would love a stable GHC JVM backend... it would change my life.)
As far as I can understand from the blog and the cited mailing lists, it works, but it requires a carefully coded sorting method; otherwise it's O(n log n), as expected.
I understand that you wanted a simple example; I just wanted to expand a little on the issue of laziness and make it look slightly less magical.
I found a fascinating example of the expressiveness and beauty of Haskell due to lazy evaluation, which I think is better suited to FP evangelism:
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Here we are defining the Fibonacci sequence declaratively, with the definition recursively referring to itself.
The way 'fibs' is written directly reflects the definition of the Fibonacci sequence itself: the list is defined as its first two elements followed, as a tail, by the result of applying (+) to each pair of preceding elements.
"You can borrow things from the future as long as you don't try to change them"
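For the skeptical, the definition above is easy to check; a quick sketch:

```haskell
-- The list is its own definition: each element past the second is
-- the sum of the two before it, computed lazily on demand.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```

Only as many elements as you demand (here, ten) are ever computed, which is the "borrowing from the future" at work.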
No, using head is what causes the result to be O(n), even though a full sort would be more expensive. Because only the first entry is demanded, only the first entry will actually be produced by the sort.
I assume you're pointing out that I have my sort backwards, but I was being intentionally ambiguous about that part because it's not relevant to the point I was making.
I understand that laziness could make it sort only as much as needed to get the first element of the sorted list.
Yes, it triggers my Rain Man instincts to point out that
1. It should be max = last . sort
2. Or min = head . sort
3. You are making assumptions about the sorting algorithm for the evaluation to be short-circuited.
It wasn't me who downvoted; I can't (since you responded to me) and wouldn't have anyway. If you use last, then the whole list has to be sorted (O(n log n)-ish) before the last element is retrieved. And yes, I'm assuming a merge sort. But I still don't think that detracts too much from my point.
But it depends on the sort algorithm. In the exercise where I did this (in OCaml, with some custom lazy code, since OCaml's wasn't lazy enough for what I was doing), I sorted in descending order.
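For what it's worth, the algorithm-dependence is easy to demonstrate with a naive lazy quicksort (a sketch, not the standard library's sort, which is a merge sort): asking only for the head never demands the right-hand recursion, so on average only O(n) work is forced.

```haskell
-- A naive lazy quicksort. Demanding only the head forces just the
-- left spine of the recursion: O(n) comparisons on average, even
-- though a full sort would cost O(n log n).
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p]
               ++ [p]
               ++ qsort [x | x <- xs, x >= p]

main :: IO ()
main = print (head (qsort [3, 1, 4, 1, 5, 9, 2, 6]))  -- 1
```

With a sort that produces its output strictly, the same head . sort composition gains nothing, which is exactly point 3 above.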
Love the syntax myself. Even before I knew anything about it, the resemblance to maths was very appealing.
It's even better when you understand it more. The syntax brilliantly expresses the fact that you're essentially working with a computerized lambda calculus.
The default behavior was to statically link to libgmp. I don't know about you, but making people's programs GPL by default doesn't exactly inspire confidence in the industry.
Whether or not libgmp was ever statically linked by default depended on your distribution. Were you shipping apps on Windows prior to the -dynamic flag's introduction? If so, then you were watching what libraries you were using, and following these instructions: http://haskell.forkio.com/gmpwindows
These days you can just run "cabal license-check" (IIRC) which will type check all the libraries you use for compliance.
Dons, I know you are devoted to Haskell and will not take any criticism of it lightly. But I am not your enemy. I am willing to praise it myself when it is more suitable. Not when you try to spin it.
You and your supporters have succeeded in bombarding every article that even mentions Haskell the wrong way with snide comments, and have probably won over a few hobbyists. At the end of the day, though, the managers making the decisions will not think "Don Stewart said so, it must be true". They will reach the same conclusions that Jon Harrop, I, and other people you have ostracized have reached. And this will continue even if you succeed in suppressing all negative feedback on Haskell.
The only thing I ever saw Don Stewart "bombing" with was factual evidence. And what he destroys by truth should be annihilated anyway (P. C. Hodgell).
"Don Stewart said so, it must be true" actually isn't such a bad heuristic (as far as Haskell is concerned). From what I've seen, Don is quite cautious.
> The only thing I ever saw Don Stewart "bombing" were factual evidence.
This is not surprising coming from someone involved with Haskell. Do I really need to point you to the bug ticket showing that it was only 8 months ago that you could use something other than GMP? http://hackage.haskell.org/trac/ghc/ticket/601
Even with the option to dynamically link, that doesn't change the fact that static linking was the __default__. And what a scary default it was. Company lawyers don't like to touch anything related to the GPL with a 10-foot pole, because it is really up to a jury to decide what counts as a "derivative work", even if you only link dynamically! Maybe if you live in France, or run a 5-person company, that is an acceptable state of affairs.
Anyway I am done, you and Dons win, are you happy? I simply have no ulterior motive or incentive to defend my findings against organized groups who have their livelihoods and PhDs based on Haskell.
Err, I was only objecting to your attack on Don's on-line behaviour. Actually I'm (still) an outsider. Consider "what I ever saw" as anecdotal evidence.
Now it's a pity that companies are scared of the GPL. As far as I know, most software is custom or private, is never released[1], and thus can't possibly infringe the GPL.
I think the real problem here is corporate culture. No company would object to using glibc in a proprietary program. So why would they be scared of any other LGPL library, especially when they don't even release the software? That's plainly irrational.
Now our choice is hard, but simple: get rid of the "GPL == we can't use it" line of thinking, or get rid of restrictive licences.
[1]: Much custom software actually belongs to the company that wrote it, rather than to the company (or government) that purchased it. In that case GPL infringement could happen. But really, I can't fathom why some customers still don't demand complete ownership (including source code) of their custom software. That strikes me as either wildly misinformed or incredibly silly.
Why do you object? I just demonstrated the legitimacy of my arguments, while you still stand by Don's complete denial? I find it very off-putting that anything remotely negative about Haskell is usually met with snide comments questioning the author's intelligence, or with the author being labeled a troll. It also doesn't help that my comment was upvoted a few times, and when Dons & Co. reply, it drops to -3. I've also observed this type of behavior on Reddit.
You aren't really an outsider considering your participation in the Haskell Cafe mailing list and being credited in Real World Haskell.
And companies obviously do object to using glibc in a proprietary program. You are probably thinking of the GCC runtime, which has an explicit exception that prevents commercial programs from becoming GPL.
You also don't have any proof that most software is custom or private. Even if it is the majority, you cannot deny that the shrink-wrap industry is huge, and has been during most of Haskell's existence. The answer is to avoid GPL code in the language, which is what I was saying the whole time. You don't put the foundation of your business at risk just so you can use a cool new language.
> It also doesn't help that my comment was upvoted a few times, and when Dons & Co. replies, it drops to -3.
Interestingly, I saw it happen almost everywhere. It looks like people are more likely to down-vote comments which are contradicted by a recognized authority. This is of course not a good thing.
> You aren't really an outsider considering your participation in the Haskell Cafe mailing list and being credited in Real World Haskell.
I didn't post more than a few messages, two years ago, and I contributed about 3 minor comments to Real World Haskell. I've read a few papers, but have seldom written anything (except http://www.loup-vaillant.fr/articles/assignment of course). I'm not a complete outsider, but hardly what I'd consider an insider.
> You also don't have any proof that most software is custom or private.
I don't. But in this (huge) niche, my point still stands: being afraid of the GPL is silly. Corporations that are afraid should stop being so.
> you cannot deny that the shrink-wrap industry is huge
I cannot and I won't.
> The answer is to avoid GPL code in the language […]
Yes, assuming you want the (proprietary) shrink-wrap industry to use Haskell. I want it gone. Therefore, I see the GPL as the solution.
This is febrile raving, codexon, why are you insisting on it? It is like a kind of madness, right on its face. A range of solutions were given on http://haskell.forkio.com/gmpwindows of which the writer, Sigbjorn Finne, said "nothing too magic or new here". None of them seem particularly appetizing, of course, which is presumably why people continued to labor.
I was trying to think why I found this so disturbing: "I simply have no ulterior motive or incentive against organized groups who have their livelihoods and PhDs based on Haskell." The thing is, it cannot be honest. A simple reflection, a Google search, e.g. http://www.google.com/search?num=100&hl=en&lr=&a... and some time spent reflecting on the modules linked, will show that Don S. could make a far better 'livelihood' doing just about anything other than writing Haskell. The trouble you are faced with is this, that the only way to explain all of this is that he does these things because he thinks they are right and good, and for no other reason.
He does it because he is biased toward something he has already invested in: years of PhD candidacy and many hours of writing libraries. If he completes his PhD, he will likely try to get a Haskell job. Just because he wrote a thousand Haskell packages doesn't mean he is as well-suited to another language. Sure, he could probably be a C programmer, but his pay grade won't be as high as if he landed a Haskell job that takes advantage of his experience and degree.
That you cannot see this is because you also have the same bias as Dons.
Interestingly, I came to the exact opposite conclusion. Haskell is easier than C++ (less code and more safety), and it is equally fast.
Haskell may not be the perfect language for your generic FFT library to be distributed with your OS, but it's a great choice for building applications that would otherwise be C++ or Java. Java and C++ had no trouble catching on, despite being slower than C, and Haskell isn't even that much slower than C.
You know, it'd be more convincing if you came up with reproducible benchmarks to back up your claims. According to this guy's data: http://www.codexon.com/posts/debunking-the-erlang-and-haskel... Haskell is even slower than Python (which is not known for its speed) for implementing simple servers.
Haskell code that performs close to its C/C++ equivalent (within an order of magnitude in run time and space) is typically also close in code length (and often less readable!), and often unsafe as well (requiring explicit unboxing, etc.).
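A small sketch of the kind of manual tuning being described (strictness annotations, a milder cousin of explicit unboxing; the function names are my own): the naive fold accumulates lazy thunks, while the tuned version forces the accumulator at each step.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Naive left fold: thunks pile up in the accumulator and can
-- exhaust memory on large inputs.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- Hand-tuned loop: the bang pattern forces acc before recursing,
-- keeping the accumulator evaluated (and unboxable by GHC).
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

main :: IO ()
main = do
  print (sumLazy   [1 .. 1000])  -- 500500
  print (sumStrict [1 .. 1000])  -- 500500
```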