Clojure: All grown up (wit.io)
240 points by djacobs on March 12, 2013 | 185 comments



Clojure is nice, but the idea that it is somehow built for the real world where Haskell isn't is just patent nonsense that I wish people wouldn't repeat. You do not need to understand complex math to actually use Haskell! All the important ideas can be understood and used in Haskell terms alone--you can think of them just like the ideas and jargon introduced in other languages, except that they are more general, more elegant and more consistent, because they have a unifying underlying theory which the people who designed them in the first place do understand.

The biggest conceptual shift is to thinking functionally rather than imperatively, so it's going to be similar in both languages. The difference is that Haskell is more thorough, but the fundamental ideas are very similar.

Haskell, of course, has many of its own advantages which are mostly detailed elsewhere. I'm merely going to give you my highest-level take-away from trying Clojure: Haskell has much better facilities for abstraction. Haskell allows you to use types and data representation specific to your domain while still allowing you to take advantage of some very generic libraries. And the custom types are not actually complex, unlike what Clojure leads you to believe with its preference to using the basic built-in types: you construct your types from basically three fundamental operations: combining multiple values into one (like a tuple or struct), giving a choice of values (like a variant or disjoint/tagged union) or creating functions. That's all there is to them!
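
In Haskell syntax, that looks roughly like the following minimal sketch (Point, Shape and area are invented names for illustration):

    -- Combining multiple values into one (a product, like a tuple or struct):
    data Point = Point { px :: Double, py :: Double }

    -- Giving a choice of values (a sum, like a variant or tagged union):
    data Shape = Circle Point Double      -- centre and radius
               | Rectangle Point Point    -- opposite corners

    -- Creating functions (the arrow type):
    area :: Shape -> Double
    area (Circle _ r)    = pi * r * r
    area (Rectangle p q) = abs (px q - px p) * abs (py q - py p)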

Basically, don't be afraid of Haskell, whatever the common wisdom is. As usual with these sorts of things, it's much more common than wise.


> Haskell has much better facilities for abstraction

I've been using Clojure for about 5 years now. I've actively worked for the past two years on core.logic which has quite a few fancy abstractions (custom types, protocols, macros) that all interact quite nicely together. Now, I'm not a Haskell expert, but I do feel that some of the things I've built could be challenging in the Haskell context. I might very well be wrong about that.

That said, I'm actively learning Haskell as there are some aspects of Haskell I find quite attractive and in particular a few papers that are relevant to my Clojure work that require an understanding of Haskell. My sense is that even should I eventually become proficient at Haskell, my appreciation for Clojure's approach to software engineering would be little diminished and likely vice versa.


Although another responder floated Template Haskell as Haskell's alternative to macros, Haskell loses out in that comparison. TH is both harder to work with than Lisp macros and sacrifices type safety[1], so it is avoided where possible. TH is used to generate boilerplate, but for other purposes (especially creating DSLs), Haskell favors abuse of do-notation through something like Free Monads[2].

Of course, this is nothing at all like macros, but in practice we can achieve many of the same goals while maintaining type safety. So, win win win (the third win is for monads).

[1] http://stackoverflow.com/a/10857227/1579612 [2] http://www.haskellforall.com/2012/06/you-could-have-invented...
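
To make the Free Monad point concrete, here is a minimal sketch in the spirit of [2], assuming the free package from Hackage (Teletype and the helper names are invented):

    {-# LANGUAGE DeriveFunctor #-}
    import Control.Monad.Free (Free (..), liftF)

    -- One constructor per primitive "statement" of the DSL; the last field
    -- is the continuation, which is what makes this a Functor.
    data TeletypeF next
      = PutLine String next
      | GetLine (String -> next)
      deriving Functor

    type Teletype = Free TeletypeF

    putLine :: String -> Teletype ()
    putLine s = liftF (PutLine s ())

    readLine :: Teletype String
    readLine = liftF (GetLine id)

    -- A typed "program" written with ordinary do-notation:
    greet :: Teletype ()
    greet = do
      putLine "Who are you?"
      name <- readLine
      putLine ("Hello, " ++ name)

    -- One interpreter; a pure one for tests could reuse the same program.
    runIO :: Teletype a -> IO a
    runIO (Pure a)             = return a
    runIO (Free (PutLine s k)) = putStrLn s >> runIO k
    runIO (Free (GetLine k))   = getLine >>= runIO . k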

EDIT: As dons points out, I was imprecise in my wording. I don't mean to say that TH leads to Haskell programs which are not type safe. The compiler will, of course, type check generated code. In general, given that dons has been programming Haskell for 14 years compared to myself who has been doing it for 1 year, prefer what he has to say on this subject.


> sacrifices type safety

How does TH sacrifice type safety? The generated code is type checked.

For actually customizing syntax via macros, quasi quotation is the popular approach, combined with TH. People can, e.g., embed JS or ObjC syntax fragments directly into Haskell this way. http://www.haskell.org/ghc/docs/latest/html/users_guide/temp...


I'll have a crack at some suggestions, as a Clojure user (nice work on core.logic!) with some Haskell familiarity.

For logic programming: you might find Control.Unification [1] interesting -- and of course if you don't need unification then you can go pretty far with the plain old List monad.
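
For example, a small backtracking-style search needs nothing beyond the List monad (a minimal sketch, names invented):

    -- Every list is a set of alternatives; the monad enumerates the
    -- combinations, and an empty list prunes a branch.
    pythagoreanTriples :: Int -> [(Int, Int, Int)]
    pythagoreanTriples n = do
      a <- [1 .. n]
      b <- [a .. n]
      c <- [b .. n]
      if a * a + b * b == c * c
        then return (a, b, c)
        else []                 -- fail: this branch contributes no answers

    -- pythagoreanTriples 20
    --   == [(3,4,5),(5,12,13),(6,8,10),(8,15,17),(9,12,15),(12,16,20)]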

Protocols/multimethods: Haskell typeclasses are really really powerful. Probably my favourite of all the approaches to polymorphism I've seen anywhere, although YMMV (they do lean heavily on the type system.)

Macros: there is Template Haskell [2], which is pretty interesting although I can't claim to've used it myself. It doesn't quite share the simplicity of macros in a homoiconic dynamic language, but at the same time in Haskell it feels like you don't need to lean on compile-time metaprogramming as much as you do in a Lisp to achieve power. (Can't quite put my finger on why).

[1] http://hackage.haskell.org/packages/archive/unification-fd/0... [2] http://www.haskell.org/haskellwiki/Template_Haskell


Kiselyov et al have an interesting paper on logic programming in Haskell [1]. They introduce a backtracking monad transformer with fair disjunctions, and negation-as-failure and pruning operations. And the implementation is based on continuations, so it's more efficient than reification-based approaches such as Apfelmus's operational monad [2].

[1]: http://www.cs.rutgers.edu/~ccshan/logicprog/LogicT-icfp2005....

[2]: http://apfelmus.nfshost.com/articles/operational-monad.html


Is that Basque-Icelandic pidgin?


Heh, fair point. I guess it can read like that. I'll expand.

Monad transformer = thing that can transform a monad into another one with additional primitive operations. Monads are more or less the Haskell way of achieving something akin to overloading the semicolon in imperative languages. In imperative languages the meaning of the semicolon is baked into the language. It seems so natural that it feels stupid to think about what it means: normally it's just "do this, changing the program state by assigning some values to some variables, then do this with this new state" but if you think harder, it can also mean "do this and skip the rest of the loop" (if the first statement is a break) or "do this and then skip a bunch of levels up the call stack until you find an appropriate handler, unwinding the stack in the process" (if it's a throw). Haskell doesn't have any of this baked in, but in a way allows you to define the semantics of the semicolon yourself.

So monad transformers then allow you to build up more complicated semicolon semantics by combining simpler ones (and getting the full set of their primitive operations, such as assignment, break or throw statements in the previous examples).
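
A small sketch of what that combining looks like in practice, assuming the mtl/transformers packages (Account and withdraw are invented names):

    import Control.Monad.State (StateT, evalStateT, get, put)
    import Control.Monad.Trans.Class (lift)

    -- StateT adds "assignment"-flavoured primitives (get/put) on top of
    -- Maybe, whose "semicolon" already means "abort everything on Nothing".
    type Account = StateT Int Maybe   -- state: balance; Nothing: aborted run

    withdraw :: Int -> Account ()
    withdraw amount = do
      balance <- get
      if amount > balance
        then lift Nothing             -- the "throw"-flavoured primitive
        else put (balance - amount)

    demo :: Maybe Int
    demo = flip evalStateT 100 $ do
      withdraw 30
      withdraw 50
      get                             -- Just 20; a further withdraw 60 would give Nothing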

Logic programming, in its simplest form, is concerned with deducing "goal" facts of the form C(X), from a bunch of rules of the form "A(X) and B(X) and ... imply C(X)" and other facts of the form "A(X)". One way you can do this is look for all the rules with your goal fact as their conclusion, then look how you can derive their premises, and so on. Which essentially boils down to a backtracking search.

So what Kiselyov et al did was to implement some primitive operations and overload the semicolon in a way which makes it easy to perform a backtracking search. Or more precisely, since it's a monad transformer, they figured out a way to add these primitives to any set of existing ones (such as assignment primitives for instance). Their implementation also provides some interesting backtracking operations which can be tricky to implement (the aforementioned fair disjunctions, negation as failure and pruning). And it is efficient since it's based on continuations (which are just functions of a certain format), as compared to other approaches which first have to represent the target program as data ("reify" it), then define an interpreter for that data and finally run the interpreter.
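
The design from that paper is available on Hackage as the logict package; a minimal sketch of the fair disjunction (assuming logict is installed):

    import Control.Monad (msum)
    import Control.Monad.Logic (Logic, observeMany, interleave)

    -- Two infinite streams of answers.
    nats, negs :: Logic Integer
    nats = msum (map return [0 ..])
    negs = msum (map return [0, -1 ..])

    -- Plain mplus would starve the second stream; interleave is fair.
    firstSix :: [Integer]
    firstSix = observeMany 6 (interleave nats negs)   -- [0,0,1,-1,2,-2]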

Better?


Almost.

One of my frustrations with learning Haskell is that everybody assumes that I am coming from an imperative programming background. I've toyed with Python a wee bit, but I've really only ever used functional languages (Scheme, R, Clojure) and have only ever programmed in a functional style. Needless to say, I have no clue what the semi-colon is supposed to do in imperative languages. As I have been told many times, knowing functional programming lets you start at 2nd base with Haskell...but no further.

Thanks for the attempt. I'll try again after I'm done with RWH.


Sorry, I guess that it's the default assumption since most people do come from an imperative background. And you had to choose a language with significant whitespace as your first imperative language, that's just low ;) But essentially, in languages with Algol-inspired syntax the semicolon is an end-of-statement marker. In Python, this is normally (but not always) the newline character.

I hope that helps decrypt the monad explanation part of my post somewhat. I wish I could give one which better relates to your background, but of the three languages you mention, I've only had very superficial exposure to Clojure. And you've also probably heard it before, but I would recommend LYAH over RWH as the first Haskell text.


I also find it amusing that Haskellers always tell you "semicolon" when you normally won't ever see one in any code.

What they're talking about is statements. If you have a completely pure functional language, no function should ever have more than one expression. If the function had two expressions that would mean one of them did something and the result was thrown away. But if you don't have side effects, why would you have an expression that does something and then throws the result away?

In Haskell any function can only have one expression. So if you need to do several steps (i.e. you need side effects) you have to use do notation. Do notation is syntactic sugar for turning what appear to be several statements into one statement.

If you use the "curly brackets" way of coding then you would separate these statements with semicolons (tada!), but most code is white space significant so they just use one line per statement.

So what do does is take the code you wrote on those lines and determine how to combine it. If you're using do then you're using a monad of some kind (a container, basically). Given the container and the contained, you can do two kinds of operations: "side effect kinds" that appear to perform a statement and ignore the result (these actually work with the container), and ordinary expressions.

The interesting thing about this kind of code is you don't keep track of the container. Sometimes you don't see any trace of it at all except in the type signature. There will be functions that talk to the container but they don't appear to take the container as a parameter (which is good since you don't have a handle to it anyway). Behind the scenes do is setting up the code in such a way that the container is actually passed to each of your expressions so your "side effects" aren't side effects at all, they're modifications to the invisible (to you) container.

And each line is fused together by using one of two functions the container defines. So this is where the power comes in. How the container chooses to put these lines together depends on what the container actually is. An exception container, for example, might provide a special function (called, say, "throw") that will call each expression (functions by the time the container sees them) unless one of them calls its special "throw" function at which point it doesn't execute any further expressions and instead returns the exception it was passed.
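
Concretely, the desugaring looks like this minimal sketch (plain IO, invented names):

    askName :: IO ()
    askName = do
      putStrLn "What's your name?"
      name <- getLine
      putStrLn ("Hi, " ++ name)

    -- ...is just sugar for chaining the lines with the container's two
    -- combining functions, (>>) and (>>=):
    askName' :: IO ()
    askName' =
      putStrLn "What's your name?" >>
      (getLine >>= \name ->
        putStrLn ("Hi, " ++ name))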

I don't know if that makes things better or worse. :)


That did make it a little better, thank you :)


> Better?

Much better! Thank you :-)

I think you've also managed to prod me a little closer to understanding monads -- I always feel most tutorials are too complex, or too theoretical. Relating monads and the semicolon gave my brain a nice "hint" towards understanding them better, I think.

It's exactly related to the problem(s) I have with Haskell -- magical syntactic sugar that for me doesn't really seem to simplify anything. Quite like how the semicolon works in JavaScript -- an end-of-statement marker is needed, but in JavaScript it is inconsistent. And like in other languages where the semicolon ends a statement, but is something one normally doesn't really think about...


For me, monads made sense when I started thinking of them as an API for composing normally uncomposable things. I'll try to explain it without assuming much knowledge of Haskell.

Composition is key in functional programming, and when you have two functions a -> b and b -> c, there is no problem... You can just compose them and get a -> c.

Using the List monad as an example, consider the case when you have a -> [b] and b -> [c]... You have a function producing bs but you can't compose it with the other function directly because it produces a list. We need extra code to take each element of the source list, apply the function, and then concatenate the resulting lists.

That operation becomes the "bind"[0] (>>=) of our Monad instance. Its type signature in list's case is [a] -> (a -> [b]) -> [b], which basically says "Give me a list and a function that produces lists from the elements in that list, and I will give you a list", but that is not so important. The point is that you start with a list and end up with another list, which means that by using bind, you can compose monadic operations (a -> [b]) indefinitely.

The "return" operation completes the picture by letting you lift regular values into the monad, and when return is composed with a regular function a -> b it transforms it into a -> [b]

The more generic form of a monad in Haskell is "m a" which just means a type constructed from "a" using whatever "m" adds to it (nullability, database context, possible error conditions etc.)

As you can see from the type signature, "a" is still there. Monadic bind and return allow composing existing a -> m b and a -> b functions, and this abstraction is made available through the Monad typeclass.

[0] Note that for lists, there's actually another valid definition of bind that behaves differently but also satisfies the required properties. Since you can't have two instances of a typeclass for a single type, it's defined for a wrapper type "ZipList" instead.
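
A tiny sketch of the list case, with an invented neighbours function:

    -- a -> [b]: a function that produces several alternatives per input.
    neighbours :: Int -> [Int]
    neighbours x = [x - 1, x + 1]

    -- (>>=) applies the function to every element and concatenates:
    --   [10, 20] >>= neighbours                 ==  [9,11,19,21]
    --   [10, 20] >>= neighbours >>= neighbours  ==  [8,10,10,12,18,20,20,22]
    -- return lifts a plain value into the monad:
    --   (return 5 :: [Int])                     ==  [5]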


You're welcome. But I'm now not sure if I got my point across correctly on the monad part of the post (but then again, smarter people than me have failed on that front). I'm confused about what you mean with syntactic sugar. I wouldn't call semicolons in JS syntactic sugar, as they are automatically inserted by the compiler. So I guess you could call lack of semicolons syntactic sugar for semicolons. Personally, I would call it idiocy :) (not necessarily the lack of semicolons - I like the appearance of e.g. Python - but their automatic insertion).

In general, I'd even say that Haskell provides very little syntactic sugar, and the stuff that it does provide is both quite useful and rather understandable. Examples being list comprehensions, or even list syntax to begin with, where [1,2,3] is sugar for 1:(2:(3:[])). Yes, the do-notation (which is what you normally use with monads) is also sugar, but it's not the difficult part of understanding monads. The difficult part is understanding the various bind operators (or equivalently, join or Kleisli composition) and how exactly they model different computational effects, and lastly forming the "bigger picture" from the examples.


> So I guess you could call lack of semicolons syntactic sugar for semicolons.

Yes, exactly. Inconsistent sugar. My early experience with Haskell was that a lot of the syntactic sugar was (or seemed) very brittle -- combined with uninformative error messages -- much like missing semicolons can lead to -- even in Java iirc.

(I believe that is fixed now, however. I still remember it -- contributing to a somewhat irrational fear of Haskell :-).


That was a great explanation, thank you.


> (Can't quite put my finger on why).

Lazy evaluation. Much of what you use Lisp macros for is to control when/if arguments get evaluated. Haskell just works this way so you don't need macros for it.
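
A tiny sketch of that point (myIf and myUnless are invented names):

    -- Because arguments are evaluated lazily, plain functions can act as
    -- control structures, where a strict Lisp would need a macro.
    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    myUnless :: Bool -> IO () -> IO ()
    myUnless cond action = myIf cond (return ()) action

    -- myUnless True  (error "boom")   -- fine: the action is never forced or run
    -- myUnless False (putStrLn "ran") -- prints "ran"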


Another way to think about it is that every Haskell function is a macro.


Yes, already having gotten "over the hump" on Haskell, the issue I have with Clojure is that I dig into it (finding it a truly appealing language, and wanting to achieve that sweet spot), but usually walk away thinking it really isn't giving me anything Haskell isn't, and is lacking many of the niceties I've come to rely on. I just have this feeling I would love it if I didn't already know my way around Haskell, but the "benefit-venn-diagram" feels like concentric circles.

If there are some people out there who are well versed in both Haskell + Clojure, I'd love to hear some insight into where Clojure shines, and where you find yourself saying "Clojure is better at this!" in some distinct and meaningful way.


Well, the Clojure repl is way better. Not being pervasively lazy makes it easier to reason about many things. Not strictly boxing IO in the IO monad makes it easier to debug (debug by println is still useful!). Macros are much easier to understand than Template Haskell, and since you don't have real typing you can have heterogeneous collections, or values where the type of one part depends on the value of another part, easily.

Also I think it really is easier to get started with Clojure than with Haskell, even for the (very well thought out!) concurrency primitives, though obviously that doesn't matter if you're already up and running with Haskell.

I use Clojure exclusively at my job, but for some things I really miss something like Haskell's type system, even though I freely admit I don't really understand the type system at its higher levels (specifically related to GADTs, existential types, type families, etc.). Applicative functors would make some things so much nicer.

and you don't have to understand dependent typing to do it, either. I know there are heterogeneous list implementations for Haskell, but I can never understand how they work.


>since you don't have real typing you can have heterogeneous collections

You can have heterogeneous collections in Haskell, you just have to mean to. :) Check out the "Obj" example here [1]

[1] http://www.haskell.org/haskellwiki/Existential_types
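
A minimal sketch of that pattern (Showable is an invented name, same idea as the wiki's Obj):

    {-# LANGUAGE ExistentialQuantification #-}

    -- Heterogeneous in the concrete types, homogeneous in the interface:
    data Showable = forall a. Show a => MkShowable a

    stuff :: [Showable]
    stuff = [MkShowable (1 :: Int), MkShowable "two", MkShowable True]

    printAll :: [Showable] -> IO ()
    printAll = mapM_ (\(MkShowable a) -> print a)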


"I know there are heterogeneous list implementations for Haskell, but I can never understand how they work."


No, there are no heterogeneous list "implementations". There are ways to get the type system to basically use the type-class as the "type" instead of the concrete type (this is how you get heterogeneous lists in languages like C#/Java as well btw). That means the same normal list can hold this heterogeneous data, as could maps, sets, Maybe, anything.


http://hackage.haskell.org/package/HList-0.2.3 sure seems like an "implementation" of heterogeneous lists.


That looks like some kind of research thing, not something anyone is using. And as I say, there is no need to, since you can use Existential types or GADTs to type the list on interface instead of representation. And that will give you heterogeneous containers of any kind, whereas the package you showed would only work for this one kind of container.


Thanks for the answer, exactly what I was looking for.

A few comments:

> Well, the Clojure repl is way better.

I've messed around with it, not enough to know, but wouldn't shock me. `ghci` is pretty decent, but it's not really world-class at anything. iPython is probably the best repl I've ever used for any language.

> Not being pervasively lazy makes it easier to reason about many things.

Agree. This is, indeed, problematic at times.

> Not strictly boxing IO in the IO monad makes it easier to debug (debug by println is still useful!).

Well, Debug.Trace gets you most of what you want there (using unsafePerformIO under the covers), so there is an "escape hatch" for doing printf-style debugging. But it's not quite as smooth as printf in languages that don't sandbox purity so much. On balance, I'd still take the purity (because I do a lot less debugging!), but point granted.
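
For reference, a minimal sketch of that escape hatch:

    import Debug.Trace (trace)

    -- trace prints its first argument (via unsafePerformIO under the hood)
    -- and returns the second, so it can be dropped into pure code.
    factorial :: Integer -> Integer
    factorial 0 = 1
    factorial n = trace ("factorial called with " ++ show n)
                        (n * factorial (n - 1))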

> Macros are much easier to understand than Template Haskell

No experience with macros, but since lisp-like things are so big into macros, sounds plausible.

> and since you don't have real typing you can have heterogeneous collections, or values where the type of one part depends on the value of another part, easily.

I'm not sure I view this as a /virtue/, to be honest. I'd rather have the type checking, and use typeclasses or ADTs to put mixed types into a sequence.

> Also I think it really is easier to get started with Clojure than with Haskell,

Yeah, I think this is undoubtedly true. Haskell veterans often say "what's the big deal?", but the deal is big. And it's not so much because of "math", in the classical sense, as much as it's about very high, very new, abstractions. Many of them without analogies to things you've done before or things in the "real world". I did OCaml for a while before I did Haskell, and Haskell was still a pretty big leap.

Anyway, thanks again for the constructive feedback.


As someone who is trying to learn Haskell, my best guess would be: Clojure's "hump" is like 1/1000000 the size of Haskell's "hump".


Not well versed in both, but I think desktop apps are an area where Clojure shines. Let's say I want to write a cross platform desktop GUI app in Haskell. What do I use? wxHaskell seems to be actively maintained, but a lot of the projects listed on http://www.haskell.org/haskellwiki/Applications_and_librarie... appear to be inactive or dead.

With Clojure I have access to Swing (yeah I know...Swing isn't everyone's favorite) without installing any additional libraries. Then there is seesaw which is a very nice layer on top of Swing. To install seesaw I simply edit a text file in my Clojure project and the project management/automation system installs it for me. Very nice.


A meta-comment: I find it amazing that we are arguing whether Clojure or Haskell is better suited for real-world applications.

Discussions like this one convince me that we have advanced our programming techniques over the last decade or so. After all, we could be arguing about semicolons, the GOTO statement, or other similarly important issues.

It's a nice feeling.


Sure, arguing that a functional language is the solution to our problems hasn't been done since oooo.... LISP? We have come so far.


The arguing here is which functional language is the solution, not arguing that a functional one is!


> Clojure is nice, but the idea that it is somehow built for the real world where Haskell isn't is just patent nonsense that I wish people wouldn't repeat. You do not need to understand complex math to actually use Haskell!

In the abstract, possibly, but not in practice. You can't ignore the effect the general bent of the community has on the language and the code that is idiomatic in the language. People tend to describe Haskell code in mathematical terms, and libraries use operator overloading and the like in mathematical analogies.


All of the terminology used widely in the community can be--and almost always is--defined in terms of Haskell rather than math. You can pretend the math does not exist and treat it as language-specific jargon. You might miss out on the truly cutting-edge libraries until somebody writes a nice tutorial for them, but that would still be true even if the abstractions were not mathematical in nature.

Many of the people in the community do not know much of the math beyond the terminology they learned in Haskell. That's certainly what I gathered talking to people at a local Haskell meetup and it matches my impression of many of the people online (e.g. in the mailing list or StackOverflow).


If it is language-specific jargon, then people's problem is that Haskell has way too much jargon for them to wade through. No matter how you phrase it, to many people, any real Haskell is an impenetrable morass of esoteric abstractions.

Basically, Haskell itself is pretty easy. The language is small, simple and elegant. But the libraries are massively, massively complex and do things in ways that are very often not at all intuitive if you're not familiar with the math behind them. That's what people mean when they say it's too heavily based in math.

This is not to say that you can't use Haskell in the real world, but it has some attributes that make it daunting for many people. I don't think this is a bad thing, either — not everything should be for everyone. Variety is the spice of life.


> But the libraries are massively, massively complex and do things in ways that are very often not at all intuitive if you're not familiar with the math behind them.

I'm just a beginner Haskeller, but this seems like a pretty bold (and false) claim. I've used Parsec for writing a Scheme (while reading the book), Shelly and Shake for build and test automation, and some statistics libraries for, well, statistics. They were all intuitive to use, and there was no "impenetrable morass of esoteric abstractions". I'm not sure what you're on about. Maybe provide examples?


I will write this answer in the flavor of your average Haskell library, as read by somebody who has learned the core language very well but not that library:

  you <*$%^> everyone %%<#>%% experience
Are you really going to tell me you have never looked at code written with a package you haven't learned and initially found it to be unintelligible? If you really haven't, then I would guess that you are simply well-suited to Haskell. Again, I'm not saying Haskell is bad, but that it does have attributes that make it daunting to many people. The top-rated comment on here even talks about how easy Haskell is now that he's "over the hump." Even compared to a relatively esoteric language like Clojure, Haskell's hump is pretty big.

I believe the canonical example of a weird math-based library that makes beginners cry is Control.Arrow. Reading arrow-based code without having fully grasped both the idea of arrows and the API itself (do you think <&&&> could use a few more ampersands?) is an exercise in frustration. Even the humble monoid — simple as it may be — is hard for many people to grasp, because they're such a frustratingly generic idea with horribly unhelpful terminology attached to them.

Want more? Here's one of the top Haskell projects on Github — a library that does lenses, folds, traversals, getters and setters: https://github.com/ekmett/lens

With that library, you can write things like

  _1 .~ "hello" $ ((),"world")


You mean that the example would be more readable if it used proper function names instead of operators? You would still need to "learn the library", i.e. look-up what the functions do.


I mean that as a whole it's daunting and feels "unfriendly" to a lot of people. The heavy use of operators is part of it, but it's more than that. Haskell gets really abstract, which is cool, but it's hard to think about. Like I said, even monoids and monads seem hard to people learning Haskell despite being really simple. And look at the FAQ on that Lens library — it's an abstraction on top of things that are already pretty abstract in Haskell to offer "a vocabulary for composing them and working with their compositions along with specializations of these ideas to monomorphic 'containers'". That's cool and I really don't mean to criticize it, but it's undeniably esoteric and abstract.


Function names give you at least a vague idea of what the function does, but operators don't. I think Haskell's operators can make code more concise and elegant to someone who's familiar with them, but it does make it harder for a beginner to understand what's going on.


I'm pretty sure all the railing hard on the symbols above the number keys must be symptomatic of some sort of verbal learning disability in Haskell programmers...


I don't think that's fair. It allows for some really concise code that actually is quite readable once you're conversant. For example, I do not think Parsec would be better off if "<|>" were renamed to "parsecOR" or something like that. It's just, like I said, obscure and daunting.


I do think Parsec would be better off if "<|>" were something like "parsecOR". Good function names are good.


The idea is that Parsec represents a grammar, similar to BNF with <|> giving alternative rules. So something like:

    atom =  many digit
        <|> identifier
        <|> stringLiteral
        <|> between (char '(') (char ')') expression

I can't imagine any way this would look better with a name instead of the <|> operator. You could write it something like:

    atom = many digit
        `or` identifier
        `or` stringLiteral
        `or` between (char '(') (char ')') expression

but that's no clearer and a bit harder to follow. With a much longer identifier than "or", it would be completely unreadable. If you had to make "or" prefix (the `` let you use a normal function in infix position), it would be even less readable. Combine the two (a longer, prefix identifier) and you have an even bigger mess!

The operators let you see the structure of an expression at a glance. You don't have to read the whole thing to figure out what's going on. Moreover, the <|> symbol is pretty intuitive--it's clearly based on | which usually represents some sort of alternation.


I find the second one much clearer than the first one, because there is no ambiguity to me whether '<|>' means the same thing as '|'. It's also less ugly.


> chc: I do not think Parsec would be better off if "<|>" were renamed to "parsecOR"

> sanderjd: I do think Parsec would be better off if "<|>" were something like "parsecOR"

Do you think perhaps some people think about functions visually while others think about them auditorily?

An IDE could easily render function names however the human reader wants. The option whether to render function names as names or symbols would be customizable, just like the display colors for various element types are.

Within each option, there's further choices. For names, there could be a choice between various natural languages. For symbols, it could be restricted to ASCII e.g. <|>, or full Unicode, e.g. | enclosed by a non-spacing diamond.


For what it's worth, Emacs already does that last thing. As a contrived example borrowed from a post I wrote a while back[1]:

    \ a b c -> a >= b && b <= c || a /= c

gets rendered as:

    λ a b c → a ≥ b ∧ b ≤ c ∨ a ≢ c

However, this is a bit of a pain to do in general. It works well for common functions and syntax elements, but has to be built into the editor. Doing it more generally would require the author of the initial function to come up with multiple different names for it, which sounds unlikely.

[1]: https://news.ycombinator.com/item?id=4742616


Your examples are terser, which many programmers would prefer.

> Doing it more generally would require the author of the initial function to come up with multiple different names for it, which sounds unlikely.

Unlikely for now, but could become more likely in the future.


If you're really interested in this, I'd encourage you to take some actual Parsec code and make this change, then compare the two for readability. My guess is that while it's conceptually nice, you'll find in practice it hampers readability more than it helps.


Thanks for the constructive suggestion - I don't have much invested in this particular little debate so I'm not sure I'll take the time to do that, but it's definitely a good idea.

I will say that in my opinion, the alternative in the reply above this (which I can't reply to) looks fairly nice, is clearer, and not even a little bit harder to follow. But I'll allow that as a very casual Haskell programmer, my opinions about Haskell are probably not particularly valid.


>you <*$%^> everyone %%<#>%% experience

Haskellers like to use symbols for two-argument functions because of precedence rules. You learn them for the modules you're using just as you would have had to learn the function names. But usually seeing how the symbol is named combined with its type signature is enough.

As far as lenses, I personally don't like them so far. But not because of the (admittedly, questionably named) functions but rather that doing anything in lenses seems to require template magic.


Given a well-designed library that does something concrete related to a task you understand, I find that I generally don't have to learn the function names just to read code that uses it. Like, if I see "HTTP.post(data, headers)" or "record.attach(file)", I have a pretty good idea what it's doing even if I don't know those particular libraries.


I think you are correct here. In my view, advanced languages facilitate the development of complex abstractions in the libraries. I find this to be the case in both Haskell and Scala. But I also don't think we will ever return to the innocent days of yore. Programming is growing up after six decades of trying to figure out what 'mainstream' ought to be. I suspect that the whole business will bifurcate into easy-to-use languages and increasingly complex libraries atop more sophisticated languages to meet the challenges of scale and speed.


This just isn't true. Everything in Haskell is about the types and that's what you have to come to grips with if you want to use the language. Have you looked at projects like Yesod or Snap? Not a lot of math there but a hell of a lot about types.


Even if Haskell could somehow be proven to be "the better language" (whatever that means), Clojure is still extremely elegant, simple and beautiful, and it has one huge advantage: it runs on the JVM, so it automatically harnesses the vast JVM ecosystem, as well as all of its (unmatched, IMO) monitoring, profiling and runtime-instrumentation tools (+ dynamic class-loading and hot code replacement).


Indeed, I think when most people say "suitable for real world projects" they mean "I can get away with using this at work because it's just a jar".


Tried Frege?

https://github.com/Frege/frege

"Frege is a non-strict, pure functional programming language in the spirit of Haskell ... Frege programs are compiled to Java and run in a JVM. Existing Java Classes and Methods can be used seamlessly from Frege."


" in the spirit of Haskell " means it isn't Haskell.


True, but it's in the same camp, where "elegant" and "I have to write parentheses all the time" are two different concepts, and, as was the point, it can use the JVM libraries (in a type-safe manner, btw).


I think you can use both. Clojure is more dynamic in nature, while Scala/Haskell are good compiled, statically typed complements.

For large projects, I would go with Scala/Haskell, and for front-end systems I would use Clojure.

Why do it this way? With Clojure, you can easily modify data or small pieces of logic. Simply edit the script without recompiling. With the Scala/Haskell API or library, you probably need something that changes less frequently. That backend system may act as a core piece of your library. ...

And if you don't like that, you can do Python and Java/C#, which can give you the same effect.


I think Haskell is great but it's weak in libraries.

This is where Clojure has a slight advantage because you can tap into the Java ecosystem for stuff that isn't currently in the Clojure world.

Database libraries are a good example of this: database support in Haskell is still pretty flaky and cumbersome to set up in comparison to JDBC.


+1. People get themselves worked up about words like "monads" and "algebraic data types," but using those things is not difficult, and the amount of guarantees they give you about your code is so far beyond anything else out there that IMO it's not worth doing functional programming unless you have a proper type system together with monads.


I'm curious what advantages you think Haskell has over ML/Ocaml. I'm pretty familiar with OCaml and like it a lot, and Haskell seems to be very similar, but I haven't used it for anything substantive. What benefits, if any, does Haskell have over Ocaml, and is it worth learning if one already knows Ocaml? I know that Haskell has lazy evaluation while Ocaml doesn't, but you can simulate lazy evaluation in Ocaml.


I actually also really love OCaml. It has some truly brilliant features which Haskell lacks like polymorphic variants (and structural sub-typing in general) and a top-notch module system. (Much unlike Haskell's "worst-in-class" module system.)

Haskell does have a bunch of advantages though. A pretty petty one is syntax: Haskell's is simpler, prettier, more consistent and yet more flexible. They're very similar, of course, but I think Haskell gets all the little details right where OCaml often doesn't.

The single biggest advantage Haskell has--more than enough to offset the module system, I think--is typeclasses. Typeclasses subsume many of the uses of modules, but can be entirely inferred. This moves many libraries from being too awkward, inconvenient and ugly to use to being a breeze. A great example is QuickCheck: it's much more pleasant to write property-based tests in Haskell because the type system can infer how to generate inputs from the types. Being able to dispatch based on return type is also very useful for basic abstractions like monads. Beyond this, you can also do neat things like recursive typeclass instances and multi-parameter typeclasses.
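
For example, a property like the one below needs no generator code at all; QuickCheck works out how to produce random [Int] inputs from the type alone (a minimal sketch, assuming the QuickCheck package):

    import Test.QuickCheck (quickCheck)

    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice
    -- +++ OK, passed 100 tests.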

Honestly, if I was forced to pick a single Haskell feature as the most important, I would probably choose typeclasses. They're extremely simple and a foundation for so much of Haskell's libraries and features. Most importantly, typeclasses are probably the most obvious way the type system goes from just preventing bugs to actually helping you write code and making the language more expressive, in a way that a dynamically typed language fundamentally cannot replicate.

Laziness in Haskell is not really a big deal. It can make much of your code simpler, but it also makes dealing with some performance issues a bit trickier. It also tends to make all your code more modular; take a look at "Why Functional Programming Matters"[1]. I am personally a fan of having laziness by default, but I think it's ultimately a matter of philosophy.

And philosophy, coincidentally, is another reason to learn Haskell: it takes the underlying ideas of functional programming further than OCaml. In Haskell, everything is functional and even the non-functional bits are defined in terms of functional programming. Many "functional" languages are actually imperative languages which support some functional programming features. OCaml is one of the few that goes beyond this, but there is really a qualitative difference when you go all the way.

Once you know that everything is functional by default, you can move code around with impunity. You no longer have to worry about the order code gets evaluated in or even if it gets evaluated at all. You also worry far less about proper tail calls, which mostly compensates for having to deal with some laziness issues.

Haskell also embraces the philosophy in another way: you tend to operate on functions like data even more than in OCaml. The immediately obvious example is that Haskell has a function composition operator in the Prelude. There is no reason for OCaml to not have this, but--as far as I know--it doesn't. It's an illustration of the philosophical differences between the two languages. On the same note, Haskell also tends to use a whole bunch of other higher-order abstractions like functors and applicatives.

So I think the most compelling difference is ultimately fairly abstract: Haskell just has a different philosophy than OCaml. This ends up reflected in the libraries, language design and common idioms and thus in your code. I think that by itself is a very good reason to learn Haskell even if you know OCaml; and since you do, picking Haskell up will be relatively easy!


Thanks for the awesome reply! I think you've just convinced me to check out Haskell more thoroughly at the next chance I get.

Edit: It's great that you mention OCaml's module system too; my professor probably mentions the benefits of it in just about every lecture.


That was a great reply. I would personally like to hear more about the advantages OCaml has. OCaml was my first foray into functional programming but I was really put off by +., print_int and friends (which you obviously don't need in Haskell). I know about functors but what else is so bad about Haskell's module system? The biggest weakness of it I see is the inability to handle recursive imports.


I used to love Ocaml, but it doesn't have the manpower/momentum/excitement that Haskell has (had?)

(IIUC because Ocaml licensing is not very welcoming to open source efforts).

I'm very good at picking losers... (Ocaml, D, Factor..)


Haskell has some sharp edges which make me hesitant to use it. I once had a Haskell program that had a serious space leak that would cause it to exhaust RAM and crash. The culprit? Adding "+1" to a counter repeatedly and then never evaluating the counter. It took a very smart programmer (who happens to have a PhD in math) six hours of debugging to find it.
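
For what it's worth, that kind of leak usually boils down to something like this minimal sketch (names invented):

    import Data.List (foldl')

    -- The lazy fold builds a chain of unevaluated (+1) thunks that is only
    -- collapsed when the counter is finally demanded; the strict fold
    -- evaluates the accumulator at every step.
    leakyCount :: [a] -> Int
    leakyCount  = foldl  (\n _ -> n + 1) 0

    strictCount :: [a] -> Int
    strictCount = foldl' (\n _ -> n + 1) 0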

Also I've never seen Haskell's FFI work - it's just too much of a pain. Clojure's Java integration works right out of the box.


> Also I've never seen Haskell's FFI work - it's just too much of a pain.

For what it's worth, I've found Haskell's FFI very pleasant for interfacing with C (and the reverse looks just the same, but I haven't tried). You then interface with any other language through C, since most can FFI through C anyway. I think it's a very reasonable solution!
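
For instance, binding C's sin from libm is essentially a one-liner (a minimal sketch):

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign.C.Types (CDouble)

    foreign import ccall "math.h sin" c_sin :: CDouble -> CDouble

    fastSin :: Double -> Double
    fastSin = realToFrac . c_sin . realToFrac

    main :: IO ()
    main = print (fastSin 0.5)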


> six hours of debugging

I guess this just shows that Math PhDs don't know how to use the fine profiler: http://stackoverflow.com/questions/15046547/stack-overflow-i...

Which would have saved about 5hrs 50mins...


> I'm merely going to give you my highest-level take-away from trying Clojure:

Then you go on to describe what you find good about Haskell? That sounded weird.


No, I was trying to explain what the biggest difference I found between them when I tried Clojure. I agree that wasn't worded very clearly.


"And the custom types are not actually complex, unlike what Clojure leads you to believe with its preference to using the basic built-in types"

Custom data types are not always complex, but efficient and useful data types often are. Clojure's data structures have very good performance for immutable data structures. Clojure's maps, for instance, have what effectively amounts to a O(1) lookup, while Haskell's Data.Map is a size balanced binary tree with only O(log n) performance.


> effectively amounts to a O(1) lookup

So O(log n) then. We obey gravity around these parts.

I presume you are referring to HAMT-like structures, such as found in http://hackage.haskell.org/packages/archive/unordered-contai... which are by no means unique to Clojure.

Besides simply having the data type, you still need a good allocator and GC optimized for immutable data, which is where GHC stands alone - http://benchmarksgame.alioth.debian.org/u32q/benchmark.php?t...


> So O(log n) then. We obey gravity around these parts.

Which is of no practical difference to O(1) if log n is always very small.

> I presume you are referring to HAMT-like structures... which are by no means unique to Clojure.

Of course they're not. My point was that it is not trivial to derive an efficient and useful data structure like a HAMT from Haskell's type system. The data structures in a library like unordered-containers are in practice just as opaque to the developer as the core data structures in Clojure.


> as opaque to the developer

http://hackage.haskell.org/packages/archive/unordered-contai...

This highly optimized data type is defined in 6 lines, easily accessible from the docs.


No it isn't, at least not in any meaningful way. The type definition itself may be 6 lines, but that doesn't adequately describe the data structure, otherwise there'd be no need for the rest of the library.


Actually most Clojure maps are Hashtables, which do have O(1) amortized lookup and insert. Granted, the 'amortized' is important.


>Clojure is nice, but the idea that it is somehow built for the real world where Haskell isn't is just patent nonsense that I wish people wouldn't repeat.

Really? Because SPJ, for one, seems to agree with it.


Coldtea, could you provide a link where SPJ says this? I googled it but couldn't find anything.


I think he may be talking about this: https://www.youtube.com/watch?v=iSmkqocn0oQ


Yes. Plus the cheeky "avoid success at all costs" remarks etc. Of course he doesn't mean it's not usable (or used already) in the real world at all.

Just that it's not in the perfect practical form that a real world language would have.


This is just propaganda - repetitive reciting of slogans made of long words.)

It is, of course, difficult to argue with zealots, but I will try nevertheless.)

The cost of what is called an "advanced type system" is the inability to put elements of different types in the same list or tuple or whatever. It is not just a feature, it is a limitation. In some cases, when you are, for example, dealing only with numbers, say, positive integers, it is OK, but what then is the real advantage of such type checking?

In the other case, the concept of a pointer, which all those "packers" are trying to throw away, is very mind-natural. When we have a box we can put anything in it, as long as it fits. Imagine what a disaster it would be if, when moving from one flat to another, you had to pack stuff only into special kinds of boxes. This one is only for a certain kind of shoes, this is only for spoons, that for forks. So, with a pointer we could "pick up" everything, and then decide what it is and where it goes.

Another mind-friendly concept is using symbols and synonyms to refer to things - symbolic pointers. It is how our minds work. Those who are able to think in more than one language know that we could refer to the same thing using different words, but the "inner representation" is one and the same.

These two simple ideas - using pointers (references) and having data ("representation") define itself (type-tagging is another great idea - it is labeling) - give you a very natural way of programming. It is a mix of so-called "data-directed" and "declarative" and, as long as you describe transformation rules instead of imperative step-by-step processes, "functional" styles.

Of course, the code will look a certain way - there will be lots of case analysis, like unpacking things from a big box - oh, this is a book, it goes on a shelf; these are computer speakers, they go on the table, etc. But that's OK, it is natural.

The claim that "packing" is the best strategy is, of course, nonsense. Trying to mimic the natural processes in our minds (as lousily as we manage to do it) has, in my opinion, some sense.

There are some good ideas behind each language and some not so good. Symbolic computation, pattern matching, data description (what an s-expression, or YAML, is) are good ones. Static typing, describing "properties" instead of "behavior" - not so.)

Also it is good to remember that we're programming computers, not VMs. There is something called "machine representation" which is, well, just bits. That doesn't mean we should swing to the other extreme and program in assembly, but it is advisable to stay close to the hardware, especially when it is not that difficult.

Everything is built from pointers, whether you like it or not.) The idea of throwing them away is insane; the idea (a discipline) of not doing math on them is much better. The idea of avoiding over-writing is a great one - it is good even for paper and pencil, where everything becomes a mess very quickly - but avoiding all mutation is, of course, madness.

So, finding the balance is the difficult task, and Haskell certainly hasn't found it. Classic Lisps came close, but it requires some skill to appreciate the beauty.) So, the most popular languages are the ugliest ones.


> The cost of what is called an "advanced type system" is the inability to put elements of different types in the same list or tuple or whatever. It is not just a feature, it is a limitation. In some cases, when you are, for example, dealing only with numbers, say, positive integers, it is OK, but what then is the real advantage of such type checking?

I can't say I managed to completely understand the argument you're making here, but doing mostly Java/Python for work, I don't remember the last time I had to write a heterogeneous list. At worst, you can always go for existential types.


A list that has a special marker at its end is a general concept. The-empty-list in Lisp, EOF in C, etc. With this you have streams and ports and all the nice things.)


Either today's tea was not strong enough, or you're answering the wrong person.


Yeah, I should change a tea-shop.)

In my opinion, the single-linked-list data structure as the foundation of Lisp was not selected by accident. It is also the most basic and natural representation of many natural concepts - a chain, a list. You could ask for the next element, and find out that there is no more. Simple and natural. Because of its simplicity the code is also simple.

Such kinds of lists should be heterogeneous, because when all the elements are of the same type, it is more natural to represent them as an array - an ordered sequence. As far as I know, Python's lists actually are dynamic arrays.

Sequences of elements of the same type (same storage size and encoding) with a marker at the end could also be viewed as homogeneous lists. C strings processed as lists of characters are the canonical example.

Now consider a UTF-8 encoding. It is a variable-length encoding. A UTF-8 string is not an array, and because you cannot tell the boundaries between runes while reading, it is not a list either. But, nevertheless, it could be considered and processed as a stream, until the EOF marker is reached. This is why it was invented at Bell Labs - to keep things as simple as possible.

Now, you see, the concept of a homogeneous list from math is not enough for CS, and sometimes it is much better to have it fuzzy. What is a list is a matter of a point of view.

I think I will keep my dealer.)


> In my opinion, the single-linked-list data structure as the foundation of Lisp was not selected by accident. It is also the most basic and natural representation of many natural concepts - a chain, a list. You could ask for the next element, and find out that there is no more. Simple and natural. Because of its simplicity the code is also simple.

Yes, that's a recursive data structure well-suited to functional languages, same as Haskell lists.

> Such kinds of lists should be heterogeneous, because when all the elements are of the same type, it is more natural to represent them as an array - an ordered sequence.

From a memory point of view, maybe, but that's really an implementation detail. If you take, eg, Perl arrays, which you can access by index, push, shift, unshift, and pop, the actual implementation is invisible to the programmer, just as it should be in this sort of language. Java will happily store cats and dogs in an array of Object, for instance.

> Now consider a UTF-8 encoding. It is a variable-length encoding. A UTF-8 string is not an array, and because you cannot tell the boundaries between runes while reading, it is not a list either. But, nevertheless, it could be considered and processed as a stream, until the EOF marker is reached. This is why it was invented at Bell Labs - to keep things as simple as possible.

You could very well store it as a list of bytes. This wouldn't be terribly efficient, but it's perfectly doable. Whether you process it as a stream or you have it stored in whatever array/collection is orthogonal to the fact that your language of choice supports heterogeneous lists. You can do stream processing in Haskell too (with, eg, pipes). You also have access to other data structures which are not lists, but which do expect to be homogeneous.


Yes, it is indeed a stream of bytes, with a zero-byte as an EOF marker. That's why UTF-8 is good-enough.

As for lists, as long as your next element is not always in the next chunk of memory, you need a pointer. A chain of pointers is a single-linked list. This is the core of a Lisp and it is not an accident. Together with type-tagging, you could have your lists heterogeneous, as simple as that.)

This is a part of the beauty and elegance of a Lisp, in my opinion - few selected ideas put together.


This is very true:

"Leiningen and Cake, joined forces to become an all-powerful build tool. Then Leiningen reached version 2. (And let me tell you, Leiningen 2 alone makes Clojure worth using.)"

Other build tools, and package managers, such as "bundler" in the Ruby world, seem pretty weak compared to Leiningen. This is a very powerful tool.

The tooling and the eco-system are reaching a very powerful level. For now, I use Emacs as my editor, but I am waiting for LightTable ( http://www.chris-granger.com/2012/11/05/meet-the-new-light-t... ) to get just a little further, and then I intend to switch to it.

This whole article is good, but this is the part that gets to the heart of the matter:

"Rubyists know that their language effectively got rid of for loops. In the same way, Clojure gets rid of imperative iteration in favor of declaration. Your thoughts shift away from place-oriented constructs like memory and gravitate to data structures and functions like map, reduce and filter. Your “class hierarchy” turns out to be a type system that happens to also lock away well-meaning functions into dark dungeons (more on that in another article), and getting away from that is freeing."

That might be the best summary of the strengths of Clojure: it helps you think about data structures and transformation, rather than thinking about the ceremonial and imperative code that your language needs to hear.


I'm a big fan of Clojure, but I don't think I'd ever write an article like this. I guess I must not be an evangelist at heart...I will tell people that I like something, but I never tell them that they must use it.

You know yourself far better than I know you, and so why would I presume to tell you how to live your life?

Functional programming is sometimes great. Most forms of programming are sometimes great. But I don't think there is one form that is great all the time. Maybe there is, and maybe if I come across it I'll be smart enough to recognize it... but in the mean time, I'll just try to use what makes sense to me.

And sometimes, functional programming just doesn't make sense to me. I still can't quite get my head around monads. They have just one or two too many levels of abstraction for me to hold in my head. I think I'm almost there, and was trying very hard to grasp them, but then in one of the videos I was watching, the guy said this: "Monads are a solution to a problem you will never have." He said it in jest, partly because the language at hand was Javascript, but it really stuck out to me.

Clojure has what I would call "sensible" state containment via STM. And sometimes just plain storing some state is the easiest and most straight forward way to go.

I love working in Clojure because it makes it so easy to break down problems into bite sized functions. That, and the concision of the syntax suits me. I'm trying to accomplish the same thing in Java by having some classes that I treat as a namespace and load them up with static functions in that namespace. I'm sure a lot of people would spontaneously barf on their screen if they saw my code though.


Meh. I personally find articles like this useful: that's how I was turned onto all the technologies I really love right now like Linux, Emacs and Haskell. Most importantly, I was convinced to work through the learning curves for all of these--which were all easier than reputed but still took some effort--which turned out to be more than worth it.

Basically, the reader can decide for themselves. Also, each reader reads more than one blog post. So having a bunch of extreme opinions to compare (e.g. a strong case for a bunch of different languages) is more useful than a whole bunch of hedged blog posts that all repeat the same refrain: "well, all languages are basically equal and you should use what seems best".

In fact, blog posts like that are a big waste of time. (I'm looking at you, prog21.) I would much rather hear a bunch of different, reasoned opinions--especially if they contradict each other--than hearing the same boring, condescending tripe about choosing "the right tool for the right job" over and over.

Also, I think the idea that all--or even most--programming languages are somehow equal is patently absurd. Similarly, I think the oft-reused tool analogy is deeply flawed. But that's neither here nor there and enough material for a blog post of its own.

Of course, that's something of a false dichotomy, although it does come up in practice. But I think posts advocating a technology are also good by themselves.

The choice of technology may be subtle, but this post only needs to present one option, which it does admirably.


    > I will tell people that I like something, but I never tell them 
    > that they must use it.
I think it's just a "dude, you've got to check this out" sorta thing. Not really a command but that special itch to just share your opinion with someone.

I gravitate towards the tools/technologies/paradigms that compel this sort of enthusiasm every time.

* Martin Odersky's (and the general community's) enthusiasm led me to pursue Scala and convinced me to take a serious stab at functional programming.

* Rich Hickey (etc.) to Clojure.

* Sandi Metz (etc.) to object-oriented design.

* DHH (etc.) to Ruby and Rails.

Vim, Coffeescript, Alfred, Postgres, tmux, Crystal Reports, Dropbox, most of my toolchain, where I went out to eat last night -- all cajoled by the infectious enthusiasm of others inspirited by those things.


> I'm sure a lot of people would spontaneously barf on their screen if they saw my code though.

This is exactly the problem I had. My canonical "getting to know your new language" project is always writing a simple chess engine. After writing a few of the high-level constructs, my next step was to try to browse some other .clj libraries, because I was pretty certain that a lot of the tedious tasks involved in writing a chess engine had already been solved.

After reading their code, I very quickly gave up.


I tried switching to Clojure, and while I love the purity, simplicity, and logic of it, two things always bothered me:

1) Immutable data structures are always going to be slower than their mutable cousins.

2) "Clojure code is beautiful" should be changed to "Your OWN Clojure code is beautiful". When I finished writing a compact piece of logic or data transformation, I was often struck with the beauty of it. When I tried to read someone ELSE's Clojure code, however, I couldn't even begin to make sense of it.

I am ever open to being proved wrong, however. Any Clojure programmers reading this, please reply with some code that is readable, elegant, and performant, to provide a counterpoint to my pessimism.


> Immutable data structures are always going to be slower than their mutable cousins.

Not always. It depends on usage. Mutable data structures can actually entail more work if the data needs to be accessed from more than one place. Immutable data structures can be shared and reused much more freely. (For example, adding an item to an array you got from somebody else means either the caller or the callee needs to have copied the whole array, while adding an item to an immutable list you got from somebody else means creating a single cons cell.)
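
To make that concrete, a tiny sketch (names made up): conj-ing onto a persistent list you were handed builds one new cell and leaves the original untouched and shareable.

    (def base '(2 3 4))

    (def mine   (conj base 1))   ;; => (1 2 3 4), reuses base's cells
    (def theirs (conj base 0))   ;; => (0 2 3 4), also reuses base's cells

    base ;; => (2 3 4), unchanged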

At any rate, even if a destructive solution would be faster, the functional solution will probably still be faster in Clojure than the destructive solution would be in Ruby. And if not, you can still do something destructive — it's just not the default.

> "Clojure code is beautiful" should be changed to "Your OWN Clojure code is beautiful". When I finished writing a compact piece of logic or data transformation, I was often struck with the beauty of it. When I tried to read someone ELSE's Clojure code, however, I couldn't even begin to make sense of it.

I believe this is largely a matter of experience (not completely — some code just is hard to read!). Most people get that feeling when they're working with a relatively unfamiliar language. I find that the more familiar I become with Clojure, the better all other Clojure programmers seem to become.


> I find that the more familiar I become with Clojure, the better all other Clojure programmers seem to become.

That could be. However, I believe it's more to do with the fact that idiomatic Clojure encourages continual definition of nested layers of abstraction. Which makes the code beautiful from a certain viewpoint. However, it's also very similar to essentially writing a new language and adding features to it. If you buy this comparison, then reading your own Clojure code is like reading code written in a language you wrote, which is tailored to your own ideas about aesthetics and interfaces.

Which is great. However, that would then mean that in order to read anyone else's Clojure code, you would have to learn a new programming language every single time. I would buy that this gets easier the more you do it, because my second programming language was definitely easier to learn than my first.


That's true to some degree, but for well-written code, I don't think it's necessarily as severe as that description makes it sound. Any Ruby on Rails codebase is also chock-full of pseudo-DSLs (controllers, scopes, formats, etc.), but you don't hear the same complaints about Rails code being unreadable very often. It's possible to go overboard and turn your codebase into the Clojure equivalent of Negaposi, but from my (somewhat limited) experience, it's not exactly endemic.


> That's true to some degree, but for well-written code...

This is sounding painfully close to No True Scotsman argument. Also, for the number of times I've heard Clojure programmers say this, you'd think I would have at least ONCE been shown some of the programmer's actual code, in an effort to provide a concrete example of how elegant Clojure is.


> This is sounding painfully close to No True Scotsman argument.

Well, excuse me for trying not to make over-broad claims. Would you like me to go find some Negaposi to prove that other people's Ruby code is unreadable? The thing about "No True Scotsman" that makes it bad is that it makes a categorical claim, but it defines the category in an arbitrary way. I'm not really making a categorical claim — I'm just saying that, much like any other language, Clojure authors can write readable or unreadable code. Clojure gives you slightly more powerful tools, but overly "clever" use of the tools available in Ruby can produce unreadable code just as well.

> Also, for the number of times I've heard Clojure programmers say this, you'd think I would have at least ONCE been shown some of the programmer's actual code, in an effort to provide a concrete example of how elegant Clojure is.

Take a look at any of the more popular Clojure projects (https://github.com/languages/clojure is a good place to start). I find that they tend to be fairly readable.


As someone who has breathed Clojure for the past three years, I very rarely come across code that is truly difficult to understand. It does tend to be more information-dense than other languages (especially Java) but that just means I only have to understand a couple hundred lines split into 3 files instead of thousands of lines across dozens of files.

That said, there is definitely some nasty Clojure code out there, but I'm not sure that's avoidable in any language. Fortunately, the conventional wisdom to only use macros when necessary has started to catch on and that's made the situation a good deal better.


> As someone who has breathed Clojure for the past three years, I very rarely come across code that is truly difficult to understand.

I'm not really arguing that Clojure code is "truly" difficult to understand, if that is taken to mean reasoning about its performance and effects. Rather, I'm talking about the fact that idiomatic Clojure code encourages writing functions upon functions upon functions, which continually builds up layers of abstraction that inherently make code more and more difficult to read. Example:

console.log("Hiya.");

console.log("How's it ");

console.log("going?");

vs.

printFirstString();

printSecondString();

printThirdString();

function printFirstString() { ...etc }


I don't know anyone who writes code like that. Unnecessary wrapping of functions is an antipattern.

Idomatic Clojure for what you just wrote is:

(println "Hiya.")

(println "How's it ")

(println "going?")

If a Clojureist wanted to get fancy and build an abstraction (as they probably would) they'd probably write a debug function instead of using 'println' directly:

(log/debug "Hiya.")

(log/debug "How's it ")

Which is, granted, an abstraction, but isn't any harder to read.


Or define a function called `printlns`. Then call...

    (printlns "Hiya." "How's it " "going?")


Actually the default `println` already does that. ;)


`println` inserts a space between its arguments. I was showing we could define one that inserts a newline instead.
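
For example, a minimal sketch of such a helper:

    (defn printlns
      "Prints each argument on its own line."
      [& args]
      (doseq [s args]
        (println s)))

    (printlns "Hiya." "How's it " "going?")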


I could say the same thing about Java. Clojure/Lisp code just tends to have functions that do mostly what their names indicate. Java code tends to be broken up into billions of classes, and understanding an algorithm tends to involve trying to collect together all the pieces of it that are spread out as methods in a bunch of classes.

Take something like code generation from an AST. In Clojure, you might do it in a single file with multimethods. In Java, the code generation algorithm would be spread out over dozens of different AST classes, each with a visitor method.
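
To sketch what that single-file multimethod style looks like (the node shapes here are invented for illustration, not taken from any real compiler):

    ;; dispatch code generation on each AST node's :type key
    (defmulti emit :type)

    (defmethod emit :lit [node]
      (str (:value node)))

    (defmethod emit :add [node]
      (str "(" (emit (:left node)) " + " (emit (:right node)) ")"))

    (emit {:type :add
           :left  {:type :lit :value 1}
           :right {:type :lit :value 2}})
    ;; => "(1 + 2)"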


Clojure has some very clever immutable data structures based on Phil Bagwell's work [1] that help mitigate the slow down.

I think you are totally right with regard to reading other people's clojure. Sometimes it's easy and pleasant, sometimes inscrutable.

I think the issue at hand is that it is so easy to go the DSL route in clojure. I call that an issue because when you invent a DSL to solve your problem, you've essentially created a new language that only you understand.

So now, if I'm reading your code, I have to understand raw clojure, and then I have to try to figure out your DSL.

[1] http://lampwww.epfl.ch/papers/idealhashtrees.pdf


Re-posting at a higher level of the tree since the branch I posted this in earlier might not be visible (sorry about the breach of etiquette)

Here's a microbenchmark I just threw together in a REPL, since you ask: The operation is to create a map data structure, then, ten million times, add/assoc a key value. The key is the .toString() of a random integer between 1 and 100,000. The value is the integer.

Therefore, the resulting map will end up with a size of 100k elements, and be modified 10m times.

Except for the map implementation used, the code is identical.

I ran each test six times, discarded the first three (to allow the JVM to warm up), and took the average time of the other three.

The results on my 2010 MBP:

java.util.TreeMap: 8573ms

java.util.HashMap: 3243ms

Clojure immutable hashmap: 7909ms

Clojure immutable treemap: 21248ms

Clojure hashmap (using transients): 5113ms
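
For concreteness, the code was roughly of this shape (a reconstruction from the description above, not the exact code that produced these numbers):

    ;; persistent-map variant: assoc 10M random string keys
    (defn bench-persistent []
      (time
        (loop [m {} i 0]
          (if (< i 10000000)
            (let [n (inc (rand-int 100000))]
              (recur (assoc m (str n) n) (inc i)))
            (count m)))))

    ;; the transient variant keeps the same structure
    (defn bench-transient []
      (time
        (loop [m (transient {}) i 0]
          (if (< i 10000000)
            (let [n (inc (rand-int 100000))]
              (recur (assoc! m (str n) n) (inc i)))
            (count (persistent! m))))))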


> Clojure has some very clever immutable data structures based on Phil Bagwell's work [1] that help mitigate the slow down.

This is true, and I've read some of the papers describing said structures and was blown away by the cleverness. However (and I don't have a source here, so please forgive me if I'm just totally wrong) I have watched many of Rich Hickey's hour+ long talks, and in one of them, I distinctly remember him saying that his vectors were "Within an order of magnitude of mutable vectors performance-wise, which is good enough for me".

I was... put off by that.

EDIT: Now that I think about it, it might have been Maps that he was talking about. I are not good at remember.


So that was a pretty early talk. I don't have benchmark numbers in hand, but I'm reasonably sure that the performance gap isn't that wide any more.

Also, there's a new(er) feature called transients that can allow you to use mutable data structures where needed for performance without changing the structure of your code: (http://clojure.org/transients).

And you can drop down to mutable Java data structures, if you need to: they are fully supported by Clojure, just not the default.

In general, Clojure's approach is to be thread safe and purely functional by default, but it's not too difficult at all to drop out of that for particular parts of your code you've identified as bottlenecks.


Honestly, one of the things that bothers me the most is how Clojure enthusiasts get hand-wavey when people talk about performance. They will often "prove" its performance characteristics instead of providing solid real-world examples, and will often dismiss other peoples' benchmarks as having something to do with the quality of the Clojure code on which the benchmark is run.


Whoa. I didn't dismiss any benchmarks, and I'm happy to back up any of what I've said with real-life experiences.

For example, I was working on a system a few weeks back that involves loading several-hundred GB CSV files. I wrote it in a functional style, with several map and filter operations over each of the rows. I was able to program the logic viewing the entire CSV file as a single lazy sequence to transform.

After profiling I ended up using Java arrays instead of Clojure vectors as the primary data format for records in the system: given the fact that this app was IO-bound, this resulted in about 20% improvement for that particular use case.

However, because Clojure is capable of treating arrays as sequences polymorphically, the change required very few modifications to my code and I was able to maintain a functional style.
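
To illustrate that polymorphism with a contrived sketch (not the actual pipeline): the same map/filter code runs over rows whether they are Clojure vectors or Java String arrays.

    (defn total-amount [rows]
      (->> rows
           (map #(nth % 1))               ;; second column holds an amount
           (map #(Long/parseLong %))
           (filter pos?)
           (reduce +)))

    (total-amount [["a" "10"] ["b" "-3"] ["c" "7"]])     ;; => 17

    ;; switching the row representation requires no pipeline changes
    (total-amount [(into-array ["a" "10"])
                   (into-array ["b" "-3"])
                   (into-array ["c" "7"])])              ;; => 17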


Also, here's a microbenchmark I just threw together in a REPL, since you ask: The operation is to create a map data structure, then, ten million times, add/assoc a key value. The key is the .toString() of a random integer between 1 and 100,000. The value is the integer. Therefore, the resulting map will end up with a size of 100k elements, and be modified 10m times.

Except for the map implementation used, the code is identical.

I ran each test six times, discarded the first three (to allow the JVM to warm up), and took the average time of the other three.

The results on my 2010 MBP:

java.util.TreeMap: 8573ms

java.util.HashMap: 3243ms

Clojure immutable hashmap: 7909ms

Clojure immutable treemap: 21248ms

Clojure hashmap (using transients): 5113ms

Data!


I know that, in the Java world, you guys probably don't really understand this fact, but 3x slower is not only bad, it's so bad as to be completely unworkable in the real world.

Source: I was a Google engineer for 5 years.


Mutable vectors are about as fast as it gets (sequential writes to contiguous memory), so within an order of magnitude is still pretty fast. And for the default, catch-all data structure, I'm not convinced that being as fast as possible is ideal. The C++ STL takes this approach, going to great lengths to be as fast as C arrays, but I think it suffers a lot for it in terms of usability.

There is a cultural distinction to be aware of here between Lisp programmers and C/C++ programmers. C/C++ programmers tend to be very "layer oriented." They assume everything will be built on top of built-in constructs like arrays, so they try to make it as fast as possible. But Lisp has historically been more of a "bag of stuff." It offers lists as the default "go-to" data structure, which has a lot of positive qualities. Mutable arrays are a separate data type that you can use if you need the speed. I think Clojure inherits some of the same thinking. Raw JVM arrays are always there if you need them, but the defaults are optimized for things other than raw speed.


Clojure's immutable data structures allow you to scale out across many cores without locking semantics as rigorous as a destructively modified version of the same work would need; copy-on-write presents no contention until you want the new value to be the canonical value.

Would I trade a 1000% performance hit on write ops in exchange for the ability to scale out over 10000% as many cores simply? Every day of the week.

Single threaded execution is a computational dead end. If you want to go faster, you have to parallelize, be it on a single system or on a cloud service. Clojure's persistent data structures ease this. That the persistent data structures also have canonical serializations ALSO ease this.


Give me a break, we're talking about vectors here.

I like Clojure, spent some time messing around with it a couple years ago and will one day actually use it for something, probably involving complex configuration where code-as-data really shines along with concurrency/performance.

But if you're talking about working a ho-hum vector with 100-10k entries, a linear scan over a mutable, contiguous array will typically be faster than the most clever multithreaded code you can come up with, and take up less of the CPU while it's working. 10 cores are a Bad Idea for that kind of work.

Amdahl's law tells us we should look at larger units of concurrency in our architecture rather than getting all excited about some auto-parallelized map function. At that point, it starts being important how fast the (single-threaded!) individual tasks run.


Well, no. A linear scan over a large memory array is going to crap all over the CPU caches if you have to do it more than once.

Break into blocks < CPU cache size, perform multiple stages on each block.

Having all that handy control-flow stuff makes it easier to get the block-oriented behavior you need to maximize performance, which in these cases is all about memory bandwidth.


Do immutable data structures really let you scale out so easily? I thought that was something of a myth...


"so easily" is poorly defined, but when you are talking about collection data subject to concurrent modification you have a few options for correctness:

1. Read lock on the data for as long as a thread is using it. This ensures that it is not destructively modified while iterating over it. This is a terrible option if you're using any kind of blocking operation during the lifespan of the read lock. The thread that obtains the lock runs quickly without any penalty once it has the lock, but obtaining it might have taken an extremely long time, especially if some OTHER thread was blocking with a read lock held.

2. Read lock long enough to copy the data, then work with your copy in isolation. You have to incur the cost of the linear copy; this might or might not be less than the cost of performing the work you actually want to do, but if it's close, your run time just went up 2x.

- Brief caveat: Any explicit locking scheme subjects you to risks from deadlocking. This is where complex multithreaded applications develop explicit locking orders from, and what can make large codebases difficult to work in.

3. Don't read lock, hope for the best. This can work better if you have a means to detect concurrent modification and restart. You might even get the correct behavior for 99% of cases.

4. Work with immutable data structures that cannot be destructively modified out from under you. Immutable data is computationally cheaper to read in the face of parallelism than any of the other options; it is more expensive to write to. What do your read and write loads look like? (A minimal sketch of this option follows below.)

- Also please keep in mind that while Clojure provides immutable persistent data structures out of the box and its reader generates them for all the canonical serializations, it does not take away Java's destructively modifiable types
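
A minimal Clojure sketch of option 4 (made-up names): readers never coordinate, and writers just swap in a new immutable value.

    (def state (atom {:count 0 :items []}))

    ;; any number of reader threads can deref without locking; they get
    ;; a consistent, immutable snapshot
    (defn snapshot [] @state)

    ;; writers pay for building a new (structurally shared) map
    (defn add-item! [x]
      (swap! state
             (fn [s]
               (-> s
                   (update-in [:count] inc)
                   (update-in [:items] conj x)))))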


> Would I trade a 1000% performance hit on write ops in exchange for the ability to scale out over 10000% as many cores simply? Every day of the week.

And this is, in my opinion, the best possible argument in favor of using Clojure. Off the top of my head, I remember Rich Hickey saying something about how he developed Clojure due to his irritation at working on a huge project that wrote code to handle thousands upon thousands of interconnected nodes. That makes perfect sense.

However... writing a web app with Clojure, at least to me, doesn't.


> However... writing a web app with Clojure, at least to me, doesn't.

Why not? I'm serious here, if you're serving thousands of clients in a web app, why wouldn't you want parallelism? I mean, sure at small loads you don't need it, but what about scaling up?

Also, I find using compojure for web app development to be an absolute dream. I might need to get out more (my day job is java web apps), but I love the ability to iterate rapidly in the repl on a web app without having to restart my JVM.
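
For anyone curious what that looks like, a minimal sketch (names and port invented): pass the routes as a var, and re-evaluating them takes effect without restarting Jetty.

    (ns example.web
      (:require [compojure.core :refer [defroutes GET]]
                [ring.adapter.jetty :refer [run-jetty]]))

    (defroutes app
      (GET "/" [] "Hello from the REPL!"))

    ;; passing the var #'app means redefining `app` at the REPL is
    ;; picked up on the next request -- no JVM restart needed
    (defonce server
      (run-jetty #'app {:port 3000 :join? false}))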


> I'm serious here, if you're serving thousands of clients in a web app, why wouldn't you want parallelism?

For the same reason you don't do parallel by default for every loop you write in Java. It's overkill. You can successfully argue that multiple service calls across a network need to be parallel, but this has relatively little to do with a desire to serve many clients and a lot more to do with responsiveness. (a noble goal)

> but I love the ability to iterate rapidly in the repl on a web app without having to restart my JVM

HTML and mixed services > (any combination of java technologies you can dream up to serve web apps)


I totally understand. One thing to watch out for though is "premature optimization" as I've heard it called here on HN.

It may very well be that you need that order of magnitude, but if you don't then you might be missing out on something you might have really enjoyed otherwise.


> Immutable data structures are always going to be slower than their mutable cousins.

True in general, but only if you just care about single core performance. A data structure that allows your algorithm to scale gracefully to more cores is actually more performant than a data structure that's faster for a single thread, but is extremely hard to scale in concurrent settings. Clojure sacrifices single-thread performance for multithreaded scaling, and it's the right tradeoff given current hardware trends (for example, consider the design of Clojure's reducers, and how it focuses on multi-core scaling).
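
A small sketch of that reducers point (assumes Clojure 1.5+): the pipeline keeps the same shape, but r/fold can split the work across a fork/join pool.

    (require '[clojure.core.reducers :as r])

    (def xs (vec (range 10000000)))

    ;; sequential
    (reduce + (map inc xs))

    ;; potentially parallel over a vector: same shape, different
    ;; execution strategy
    (r/fold + (r/map inc xs))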

My main concern with immutable data structures (and most data structures designed to work in concurrent settings, actually), is their heavy reliance on garbage collection, and on old-gen garbage collection in particular. Old-gen GC still has some ways to go, at least in HotSpot.


Reading other Clojure code is weird. I agree that it can be hard to get an eye for other people's code, but the flip side is that everything is just functions and data. When I decided to go look at Ring or Compojure, it was incredible how simple the code really is. I've looked at plenty of other Clojure libraries and I have this revelation every time. It's awesome.

This is in comparison to reading an OO library in a language I'm much more familiar with but where inheritance / mixins mean you have to dig through many files (often not obvious which ones) to understand a piece of code.


I also struggle reading Clojure code that isn't my own. It reminds me of a quote by Brenton Ashworth on the subject: http://goo.gl/yCIWI


It seems like that mentality would make building a community of any kind difficult, as programming communities (companies, shared libraries, etc) are all about being able to read and improve one another's code.


I interpret this more as saying that it is ok if there is a barrier to entry. If it takes you six months of programming Clojure before you can easily read idiomatic code written by other people for example - maybe he's ok with that.


Interesting. One of the things I've come to love most about Clojure is that I find it easy to read and reason about code I did not write. Your comment demonstrates that is not universally true.


Have you used transients in Clojure? http://clojure.org/transients

They allow mutation as long as it's localized.
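
A tiny sketch of that localized-mutation idiom:

    ;; build a vector with transient mutation internally, then freeze it
    ;; back into an ordinary persistent vector before returning
    (defn squares [n]
      (persistent!
        (reduce (fn [v i] (conj! v (* i i)))
                (transient [])
                (range n))))

    (squares 5) ;; => [0 1 4 9 16]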


> 1) Immutable data structures are always going to be slower than their mutable cousins.

This would be ok, if immutable data structures weren't also much harder to reason about, implement, and actually use in your algorithms. Nothing is stopping you from doing functional programming and using immutable data structures in an imperative language (the reverse is very much not true), but why would you bother if it's easier to think about (and prove properties of, if you're into that thing) mutable data structures?

People recommend Okasaki's Purely Functional Data Structures all the time, but the main result of that dissertation isn't the clever immutable data structures; it's the fact that Okasaki was the first person to come up with a somewhat workable way to do asymptotic runtime analysis of immutable data structures. IMO it's not pretty.


I don't understand why you think that immutable data structures are more difficult to reason about and prove properties of. Perhaps you would enlighten me.


Likely at least in part because they've been studied far less than mutable/imperative data structures, and thus there are significantly fewer tools for reasoning about them. Plus, the most efficient functional data structures rely on lazy evaluation, adding a layer of complexity to reasoning about their behavior. In fact, in his 10-year PFDS retrospective, Okasaki noted exactly that:

> In 1993, I ran across an interesting paper by Tyng-Ruey Chuang and Benjamin Goldberg, showing how to implement efficient purely functional queues and double-ended queues. Their result was quite nice, but rather complicated, so I started looking for a simpler approach. Eventually, I came up with a much simpler approach based on lazy evaluation, leading to my first paper in this area. (When I say simpler here, I mean simpler to implement—the new structure was actually harder to analyze than Chuang and Goldberg's.)

or

> , I went into a frenzy, playing with different variations, and a few hours later I came up with an approach again based on lazy evaluation. I was sure that this approach worked, but I had no idea how to prove it. (Again, this was a case where the implementation was simple, but the analysis was a bear.) [...] The whole way home I thought about nothing but how to analyze my data structure. I knew it had something to do with amortization, but the usual techniques of amortization weren't working.


That's the passage I was thinking of. Compare the implementation and analysis of queues in Purely Functional Data Structures with any introductory data structures book if you don't believe me.


He means reason about performance/memory properties.


Immutable data structures are harder to implement yourself, not to reason about.

Anyone who has spent a few days with Clojure realizes how much easier it is to reason about them and to work with them.

Immutable data structures, combined with the software transactional memory that Clojure provides, are a godsend.

In a lot of cases they're also much faster than good old copy-on-write, and because they're immutable you get lockless concurrency. This is huge.

But it gets better: if you really find a performance bottleneck due to the use of immutability, you can fall back to mutability / place-oriented programming.
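
For example, a minimal STM sketch (made-up account names): both refs change in one transaction, so no other thread ever observes the money in flight.

    (def checking (ref 100))
    (def savings  (ref 0))

    (defn transfer! [amount]
      (dosync
        (alter checking - amount)
        (alter savings  + amount)))

    (transfer! 25)
    [@checking @savings] ;; => [75 25]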


Totally agree. We built a PaaS entirely in Clojure (https://circleci.com) and it's absolutely a grown-up language. I was unsure when we started (my cofounder has been using Clojure since 2008 and was adamant that this was a good idea), but Clojure makes it very easy to write big hardcore systems. Definitely one of our competitive advantages.


I take this as an extremely positive indication - that someone can be skeptical and won over after having built a substantial system. Virtually every time I look at a shiny new technology, even ones I profess to like, I like it substantially less after I've been using it for some time and know where its faults lie.


I've now been writing Clojure for 18 months, and it's definitely my least-hated language (Python is a distant 2nd, CoffeeScript a distant 3rd).


I also started with Ruby (well, after a number of other languages) and used Clojure for a time, and I just can't sympathize with this:

"And—if you like avoiding unnecessary frustration and boilerplate—it will make you happy."

Didn't get this feeling. I like avoiding unnecessary frustration and boilerplate! A great way to avoid boilerplate is to hide it behind macros, however that's not necessarily a great way to avoid frustration.

Know what's really awesome? When a macro injects a recur point, and suddenly you're not recur-ing to where you think you are; you're recur-ing to a point somewhere within the macro. The only way to figure this out is to go dig through the source of the macro.
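
A contrived sketch of that gotcha (not the actual library involved): the macro below expands into its own loop, so a recur written in the body targets the hidden loop, and macroexpand is how you eventually find that out.

    (defmacro with-retry [& body]
      `(loop [attempt# 0]
         ~@body))

    ;; expanding the call site reveals where a recur in the body lands
    (macroexpand-1 '(with-retry (recur (inc i))))
    ;; => (clojure.core/loop [attempt__1234__auto__ 0] (recur (inc i)))
    ;;    (the gensym number will differ)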

Sorry Clojure fans, what can I say? This did not make me happy. I am told that if Clojure did not make me happy, that's my fault, because I didn't study Clojure hard enough or something to get to the point where I should feel the Zen of Lisp flowing through my brain. Clearly this must be the blub paradox at work.

Maybe it's my fault, or maybe Clojure isn't the greatest language in the world for everyone.


There is no language that is "the greatest language in the world for everyone." It's unfortunate that you found a point of frustration with Clojure (or with one of its 3rd-party libraries?), though a single anecdote about a single problem you encountered shouldn't turn you or anyone else off a language.

As for macros, they should be used with care for many reasons, this being a good example. Similarly for Ruby: monkey patching should be used with great care, as it can introduce more problems than a surprise recur point.


Macros have a great deal of power, which is why they should almost never be used.

A macro injecting a recur point sounds just a little bit insane. I can't think of a reason why you'd ever do that.


I wrote an event collector for SnowPlow in Clojure (https://github.com/snowplow/snowplow/tree/master/2-collector...), and really enjoyed the experience. Leiningen is excellent, far better than any other build tool I've used, and Ring and Compojure were both great.

My only grumble with Clojure is that nobody seems to document the types that their functions take and return. It's a PITA having to read through a whole chain of functions just to figure out the types which are passing through it.


We use a lot of Clojure here at Factual: http://www.factual.com/jobs/clojure

I'm still in the process of learning from the Clojure gurus around here but I see that it has a lot to recommend it. I can already see increases in the clarity of my code when I write filters and functions for our data pipeline. The next step is to learn Cascalog.

We're fielding a sizable contingent to the Clojure West conference this weekend (which we're also sponsoring), so come say hi if you're out in Portland!


As for me, I did not even know the word "homoiconic" until yesterday, when I bought a book on Clojure after reading the author.

What I do know is that I have been in love with assembler since my first steps in programming and hacking. While everyone around me was using Pascal and BASIC in those days, inlining asm only for occasional I/O work, I used to scaffold tremendous routines and structures in a matter of days using bare asm and macros. While my friends, looking at my sources, could only say "what the ...ck is this, that is insanely sick, how do you understand all this?", asm felt natural and fluent to me.

Then the dark times of C and C++, Java, C#, etc. followed (BTW, deep inside I hate purified OOP; it always seemed so inhuman to me), and several years ago my road crossed with Lua, and I instantly loved it. A pity I had no chance to use it much, but I remember that feeling when code and data magically interlace and create beautiful structures.

Now I am looking at Clojure and recalling my asm youth, and those awesome days with Lua. But this time it has all the power of interop with major libs and services. I am giving it a try.


Any recommendations for a web framework in Clojure? I played around with Noir for a while but it seems like that project is abandoned now? What's the Sinatra/Flask of Clojure?


This is currently one of the best sites explaining the clojure web stack:

http://www.luminusweb.net/

There isn't really a single framework, rather you can pick and choose your components depending on your needs or preferences.


Compojure, Ring, and Enlive or Hiccup are a pretty common stack.


It should be noted that Clojurers are very much about having composable libraries with narrow applicability.

In Sinatra or Flask, you get a server (interface), a routing syntax, and built-in templating.

In Clojure, you have Compojure, which handles routing; Ring, a server abstraction for requests and responses; and the templating library of your choice. (I've used Hiccup and Enlive, as well as Mustache and Markdown.) You're responsible for gluing them together, which is really not too hard given the power of the language to create abstractions on the fly. This results in slightly more code, but also makes it much easier to replace or augment any one component.
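
A hedged sketch of that glue (library names are real, everything else invented): Compojure for routing, Hiccup for HTML, and the result is just a Ring handler you can hand to any server adapter.

    (ns example.site
      (:require [compojure.core :refer [defroutes GET]]
                [compojure.route :as route]
                [hiccup.core :refer [html]]))

    (defn page [name]
      (html [:html
             [:body
              [:h1 (str "Hello, " name)]]]))

    (defroutes handler
      (GET "/hello/:name" [name] (page name))
      (route/not-found "Not found"))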


To be fair to Sinatra, it uses Rack underneath, and farms out templating to different libraries. It's certainly more of a framework than Compojure is, but it's not monolithic by any means.


In one particular website, things were much simpler when I removed Compojure. I remember thinking that was pretty cool. (Seeing clearly what Compojure did and dropping it in favor of a simple hash table.)


Sounds good if you know nothing but Java and have never heard of CL - never knew about, say, (disassemble #'foo), or what an FFI is, etc.

This is also very telling - lets you deploy your Clojure Web app to a JBoss server and take advantage of JBoss’s scalability without any XML configuration. You get a mature enterprise-ready Java server without the pain of Java or of configuration.

I wonder how many orders of magnitude difference in "scalability" we would see with a simple nginx -> fastcgi -> sbcl setup.)

Memory usage over long periods of time with pending storage/back-end calls is also an interesting topic - how the JVM blows up after just a few hours in "production".)


That's unnecessarily dismissive. CL is a wonderful language, but it's gathered a lot of cruft over the years and essential features like multithreading are not standardized.

More importantly, it doesn't have a focus on immutability and functional programming by default.

I'm not even sure what you mean about the JVM blowing up after a few hours... Tuned correctly it'll stay up indefinitely.


> but it's gathered a lot of cruft over the years

In what way? The Common Lisp standard hasn't changed since 1994.

> essential features like multithreading are not standardized

No one has made up their minds what the best atomic test-and-set operations are. A lot of platforms can't even agree on semaphores vs. condition variables. Which concurrency primitives do you think should be standardized, and which should be left to libraries? To me that seems like a trick question, because there's no good answer.

For example, I think software transactional memory is a dead-end, and shouldn't be part of a language standard, but many people involved in Clojure obviously disagree.

> More importantly, it doesn't have a focus on immutability and functional programming by default.

You can always do FP in Common Lisp, but you can't do gotos in Clojure.


> You can always do FP in Common Lisp

I really wish this were true because I write a lot of Emacs Lisp (closely related to CL), but it's not.

If you don't have immutable lists and strings built into the language, you never know whether a given library function is going to assume it's fine to just bash the data in place. And that's not even getting into fundamental equality issues: http://home.pipeline.com/~hbaker1/ObjectIdentity.html


Tuned correctly means "throw in more RAM as long as there is no swapping, and then don't change anything"?)


It depends on what you want to do. With the large Java ecosystem, not having an easy FFI may not be a big deal.

CL, and SBCL in particular, are great, but SBCL uses a conservative garbage collector (or at least it did the last time I used it), so I'd say the chances of having memory problems with Clojure are lower than with SBCL. In fact, I remember having some memory issues with SBCL, though I no longer remember the details.

Being able to disassemble functions is great, but honestly I don't miss that option in Clojure.

Clojure's a great language that's really worth trying out. I'm not saying it's perfect, there are still warts, but it's immensely satisfying too.


The main issue is that Clojure happens to be more enterprise friendly than traditional Lisps.

It is easier to give a jar to the devops team, than trying to convince them to add support for yet another language the crazy developers are now trying to use.


Can anyone speak to the experience of debugging Clojure, especially without having JVM experience? The one thing that has me concerned is the depth of crash dumps and the amount of JVM knowledge required to interpret them.


Others' experiences may vary, but mine looks something like this. First, there is a stacktrace library that you need to use if you get runtime errors. This stack trace gives you line numbers and function names, so it's pretty easy to track the problem to the nearest named function.

From there, and for other errors, I generally use print debugging. Oftentimes you can also just test a function in isolation at the REPL. This, and the fact that you can hot-load code into a running process, make print debugging tolerable.

There are fancier debugging tools in development for clojure, though I have no experience with them. Essentially they let you attach a REPL at arbitrary failure points. This would make it quite easy to identify the source of the problem.

Knowledge of the JVM does come up, but you won't be digging through compiled bytecode or the guts of java libraries (unless you use a lot of java library interop).
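
A small sketch of that workflow (the function is hypothetical): trim the stack trace with clojure.stacktrace, or just exercise the function directly at the REPL.

    (require '[clojure.stacktrace :as st])

    (defn parse-age [s]
      (Long/parseLong s))

    (try
      (parse-age "forty-two")
      (catch Exception e
        (st/print-stack-trace e 10)))   ;; print only the top 10 frames

    (parse-age "42") ;; => 42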


How does this compare to lisp, where you can break straight into editing code, inspecting variables, etc, and then continue? This seems to be something Clojure really lacks, but I never see it explicitly stated in 'why you should use Clojure' articles, nor in Clojure books. What I want to read is a sober 'pros and cons of using Clojure' article, so I can properly judge whether I should switch.


Well, the stack traces are deeper; there is no way of getting around that. In practice it's not much of a problem, though, as you quickly learn to identify the lines from your own code.

Debugging tends to be more REPL-based than debugger-based. It is possible to debug from Eclipse using the built-in debugger, but it's not something I have done, as it hasn't really been necessary.


Luminus and LightTable (wow, that in-editor javascript demo) look pretty awesome.

Coming from Scala/Play, my initial points of resistance to jumping the fence are: 1) lack of compile-time type safety, 2) (for me) odd language syntax, 3) no ScalaQuery/Slick functional SQL wrapper equivalent.

How is Scala-Clojure interop? Would be interesting to jar up existing Slick-based model layer and invoke within Clojure stack ;-)

Not having yet taken the plunge, based on the LightTable demo Clojure development seems pretty rapid fire (read: no waiting for compiler to catch up).


For SQL wrappers, do check out Chris Granger's Korma (sqlkorma.com/docs), ClojureQL (https://github.com/LauJensen/clojureql), Hyperion (https://github.com/8thlight/hyperion), etc.


I did run through the Korma docs; if I were coming from Rails or another ActiveRecord-based framework I might get excited, but as soon as I start to see things like has-one and belongs-to I get worried about the generated SQL.

ClojureQL, however, looks intriguing, seems more functional than the alternatives listed here.

Would love a more rapid fire development experience than what Scala/Play offers, but for now there's a price to pay for type safety.

Clojure seems a great call for the dynamic side of the fence, and being on the JVM with its massive ecosystem is a major plus.


optional clojure type checking: https://github.com/clojure/core.typed/ (not finished, but showing promise)


This is a very intriguing post; it constantly makes reference to Rails and Django, but it stops there, at the abstract.

Can someone point to an A/B comparison ... a simple Rails/Django app and the Clojure equivalent?

"Just show me the code."


Weird font rendering on this site - the glyphs are very distractingly blobby. Increasing or decreasing the zoom level helps a lot if anyone's finding it similarly unreadable.


No thanks, I'll stick to Gambit.

Fast, RnRS/IEEE Scheme compliant, has a great runtime layer and FFI, can integrate with anything written in C.


I'd like to play around with this (as well as actually read SICP). However, does the fact that Apple keeps blacklisting the Oracle JVM get in the way? It would be nice to have Clojure bundle its own JVM rather than have a dependency, unless I am totally doing something wrong in setting it up (homebrew).


I use Java daily on my Mac, and it has never been a problem. I don't remember doing anything special to defeat any blacklisting, either. I believe that Apple's anti-malware agent is simply disabling the Java browser plugin.

Regardless, it doesn't seem to affect desktop development, so Clojure on!


Anybody would recommend Eclipse instead of Emacs? Or maybe other IDE? I don't want to learn Emacs and its keybindings, but if I have to, better tell me now. :)


Most people are going to prefer Emacs, but lots of people happily use Eclipse. Light Table is also a budding option. Vim is also fine if you're into that. ST2 can suffice with the https://github.com/odyssomay/sublime-lispindent plugin.


Just learn Emacs or Vim. It's for the best.


If you work in a Lisp dialect, the time you invest upfront in learning Emacs keybindings will pay off massively later.

Besides which, Eclipse is a finicky monstrosity, like Visual Studio.


Light Table is still in beta, but it is looking very promising.


Clojure has been grown up for a while; it hasn't changed that much over the past several years, especially on the surface.


This guy reminds me of Uncle Bob


There is a gulf between clever and smart. It may be clever to use Haskell, but it is not smart to hijack a Clojure thread to argue about it, again. If I had the power to downvote, I would be using it here.


Clojure's great. While I like static typing in general, I see a better chance of long-term success in Clojure over Scala. With Scala, there's nothing wrong with the language, but the amount of Java-in-Scala code I've seen has convinced me that there's unintended cultural risk-- at least in the enterprise, where software has been done wrong for decades-- while Clojure forces people to be exposed to new ways of doing things.


Scala and Clojure only work if you have top developers on your team.

Given my experience on international enterprise projects I think:

- Static languages win over dynamic ones, because most teams only write tests if obliged to do so, and even then most tests aren't proper. At least it makes sure the code is in a compilable state.

- Most developers on these teams are Google-copy-paste monkeys with only a basic understanding of Java/C#/VB, so they need similar-enough syntax to make the transition to new languages.


There's some progress being made on static typing in Clojure, too: https://github.com/clojure/core.typed

Still fairly early days on that, though (missing protocols and rest parameters are the big issues, I think).
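
A hedged sketch of what the annotations look like (API as of core.typed at the time; details may have changed since):

    (ns example.typed
      (:require [clojure.core.typed :as t]))

    (t/ann greet [String -> String])
    (defn greet [name]
      (str "Hello, " name))

    ;; (t/check-ns 'example.typed) then type-checks the namespace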


Oh my, the flame wars. Search for instances of the word "haskell" and "clojure" to see what I mean.



