Lisp: More is less (jameso.be)
113 points by jobeirne on Jan 19, 2014 | 118 comments



This post repeats two memes that float around the programming language space. One is: "it's so powerful that it's bad". The other is: "it's ok, but not for large projects". I don't think I've ever seen any evidence attached to either. (If the OP contains any, I missed it.) But they're the sort of things that sound plausible and have more gravitas than "Here are my current preferences", so they get repeated, and no doubt the more they get repeated, the more they get repeated.

When I say "evidence" I'm not asking for formal studies; that's too high a bar for our field. But one can at least ask to hear about specific real-world experience.

Of the two arguments, the "too-powerful" one has the disadvantage of being prima facie absurd, so I think the "large-project" one is more harmful. So, where are the large Clojure and Common Lisp projects that have been harmed by this alleged language weakness? Let's find some practitioners who actually ran into this.

For what it's worth, I haven't. Since the best thing for a large project is not to be so large in the first place, applying language constructs to make codebases smaller is a great strength when the language lets you do it—and Lisp lets you do it.

Edit: [deleted off-topic bit]


> it's so powerful that it's bad

I had the pleasure of using Cascalog, which is written in Clojure, in production at work. While it was in fact written by some very smart people, we had a very difficult time using some of its constructs that were very cleverly abstracted away behind macros.

The problem was that it felt nearly impossible to debug problems we had because of the long, impossible stack traces. Further, it was very hard to convey to another very smart programmer why some functions behaved one way and others behaved very differently. I'll reiterate that I thought the other guy I was working with was really smart, and I'm at least not an idiot, and we both felt like we had a really hard time unwrapping what the code was doing.

On the flip side, if it were written in Java (I think some parts actually are, but more under the hood), you could point at the code and say "That's where the map function gets called on all the workers" (Cascalog is for Hadoop), or run the code and get some kind of stack trace where you could at least begin to figure out what was going on. We weren't even doing anything cutting edge.

For me, I love the academic/fun endeavor. I have wasted countless hours playing and learning. But if you asked me if I would base any critical part of my production app on Clojure, especially when there are more than a couple people who weren't Lisp experts, I would have a really hard time justifying it after what I saw when I tried.


Finally some comment explaining a specific problem involving macros.

First of all, Clojure has a problem with stack traces; other Lisps are much nicer in that respect (but do not have to face the JVM).

Anyway, macros can be difficult to debug. I am sure you have heard of and used the tools that usually exist in Lisps, macroexpand etc. Nevertheless, macros are transformations of the AST and thus not as easily traceable as function calls.
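For instance (a quick Clojure sketch): before debugging code that uses a macro, you can look at what it actually expands into:

    ;; Inspect what a macro produces before debugging the code that uses it.
    (macroexpand-1 '(when (pos? x) (println x)))
    ;; => (if (pos? x) (do (println x)))

    (macroexpand '(-> 5 inc (* 2)))
    ;; => (* (inc 5) 2)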

Nevertheless, when working with ClojureScript on a web app, I grew really fond of the possibilities macros offer, possibilities that are hardly achievable with JavaScript (HTML templating within ClojureScript code, etc.). http://blog.getprismatic.com/blog/2013/1/22/the-magic-of-mac...

I wrote some macros to help with HTML5 canvas contexts and these made my code a lot more reliable and readable.
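For example (a hypothetical sketch, not the actual macros from that project), a macro can guarantee that save/restore calls on a canvas context are always paired:

    ;; Hypothetical sketch: wrap canvas drawing so .save and .restore are
    ;; always paired, even if the body throws.
    (defmacro with-saved-context [ctx & body]
      `(let [ctx# ~ctx]
         (.save ctx#)
         (try
           ~@body
           (finally (.restore ctx#)))))

    ;; Usage:
    ;; (with-saved-context ctx
    ;;   (.translate ctx 100 100)
    ;;   (.fillRect ctx 0 0 50 50))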

The problem with keeping languages less powerful is that you often end up with something like Java: certainly quite understandable when you look at a few lines of code, but in the end you need a whole lot of complicated patterns and best practices, and then you get hit by a boomerang in the back of the head.

Take-away message: macros should not be used on every occasion, but they are really helpful in central places.


That's why some people prefer to use Lisp: it has better debugging tools for that, and its stack traces are easier to use.

Still, debugging macro-using code IS harder. The first thing I need is full and partial macro expansion in the editor. There is a bit more beyond that. When all else fails I use an interpreter (most Common Lisp implementations have both an interpreter and a compiler) to follow the expansion process in detail.

If a supplied macro creates errors which are hard to understand, then it is also possible to request better compile time error reporting from the developers.


It seems to me that if you give somebody Cascalog and native Java or any other Hadoop query language, you will quickly see why macros are great.

Let's be honest: nobody wants to write Hadoop jobs directly in Java. Clojure has a query language built in that feels natural and has the full power of the language.

Other people build things like Hive or Pig that come as completely different languages.


Racket is a lisp that has a specialized macro debugger to help in that very task. The point is that instead of throwing away a potentially good tool, there are people working on making that tool more reliable.


>This post repeats two memes that float around the programming language space. One is: "it's so powerful that it's bad".

Also known as "less is more", which is a well established point in programming, and with a lot of historical examples to showcase it.

>The other is: "it's ok, but not for large projects". I don't think I've ever seen any evidence attached to either.

Well, where are the successful large projects written in Lisp (from either a number-of-happy-users or a monetary-success perspective)? How many are there compared to other languages?


I don't get how taking away macros is "less is more".

Since macros provide capabilities that no other feature does, it seems to me that taking them away is "less is less".

As I understand it, the intent behind "less is more" is to boil things down to a minimal number of 'things' (for lack of a better word) without sacrificing capabilities, which in practice involves getting rid of redundancies and overlap while coming up with orthogonal 'things'. Reducing the number of 'things' while also sacrificing capabilities seems to be throwing out the baby with the bathwater.

(Although to be honest, I've been unable to find a definition of "less is more" in the context of programming anywhere.)

Could you clarify what you mean by "less is more"?


>"less is more"

Less what? Do be more precise. I've heard that with regards to complexity – not so much with regards to power.


"Well, were are the sucesfull large projects written in Lisp[...]? How many are they compared to other languages?"

In our field, tools get chosen not by merit but by what's the current fad. It's unfortunate, but this fact makes those two questions unhelpful in moving the discussion forward.


>In our field, tools get chosen not by merit but by what's the current fad. It's unfortunate, but this fact makes those two questions unhelpful in moving the discussion forward.

That's an idealistic and elitist response.

Very removed from the empirical and scientific spirit, which would suggest that if people use other languages for large projects (say C/C++) there are reasons for this, besides them being "fashion victims" and "doing it wrong".

Some of those reasons would be the appropriateness of those languages for the computers of the 70's - 90's (at a time when Lisp machines were slow and resource hungry), or the availability of tons of library code afterwards, the better control over the memory layout needed for large-scale projects like a browser, an OS, Office or Photoshop, etc etc.

Notice how the response just moves the goalposts a little further, without truly answering. Even if, for example, you are right and languages are used because they are fads, you failed to answer why LISP wasn't picked as a fad itself.


I would think that 'fad' plays a role sometimes and in some areas, but that it is neither sufficient nor necessary to describe language adoption.

But there are mechanisms which may look like 'fad'. For example in the academic community a lot of progress is only incremental and people need something new to publish incremental results.

Industry demands that universities teach the language du jour.

'Industry analysts' give technology guidance and tell companies what to use.

Often it seems 'modern' to reinvent everything. Look at Clojure, a Lisp dialect with basically zero backwards compatibility. It allows people to reimplement the old stuff, sometimes in slightly different ways, and claim some achievement. You also don't have to deal with the old people, who 'know it already', or with 'old' technology. The community self-selects for newcomers and those willing to reimplement old stuff and to invent newish stuff.

It is also about communicating ideas. If one uses a language few speak, one gets less attention, mindshare, etc. Thus use something which is on the rising part of the hype cycle, where the attention of many is easier to get. If one wants to promote a new framework, better use a popular language underneath it. Otherwise it could be nicely engineered, but few will hear of it, few will try it and few people will use it.

Also, some technologies are popular - like the JVM - and this allows one to leverage the engineering efforts of others. Popular technologies often seem to be ported widely and to have more active maintenance.


The idea that the core language is not what drives adoption is clear. There are a ton of other things that go into this kind of thing.

Tooling, libraries, schooling, the existing base of people who know the language, CPU architecture, memory constraints and so on. And of course all the nontechnical things like marketing.

So saying that there is not as much Lisp as C++ code is not an argument that C++ is a better language.


That's true, but the idea that you can somehow separate the core language from the other factors ("tooling, libraries, schooling, the existing base of people who know the language, CPU architecture, memory constraints") is a fallacy.

It might be possible for a totally academic or greenfield small-time project, but not at all if you do commercial, pragmatically oriented development, with teams, constraints, deliverables etc.

So while I agree that "there is not as much Lisp as C++ code" is not an argument that C++ is a better (core) language, I also think that this fact shows that C++ is a better language+extras for more projects.


I would say it 'was', not 'is'. The amount of legacy Java does not show that it's better right now, only that it was better (or perceived to be better) in the past. The question of perception vs. actual 'goodness' is almost impossible to answer.

The interesting thing about Java is that it was clearly worse than something and only became better because people used it so much. Java was adopted and developed around the same time that Self was around. Now, Self at the time was owned by the same company, Self was just as small to send over the wire, Self was already very (very, very) performant (compared to Java, which was grindingly slow), and it had much better tooling.

The reason for all this seems to be that the people at Sun just did not understand the technology they had lying around in some research project.

Java was pushed and became what it is now; Self was not and became what it is now, proving that even with everything speaking for you at a point in time, you might not win.

(PS: in the end it might have been a good thing that Java was picked over Self, since Self might actually have won over the web space and we would all be using proprietary applets instead of the web we have today.)


> How can a static analysis tool keep up with a language that’s being arbitrarily extended at runtime? The prospect is daunting.

I don't really understand that point. In Racket, for example, programs macro-expand down to a very small set of primitives, such as `let-values` and `lambda`. This makes it easier to do analysis, not harder. For example, this is how something like Typed Racket can support every idiom in untyped Racket programs -- because they all expand down to a fairly manageable core set of forms. (Or if you need to analyze something not-so-primitive, you can stop expansion on whatever that is.)

Racket is descended from Scheme. I don't know if CL or Clojure expand down to quite such a small primitive core, but I imagine the story is roughly similar?

Anyway, writing such tools is not what the average programmer would do on a putative large project.

Any large project needs technical and social norms, mentoring, and leadership -- regardless of language. I think the language is the smallest part of it. Perhaps like how in security it's social engineering, not technical engineering, that usually turns out to be the weakest link.


Furthermore: Lisps tend to prefer staged metaprogramming (e.g. macros) over dynamic metaprogramming, e.g. Python's __whatever__, Ruby's method_missing, and JavaScript's obj[foo + bar](). I'd much rather debug static metaprogramming techniques than dynamic ones, any day of the week.

The fact is that Clojure/etc is much easier to analyze statically than the popular everything-is-a-dictionary scripting languages. That should be obvious given that the reference implementation is a compiler, not an interpreter. But stuff like Typed Racket and Typed Clojure should eliminate any remaining doubt.


And more importantly, macros are expanded at compile time, not runtime, so static analysis is still possible, which makes that statement a bit wrong.


And to drive your point home, Haskell code "expands" into System F. Macros in lisp do the same kind of transformation (AST rewrites), they just get to skip parsing.

I don't think the author understands that these things happen at different times. Especially because they said "at runtime" when macros expand much before that.

On the contrary, something like Rails is what I'd call extending the language arbitrarily at runtime. So the very lack of macros is what motivates shaky transformations at runtime.


Starting with "Goodbye, static analysis" it was pretty hard for me to take this article seriously. Macros desugar into primitive forms - this means most macro language extensions are amenable to static analysis! Clojure has Kibit, Eastwood, and Typed Clojure (a whole type system via ... macros!) for static analysis. In addition Clojure and ClojureScript now both have fairly powerful analyzers that catch bad style and errors today and can be extended to do even more.


Agree with everything you say, but it is worth noting that the insights you get from static analysis on desugared forms (which can be very large and complex!) are much harder to interpret than static analysis on the original forms.

So to make static analysis tools useful on macros, they really need some way to map back to the original source forms. Not all tools do this (either at all, or well) - and to the extent that they don't, it is a big impediment for static analysis.

Also the killer challenge: macro expansion in Clojure can depend on a mutable environment at the time of macro expansion. This makes it impossible to do reliable static analysis, unless you are able to recreate the runtime environment at the time of macro expansion in your static analysis tool, which is hard/impossible in general.
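A tiny illustration of the problem (the names here are made up for the example):

    ;; Sketch: a macro whose expansion depends on mutable state at expansion
    ;; time -- exactly what makes offline analysis unreliable.
    (def feature-flags (atom #{:new-ui}))

    (defmacro when-feature [flag & body]
      (if (contains? @feature-flags flag)   ; read when the macro expands
        `(do ~@body)
        nil))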

This is part of the motivation for my little Kiss language experiment: with immutable environments you can keep the power of macros, but avoid the mutable environment problem. Ideally, macro expansion would be governed only by things that are provably compile-time constants (not sure how feasible this is while maintaining the dynamic flexibility of Clojure that we all love... but it's an attractive idea at least).


Good arguments for why Python is a great little language, but no convincing ones regarding Lisp: there are no large software projects undertaken in Lisp recently (or ever?) that have failed because of its extreme flexibility making collaboration impossible, no Lisp programmers complain that there are too many ways of doing things or about "conceptual clutter" (ok, after my encounter with CL I'd complain about it, but I never got any real work done in CL so my opinion doesn't matter - maybe the "conceptual clutter" would've ended up being "invaluable expressivity/flexibility" in a real-world project) etc.


Macros are used quite rarely in Clojure. In general they swing between two extremes, either as a means of providing some simple syntax sugar, or as what effectively amounts to a language extension.

The author cites Korma as an example of a library that uses macros, but Korma only uses macros to provide a small degree of syntax sugar. So instead of writing:

    (exec (where* (select* people) `(> :age 18)))
One can instead write:

    (select people (where (> :age 18)))
It's an extremely simple transformation, and can be expressed in a few lines of code.
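Roughly speaking (this is a sketch of the idea, not Korma's actual source), the sugar layer amounts to something like:

    ;; Sketch only -- not Korma's real implementation.
    (defmacro select [table & clauses]
      `(exec (-> (select* ~table) ~@clauses)))

    (defmacro where [query pred]
      `(where* ~query '~pred))

    ;; (select people (where (> :age 18)))
    ;; expands to (exec (where* (select* people) '(> :age 18)))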

There are some libraries, such as core.async or core.logic, that make more extensive use of macros, but these libraries are relatively rare, and take a lot of work to get right. It's something I'd expect to see in a dedicated library, not as part of a solution in a large software project.

Worrying about overuse of macros in Clojure is a bit like worrying about overuse of FFIs in Python. Sure, it's possible to abuse them in theory, but it's not really a problem in practice.


The OP is going to have kittens when he finds out about hylang (https://github.com/hylang/hy)


I prefer YVFC:

http://www.chiark.greenend.org.uk/~pcorbett/yvfc.html

Which, as you can immediately tell, is a Lisp interpreter written in Python, as a single expression. No macros, though.


I recently found out about hy and I am seriously evaluating it for use with Django in production. Does anyone have experience using hy in production they could share?


Played with it for some Bottle apps and otherwise boring scripts (which are now half the size and easier to maintain). No ill effects so far, but dealing with classes is... Un-hythonic somehow :)

(Edit: typo)



If Debian thinks hy is stable enough to use in production I think it's good enough for me :)

P.S. I use Debian as my main OS for all my computers


What I value most in a programming language:

1. I'm able to concisely express what I'm trying to do.

2. I never, never have to copy/paste or otherwise do repetitive work because one case is slightly different than another.

Number 2 is probably more important to me. It's why I'm happy when a language has first class functions and closures, and why I'm unhappy when the language is Java.

I think the cases where you can't modularize things enough, such that you essentially have to include two versions of the same thing, are a programming-language expressiveness failure.

Finally, I strongly disagree that programmers aren't or shouldn't be language designers. Nearly every worthwhile program I've written that people have used has needed an API or scripting/query language. So any language had damn well better be suited to writing a parser. LISP fits that bill too.

If LISP has shortcomings, its expressiveness and power are not among them. If you want to complain about a lack of consistent implementations, lack of libraries, difficulty interoperating with any other language, lack of a canonical free implementation, no decent supported GUI toolkit, or the impossibility of distributing binaries real people can use without spending thousands of dollars on a professional LISP environment, go right ahead.


[edit: While I was typing this the title was changed. It was previously Why Lisp isn't (and shouldn't be) widely used in industry, which colors my comment.]

This points out some very real potential dangers of large-scale collaboration with Clojure (and presumably some or many other Lisps/Lisp-likes).

However, I think the conclusion is overstated. Yes, based on what's provided, it may take more discipline and better, more explicit processes for a team to effectively collaborate in Clojure than in Python (using the author's running comparison). Yes, if we take this at face value, it does appear that people who depend on static analysis might find Lisp lacking.

But does this tell us why Lisp isn't widely used in industry? If we assume that "widely" means "as widely as Java or Python," which seems to be the statement made here, I don't think it's valid to cite the provided complaints as most or even a large portion of the explanation. The fact that there are no or almost no mainstream educational institutions teaching new students Lisp seems to me a far more likely candidate for front-runner on this issue.

That it shouldn't be widely used is a little easier for me to agree with, only because I'm on board with some of the points here about the discipline and extra work it would take for a large team to effectively cooperate given the malleability of Lisp. I've worked with enough other programmers to know that kind of care and attention to process are very rare (and this isn't "all of you suck;" I know I have and will again cut corners and ignore protocol in situations where time or resources make it hard to do things perfectly every time).

Also, I think the author's last point is important to mention, because it'd be easy to miss it: He's not arguing that Lisp sucks. He states explicitly that it's great in at least some ways. I just don't think the black and white claims being made about its practicality are quite supported.


Although I love Lisps, I'd rather jump into a very large Python project than a Lisp one. Jumping into someone else's Lisp code feels like a jungle to me. Jumping into someone else's Python code feels like my good old slipper. Part of it is due to the very strict Python standard of coding. But I think it's primarily because of the "one good way to do it" mentality. On the other hand, I feel telling a Lisp programmer "This is the right way to do this" would be like an insult to their creativity. Obviously, it's not as black and white as that, but hopefully you understand what I mean.


"Part of it is due to the very strict Python standard of coding."

I don't know about that. Good code is good code. It's sort of one of those, "I'll know it when I see it" things.

The ease with which you can code classes for the sake of classes in Python can make some really hairy code out of what should be simple programs. Was that necessarily the 'one right way to do it'? Who's to say. And all the static analysis tools and syntactic conventions aren't going to undo those hairballs anytime soon.

You might just feel more comfortable with languages with lower code density. 'brandonbloom made a good blog post about that[1]. I think it can be applied doubly to any situation where meta-programming is employed.

1: http://www.brandonbloom.name/blog/2013/06/24/code-density/


Take JavaScript: if you want to iterate over a list, you have a dozen ways. One could hardly argue that any one is better. Some prefer using the built-in forEach, some .map, some underscore, some the native for(). As you say, great code is great code, and as long as it's well written and understandable, it's good JavaScript.

In Python things are a bit different. There are agreed-on ways to do certain patterns. If someone uses a different method, e.g. map(lambda x: something(x), [1, 2, 3]), Pythonistas will all agree that this is not the PEP-8 style of doing things and that for readability and maintainability's sake it should be changed to for x in [1, 2, 3]: something(x).

Another way to say that: very good Pythonistas will agree on a "best way to do it", whereas in Lisp, very good Lispers will agree that "both ways are very good and clean".

With all that being said, that's why I prefer to jump into a large Python project (assuming PEP 8 is strictly followed).


I hear a lot of bad things about large Java code bases.


I can imagine, but the strong typing does help keep a bit of order.


How does static typing help you to tame the use of classes, reflection, configurations, rest interfaces, db interfaces, ...?

Just on Friday I was talking to someone who said that his developers don't like the code and feel the constant urge to rewrite it... with the core functionality being a huge mess.


On most of the enterprise projects I work on, developers tend to "forget" about unit tests after the first few sprints.

Never coming back to them unless ordered to do so by management.

Without static typing the chaos would be even bigger.


s/strong/static/


Don't fear the macro!

"Lisp isn't a language, it's a building material" - Alan Kay.

Lisp is an opportunity to build a DSL that fits any given problem domain elegantly.


Most of the complaints in this article boil down to "macros are too powerful." I think this is the key part of the argument:

"A smart programmer is not necessarily an empathetic language designer; they are occupations that require different skillsets. Giving any programmer on your team the ability to arbitrarily extend the compiler can lead to a bevy of strange syntax and hard-to-debug idiosyncrasies."

There are at least 2 counter-arguments to this:

1.) In the simplest case, just restrict the use of macros. A team can easily adopt the rule that only the most experienced engineer on the team is allowed to write or approve macros. (And in my experience, the need for macros is fairly rare. I think my ratio is something like 100 or 200 normal functions for every macro that I write.)

2.) macros allow all kinds of interesting type checking, and data structure validation, and therefore they make it surprisingly easy to validate data and types as your data and/or vars get passed around your system. Consider all of these very interesting tools you can use in the world of Clojure:

Prismatic Schema which allows validation that a data structure matches a schema of (possibly nested) types:

https://github.com/prismatic/schema

and this now offers coercion, which makes this fantastic for importing JSON from other sub-systems or outside vendors:

http://blog.getprismatic.com/blog/2014/1/4/schema-020-back-w...

(I assume you could easily validate before giving data to Liberator to export your data while conforming to your schema: http://clojure-liberator.github.io/liberator/ )
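A tiny example of what Schema validation looks like (based on the 0.2-era API; details may have changed since):

    (require '[schema.core :as s])

    (def Person
      {:name s/Str
       :age  s/Int})

    (s/validate Person {:name "Ada" :age 36})
    ;; => {:name "Ada", :age 36}

    (s/validate Person {:name "Ada" :age "36"})
    ;; throws, explaining that :age does not match the schema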

There is work being done on an optional type system:

https://github.com/clojure/core.typed

Much effort has been made to make contract programming easy in Clojure:

https://github.com/clojure/core.contracts

But also I find the built-in syntax for writing pre and post assertions is clean and easy to use:

http://blog.fogus.me/2009/12/21/clojures-pre-and-post/
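For example, the built-in conditions look like this:

    (defn fahrenheit->celsius [f]
      {:pre  [(number? f)]
       :post [(number? %)]}
      (/ (- f 32) 1.8))

    ;; (fahrenheit->celsius "98.6")
    ;; => AssertionError: Assert failed: (number? f)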

In short, there are an abundance of mechanisms available with Clojure which help facilitate the enforcement of any kind of schema or contract, and some of these tools are enabled (and their syntax is made clean) thanks to macros.

In short: macros can be used for evil, but they can also be used for good. They are very powerful, so everyone should be judicious about their use, but there is no reason to argue that macros render a Lisp unfit for programming in the large.

Having said all that, I'll remind everyone that the ultimate counter-argument is offered by Paul Graham, in his essay "Beating the Averages":

http://www.paulgraham.com/avg.html

If that essay does not convince you of the value of macros/lisp, then nothing will.


The quote you pulled out boils down to something even simpler than that: people who are good at programming aren't necessarily good at designing APIs.

I think there's a kernel of a valid point underneath. A macro necessarily has a larger (potential) interface surface than a function, because there's less you can take for granted about how it interacts with the rest of your code. And I agree, API design is a specialized skill that requires quite a bit of thought.

But I agree with your conclusion more than his: this is a reason to take care with macros, not a reason to shun them altogether.


> just restrict the use of macros

Precisely. The problem with the lisps of old was entirely cultural. People would go do crazy wild things and then not bother to interoperate with the rest of the world. Meanwhile, the Clojure community has lots of experimentation, but ultimately produces a large number of stable, quality, reusable libraries.

Large-scale C++ teams often require approval for operator overloading or "dangerous" features. One can easily do the same for macros. Moreover, we now have distributed version control and can utilize lieutenant workflows, so we can dispense with this silly "commit bit" notion that means bad code sneaks in past domain experts with ease.


Lisp has had widely different uses:

* a teaching language

* a research tool

* an application programming language

Lisp was already around when the technology you are using today was still being invented. Lisp existed before Smalltalk, C, C++, Java, ... thus the technology was often developed in an unstable environment where inventions were just being made. Lisp also had to keep track of the changing IT landscape. During the 70s people were using DEC PDP computers.

Thus you find evidence for everything.

There are well-documented, stable, nicely reusable code bases in Lisp.

Clojure is most of the time married to a small eco-system: Java/JVM.

Lisp has seen and supported many more eco-systems and will see even more in the future.


As a language geek, I tend to hunt the Internet for old papers and manuals related to OS and languages.

The actual mainstream situation could be so different if the Xerox PARC research on programming languages and OSes, besides the GUI, had become mainstream instead of the AT&T ones.

The Interlisp, Smalltalk and Mesa systems were great computing platforms compared with what UNIX offered.


I tend to believe that with time, good genes that were previously trimmed for very pragmatic reasons, will reappear in more favorable contexts and spread.


Still, Lisp was on Unix from day two, and Unix was always a very popular platform for Lisp developers.

You could even get Lisp Machines from TI with embedded Unix, and Lisp Machines from TI and Symbolics which were embedded in Unix systems.

Several Lisp companies made their entire business from Unix: Lucid, early Franz, early Harlequin/LispWorks, ...


It might be, but I imagine having an OS where Lisp is the systems programming language is way different from a system where it is just another language.


Yeah, but that does not make it 'better' or more useful.

Lisp-based operating systems are no longer used, because they were quite complex (coming out of a research environment) and provided LESS functionality in some crucial areas (for example they were not multi-user - they were one-person, one machine, one 'world'). Additionally they were expensive.

The Xerox Lisp and Xerox Smalltalk workstations one could buy were severely underpowered (RAM, speed, ...).


I don't think you can credibly describe Java/JVM as a small ecosystem.... there's nothing else remotely close in terms of the combination of runtime platform capabilities and the number of available open source libraries.


> > just restrict the use of macros

> Precisely.

Enabling macros is the sole justification for the homoiconic syntax, which is often cited as the most off-putting feature of Lisps for uptake by large programmer teams. If you're going to have two "editions" of Clojure, one with the full feature set for language designers, and the other a more restricted sans-defmacro one for more general programmer use, then why not give that restricted one a more friendly syntax as well, or even just get them to use Java, Python, or whatever.


1) I genuinely prefer variadic prefix notation, even in the absence of homoiconicity.

2) Restricted does not mean "banned". It means that they need to be justified and subject to expert scrutiny.

3) Even if you never write a macro of your own, you're a beneficiary of the syntactic sugar and can leverage macroexpand to demystify otherwise opaque language constructs.


Pet peeve, sorry, but it irritates me that Clojure people act like this guilty-until-proven-innocent policy about macros is somehow original. It has been standard advice for decades. Chapter 8 of On Lisp (1994) is called "When to Use Macros" and its first section is "When Nothing Else Will Do":

By default we should use functions: it is inelegant to use a macro where a function would do. We should use macros only when they bring us some specific advantage.

I understand from a language marketing point of view why someone might say, "Oh, those other Lisps made wild and crazy use of macros. It was really bad! But we are enlightened and have restricted them." But it's a bogus way of playing to a bogus criticism. It would be better to just say that there's a tradition of how to use macros correctly.


Please name a language you use so that whenever you and I disagree I'll be able to cite the "Blub people".


Not sure what your point is? I'd be happy to be wrong here.


You paint "Clojure people" - I'm one - with a pretty broad brush. If one of us has said something you disagree with, then please cite it along with your rebuttal.

It is unlikely that you have an informed opinion of "Clojure people" on the basis of a few posts that give you heartburn.


Uh, I think maybe I'd better just quit while I'm behind.


Macros have their major purpose: they are a means of creating advanced DSLs. Simple DSLs can be created just out of higher-order procedures and list structure, but they will be bound by the general evaluation rule - that all the arguments are evaluated before the actual procedure application - which is not always what we want.

Macros are the way to add new special forms to a Lisp, or to a DSL embedded in it. This is why macros are there.
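A minimal illustration (the helper names below are made up):

    ;; The body is only evaluated when the test fails -- a plain function
    ;; could not guarantee that, since its arguments are evaluated first.
    (defmacro unless [test & body]
      `(if ~test nil (do ~@body)))

    ;; (unless (cache-hit? k)
    ;;   (recompute-and-store! k))   ; hypothetical helpers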

To avoid macros is to restrict oneself from designing a program as layers upon layers of DSLs, which is the most powerful paradigm. Just look at the RTML DSL of the Viaweb system.


I agree wholeheartedly, and would add: The usage of macros is something that can (and should) be avoided in most cases, but in order for you to learn when one should or should not write macros, you should write programs large (or complex) enough that you would need to develop a domain-specific language for the problem at hand. The creation and use of small utility functions which reflect your growing understanding of the problem will remove some of the need to write macros, until you need to perform syntactic transformations. (Note that this is the case where writing a syntactic transformation is the simplest and fastest solution to whatever problem you're working on at the time). You usually don't know when this will happen until you need to do so, but it seems to be the safer option to use a Lisp, which allows you to do that very quickly.


I mostly agree but fwiw, the usual counterargument on #2 is that those should be language facilities instead, which allows them to be carefully designed and then taken advantage of by the compiler. For example, in Racket (formerly PLT Scheme) you have: http://docs.racket-lang.org/guide/contracts.html


This is a terrible example for your argument, because contracts in Racket are entirely implemented as a library, thanks to the power of -- you guessed it -- macros. In fact, just about everything in Racket: the class system, the generic function system, the unit system, the type system, the serialized continuations, the pattern matcher, keyword arguments -- all of those are implemented with macros.


The problem with this argument is that you will never get that amount of independent development on every feature. There are usually only a relatively small number of people at the core of a language.

If I want a new feature in the language, it is very hard to get it in, and once it's in, it's in. With macros, different people can do their own thing and independently develop it.

Look for example at core.match: in every other language something like it would have been a new language feature. Clojure now has a state-of-the-art pattern matcher without Rich or anybody else having to do anything.
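Roughly what using it looks like -- pattern matching as a library, via macros, rather than as a new language feature (sketch):

    (require '[clojure.core.match :refer [match]])

    (defn classify [pair]
      (match pair
        [0 _] :zero-first
        [_ 0] :zero-second
        :else :neither))

    ;; (classify [0 3]) => :zero-first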


I'm not sure how well that specific example works. Are contracts built in to the language on a fundamental level, or are they just part of the standard library? Given that they're not included in racket/base, and given how flexible racket's core is, I would guess the latter, but I'm not sure.


"Are contracts built in to the language on a fundamental level, or are they just part of the standard library? "

One of the points of Lisp is that that difference doesn't matter.

But besides that, in this case it couldn't possibly matter. You write contracts for your functions. If they're violated at runtime you'll get a clear error that stops execution to contain damage, and assigns blame to the contract violator. At what point in that process does it matter whether contracts are "built into the language"?


To be clear, I'm not suggesting it's an important question per se.

_delerium presented an argument that the things people do with macros should be implemented as language facilities instead. Ve used racket contracts as an example.

I was saying that if racket contracts are not implemented as language facilities, it's not a very good example.


I wasn't excited about Scala's macros initially [a], but Eugene Burmako's recent talk [b] on the constraints the developers worked with to keep macros consistent and interoperable was illuminating, and has changed my mind to a large degree [c].

In the end, they only directly added support for 1) type-safe macros, aka "black-box" macros 2) ...invoked transparently as methods, aka "def macros"

They identified "white-box" (non-type safe) macros and quasiquoting as distinct from black-box macros, because the type signature of a black-box macro tells you approximately what it will do, meaning that you can treat it as a "black box". But additionally, in order to write a meaningful type signature, the input to a black-box macro must _already_ be valid scala code! This means that the addition of macros cannot actually result in new scala syntax [d].

[a] https://news.ycombinator.com/item?id=3709193

[b] http://www.infoq.com/presentations/scala-macros

[c] still hoping the existence of scala macros leads to the deprecation of a bunch of other features though

[d] see the explanation starting around 33m30s in [b]


I remember seeing the same kind of arguments about Ruby when it began to become popular. "Ruby is too dynamic. You shouldn't use it on large-scale projects."

Learning a language involves more than just learning syntax and semantics. You also need to learn how to write for maintainability. It sounds like the author is less certain about how to do that with Lisp, but instead of seeing it as a chance to learn more he writes off the entire Lisp family as impractical.


Exactly. I think it's part of a larger learning pattern that people go through: one starts learning about a new tool, library, or approach, and at some point, realizes that it's not perfect -- there are some cons.

I guess there are several ways to respond at that point: one is to persevere, learning more and coming to a better understanding of what advantages and disadvantages are, as well as the appropriate use cases.

Another is to just give up and write a proscriptive blog post, possibly also with a biased, incomplete comparison of the new thing to an old thing.

The latter approach is extremely frustrating for several reasons: 1) often, the authors ignore or fail to grasp both the pros of the new thing as well as the cons of the old; 2) the proposed solution is to throw out the baby with the bathwater, instead of to figure out how to improve the new thing; 3) it provides fuel for others' confirmation bias.


"OOP is widely-used and easily comprehended because it is a fairly simple way of modeling reality that is compatible with how human beings do it" I don't see how OOP is "fairly simple", "modeling reality" and "compatible with how beings do it". How inheritance, polymorphism, interfaces, classes, objects, types is a "simple" model of reality? You're simply used to think OOP way, that's all.


If I had the choice to develop a functionality as a Lisp macro or alternatively as an XML-based DSL for Java, I would know what I'd prefer...


Haskell, by its lazy evaluation, is basically a macro-only language, and people seem to be doing fine in that end of the world.

Granted, space leak issues are pretty difficult to analyse, so it makes the language seem hard to use in practice, but that's because all the low-hanging fruit like type errors are solved by how the language is designed, so you only end up with the hard bugs.


> Haskell, by its lazy evaluation, is basically a macro-only language

This is not correct. To understand why, please see Ryan Culpepper's answer to this SO question:

http://stackoverflow.com/questions/7046950/lazy-evaluation-v...


>Haskell, by its lazy evaluation, is basically a macro-only language, and people seem to be doing fine in that end of the world.

There aren't that many of them to begin with, so it could just be (self-)selection bias.

Forth people do fine using Forth too, but I don't see that as a point that it's an appropriate language for most projects and/or people.


There are more people in the #haskell channel on Freenode IRC than #clojure, #scala, #lisp, #racket, or #ruby.


didn't even know that.

BTW the people on #haskell are quite active, and nice :)

(Maybe a pointless counter-example: I once went onto #ruby and asked about an easy way to make a function name refer to a function (to be able to do things like list map f easily, without the superfluous do |x| f x end), and I got yelled at because I was trying to write "non-Ruby code".)


The Ruby community is very close-minded. I've been a victim of that behavior in #ruby as well. If you asked an equivalent question on #ror (the Rails channel) you'd get more than yelled at - no one would take you seriously from that point on.

To answer your question, because methods aren't first-class in Ruby, you can't pass them around the way you want to. I've decided I don't want my languages telling me what I can or can't do when I know what I want to do is a simple matter of making more types of pointers first-class.


You can pass Ruby methods around; the syntax is just ugly, and it isn't really used:

    def call_on_two fn
      fn.call 2
    end

    call_on_two 1.method(:+)      #=> 3
The problem with [1, 2, 3].map(:function_name) is just that map requires a block, and not a method or proc.

A method like that could be easily enough created, though it'd be kind of ugly:

    module Enumerable
      def map_fn fn
        map { |i| method(fn).call i }
      end
    end

    def foo num
      num + 2
    end

    [1, 2, 3].map_fn :foo #=> [3, 4, 5]
I don't know why there isn't an easy way to freely convert between methods, procs, and blocks; it's definitely something the language is missing.


I figured that out later on - that I should probably not think of Ruby as functional (my brain hardwires no-parentheses languages to functional languages, I think). After which the experience became slightly less frustrating.


>There are more people in the #haskell channel on Freenode IRC than #clojure, #scala, #lisp, #racket, or #ruby.

Which doesn't mean a thing. How many of them are employed developers working in the language, as opposed to dabblers?

Very few professional developers I've known hang on IRC. It's 2014 already.


A large reason there are fewer large Lisp projects is that it's pretty hard to get a team of Lisp developers. Also, many programmers who do know a language like a Lisp or Haskell are often going to be better programmers and will require a higher salary. Not everyone who codes in these languages is a great programmer, but I suspect there is some correlation.


Another reason: they don't tell you, or it is in a domain the average developer does not know anything about.

How large is the scheduler for the Hubble Space Telescope, which has been adapted for many other telescopes?

Who are the users of AllegroGraph?

How complex is PTC's CAD system which uses Common Lisp? A few years ago they mentioned 7 million lines of Lisp.


"OOP is widely-used and easily comprehended because it is a fairly simple way of modeling reality that is compatible with how human beings do it."

I disagree with this. Ask any non-programmer "is a square a rectangle?" Or "is a list of triangles a list of shapes?" and they will say "Yes!"

Yet as soon as either of these becomes mutable it all falls apart and intuition fails. You cannot change the height of a square independently of its width, you cannot add a square to a list of triangles (but you can add a square to a list of shapes).

Human beings reason "immutably".

"Fine only do OO with immutable objects!" I hear you say. It's a good idea buy you're now on the path (that I took a few years ago) towards functional programming.

This is why I think that OO, and more particularly mutable state, are quite difficult for new programmers to grasp.


The biggest problem with LISP is the lack of large organizations who use it.

C has all these problems as do C++, Java, Python, Ruby.


Google (ITA Software) is not large enough.


If one needs a large organization for a Lisp program, then something is wrong.


Oh, I'm not disagreeing with the logic; it's just that our industry moves as a herd, and if there aren't large orgs doing it, it's hard to justify to manager types.

Manager types barely tolerate literate programmers, let alone someone who can make something that a mouth breathing moron can't understand.


As a tangent, we still don't have a good definition of 'large' software project. Clojure hasn't been around for 8 years, so we don't have 10 or 15 year projects to look at. And I think we'd have heard if there was a 1500 developer team using it. It sounds absurd, but there have been plenty of projects this size and they 'work' insomuch as they generate enough revenue the sponsoring companies paid for them to get that size.


I think the designers of Clojure intend it as a language for experts. In many fields non-experts can make nice progress, but they need to be aware that they may stub their toes. So, pull requests, code reviews, pairing, etc. are available and well established as means for helping people make the transition from beginner to journeyman (person) and beyond. If I know how to walk, but not how to ride a bike, I hardly think others should be denied nice bike routes.


> OOP is widely-used and easily comprehended because it is a fairly simple way of modeling reality that is compatible with how human beings do it.

Have we not learned by now that these systems are not easy to reason about? Aren't all the things one first learns (i.e. Animal -> Dog) bullshit that should be avoided?

Why does every good OO book say that composition is better than inheritance? Why is every OO book full of examples of how to avoid mutability and make the system easy to reason about?

The idea of OOP systems (as generally thought of) goes completely out of the window as soon as you have any kind of concurrency, even just event handling.

> which rejects OOP

It does not reject OOP; it takes the useful features like polymorphism and gives them to you. Protocols are better than interfaces, better than duck typing.

> In Clojure, if I want to define a symbol there are nine different ways of doing so.

There are a lot more than nine. But I would recommend Rich's or Stu's talks on simple vs. easy. Just saying there are nine of something and thus it's complicated is idiotic.

Java has only one thing, classes; does that make it simple, or does that just mean that it's hopelessly overloaded?

Clojure is extremely simple. State can only live in a var, atom, ref or agent. Every one of these has clear semantics, including clear semantics in a multithreaded world. No other language has such clearly defined state management.
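For example (a minimal sketch):

    ;; An atom with clear update semantics under concurrency.
    (def hits (atom 0))

    (defn record-hit! []
      (swap! hits inc))   ; atomic update, safe from any thread

    ;; (dotimes [_ 1000] (future (record-hit!)))
    ;; @hits eventually reaches 1000, with no locks in user code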

> Clojure claims to include these language features as a way to mitigate the complexity of parallelism; frankly, I’ve never found threading or interprocess communication to be any sort of conceptual bottleneck while working on some fairly complex distributed systems in Python.

Distributed system != Shared Memory

Nobody, really nobody, can say that distributed systems are easy. Just listen to the people who implement this stuff. But it is clear that a language generally does not really help you with reasoning about such a system.

However, when you run on a 16-core machine with shared memory and you have to do lock ordering and all that stuff, then you will definitely be happy for the tools that Clojure provides.

> Less is more (as long as “less” is sufficiently convenient).

Clojure is actually a much smaller and much simpler language than Python can ever hope to be. Clojure is simple, and strives for simplicity in every feature of the language. See here:

- Simplicity Ain't Easy - Stuart Halloway http://www.youtube.com/watch?v=cidchWg74Y4

- Simple Made Easy http://www.infoq.com/presentations/Simple-Made-Easy


To add on to the OO counterargument, here's a thorough debunking of object-oriented programming: http://www.geocities.com/tablizer/myths.htm

Note that this refers to the Nygaard interpretation of OOP, which is also the most widely used: rigorously class-based and in many ways retaining a procedural nature.

Smalltalk and Eiffel are different beasts, but they never really made it.


OOP isn't the be-all and end-all, and it isn't really easy to get into. But that doesn't mean that modelling hierarchies is not necessary in some domains. E.g., the DOM was a very big reason why Rust was considering adding OOP. Performance and readability are hurt when you have to represent a hierarchy without support for it.

The article simply says that giving ALL programmers the power to design the language leads to bad things: Lisp, Clojure, etc. And I can see why. People love making their own languages, it's fun, but a good programmer and a good language designer are two mostly unrelated things. A good programmer often needs to look at a problem from a weird angle, while a language designer needs to find shared views. I'm not saying they don't have a lot in common as well, but I can see how programmers can design AWFUL languages.

Note: "good programmer" means a good general programmer, i.e. someone who solves various tasks in his favorite programming language.


Two points

1. > But, that doesn't mean that modelling hierarchies is not necessary in some domains.

Agree, but the addition of full OOP seems overkill to reach this goal. Look at this Clojure code:

    (derive ::rect ::shape)
    (derive ::square ::rect)

    (parents ::rect)      ;=> #{:user/shape}
    (ancestors ::square)  ;=> #{:user/rect :user/shape}
    (descendants ::shape) ;=> #{:user/rect :user/square}

Clojure gives you hierarchy 'à la carte'. This means that you no longer tie the two things together; it is easy in Clojure, for example, to have many different hierarchies that are independent but still don't get in each other's way. Modeling the same with objects is hard. Just as an example: for often good reasons, multiple inheritance is not allowed in most languages; however, if you use hierarchy as a domain model and not as a programming model, you generally want it.

2.

I agree with the article's point that people should not invent their own languages for everything; however, that is a terrible reason to discard the language for 'large scale' production use. Every language has features that generally should be avoided; every language makes it easy to do the wrong thing. Macros are relatively easy to understand compared to some other language features I could name. Also, the effect of macros is generally local, unlike, say, monkey patching.


> however that is a terrible reason to discard the language for 'large scale' production use

I think that by `large scale` the article means something that needs lots of people working on it. I can see how several programming departments might form their own lisp-tribes that can't speak to each other because they disagree over tiny details (or are engaged in power play).


The same thing can happen in any language and over any detail. Power play is normally political, not really about language.

Also, one could easily argue that macros help with this situation, because the 'right way' can be encoded in a macro and then you can require everybody to use it. That seems a better solution than long documents that explain in detail how X is done (because the language can reduce the code duplication). I remember such things from my (short) C++ experience.


Or just use a language that has one way of doing things? C++, with its pre-compiler magic and several (three or four) ways to define a variable, is a rather bad example.

Things like this are bumps on a road, where your organization is a car with bad suspension. Sure, bad suspension will cause problems down the road, but no reason to drive your car through rocky terrain.


This is just fuel in search of fire.

"OOP is widely-used and easily comprehended because it is a fairly simple way of modeling reality that is compatible with how human beings do it."

Close, but no pants, Buckwheat!


You don't find large-scale Clojure projects, because Clojure is very concise.


Also I would guess that people who like Clojure's philosophy are much more likely to build simple / loosely coupled / composable services than a big monolithic project.


Using a Lisp does not necessarily preclude static analysis.


I don't find the arguments against macros convincing, since the emergence of syntax transformations while modeling a problem reveals many deep relationships between parts of the domain... in the cases I've seen, anyway. I just want to share some stuff that I haven't seen others write about.

I don't code much lisp these days (though getting into clojurescript a bit now), but when I used to, the cycle went pretty much like this -

1. Express what you want to express as an s-expression, capturing known structures in the simplest way I can think of.

2. Figure out which aspects can be "functions" straight forwardly and which are macros and implement them.

3. Test and iterate a bit till I like the way domain elements are composing and the way the composition looks in code. Try to reduce the required concepts in each iteration.

4. Document the relationships that have emerged from this process so others can understand it.

5. Usually I'm done, but sometimes (the few) users of my "api" come back with questions, based on which I iterate a bit more.

I've mostly followed this in building the "editing style specification language"[1] part of the product "muvee Reveal" (an automatic video editor)[2], built in a custom scheme dialect called "muSE"[3]. (Full disclosure: I've happily worked for muvee Technologies from 2002 to 2011.)

Btw - most discussions of macros and Lisp seem to assume that there are two things: a) functions and b) macros. There are more, depending on the kind of Lisp system you're working with.

You could form a taxonomy of sorts based on whether argument forms are evaluated and whether the result form is evaluated -

1. Argument forms are evaluated before "apply", result is a value (i.e. not evaluated). => Function

2. Argument forms are unevaluated before "apply", result is code (i.e. is evaluated). => Traditional macro

3. Argument forms are evaluated before "apply", result is code (i.e. is evaluated).

4. Argument forms are unevaluated before "apply", result is a value. => Traditional macro (depending on system)

In the course of using the domain modeling approach above, I've written stuff like functions that create one or more macros, macros that evaluate to function values (not s-expressions) and such stuff that might be considered an "abomination" by the OP ... but in the context of the domain, the concepts are usually clear enough to be used without major issues.

[1]: https://code.google.com/p/muvee-style-authoring/
[2]: http://www.muvee.com/en/products/reveal
[3]: https://code.google.com/p/muvee-symbolic-expressions/


The author of the blog post is pretending to be "one of us" (people who get Lisp) - as in "Lisp devotees (myself once included)" - but apparently they never understood it if they're still thinking that Lisp's advantages "make Lisp into an unwieldy, conceptual sledgehammer that’s often out of scale with the problem being solved." The word "often" there also makes me think they're making a point without experience or proper evidence.

They're a lisper who cut their teeth on Clojure. I think the reason people like me disdain Clojure's lispness is that we think it doesn't really teach you the philosophy behind it. This is not an argument, just my perception.

They're also in favor of code censorship (let's remember that censoring is detrimental to creative processes):

"Giving any programmer on your team the ability to arbitrarily extend the compiler can lead to a bevy of strange syntax and hard-to-debug idiosyncrasies. "

I use Racket in production along with my team, and may I suggest a humble, easy solution: one person makes a pull request, another reviews it, and if new macros are introduced we discuss them with the team to see whether they're necessary. It's that simple. The blogpost author is making a big deal out of nothing. To prefer a language that doesn't allow that power because the author has a problem trusting others, instead of choosing to communicate with their team members, is appalling.

The author also keeps mentioning Python's "simplicity". How can anything be as simple as (function args)? I've yet to understand what people who argue this point mean by "simplicity".

Then the author talks about static checks. "How can a static analysis tool keep up with a language that’s being arbitrarily extended at runtime?" Simple: do macro expansion before static type checking, as Typed Racket does.
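
A tiny Typed Racket sketch of what I mean (my own example, nothing fancy):

    #lang typed/racket
    ;; the macro is expanded first; the *expansion* is what gets type-checked
    (define-syntax-rule (twice e) (+ e e))
    (twice 3)          ; fine: expands to (+ 3 3)
    ;; (twice "oops") would be rejected, because the expansion (+ "oops" "oops")
    ;; fails the type checker - no special knowledge of the macro is needed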

They're also still playing with SQL DSLs. I think that's such a waste of effort. SQL is already a DSL for talking to the DB. I don't want another layer, because I'm not going to be manipulating SQL in my code: "SQL" has nothing to do with the problem domain I'm working on. At that point any SQL queries have already been abstracted away inside functions with meaningful names like associate-product-to-customer or whatever. I don't want to talk about SQL ever in my problem-domain abstraction layer.

Using SQL DSLs as an argument against macros from the angle of static type checking is a poor argument, because SQL DSLs are usually for people who write mutable code anyway. I use Typed Racket's DB library, and its querying functions work together with the type system to let me know if I'm not handling some potential kind of value that might come from the database.
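
Concretely, something in this spirit (a rough sketch with made-up table and column names, using the plain db library against a PostgreSQL-style connection):

    (require db)

    ;; the SQL lives here, behind a domain-level name; callers never see it
    (define (associate-product-to-customer conn customer-id product-id)
      (query-exec conn
                  "INSERT INTO customer_products (customer_id, product_id) VALUES ($1, $2)"
                  customer-id product-id))

    (define (products-of-customer conn customer-id)
      (query-list conn
                  "SELECT product_id FROM customer_products WHERE customer_id = $1"
                  customer-id))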

The author then mentions Unix's consistency. Unix couldn't even decide on a standard notation for command-line arguments. Then on to that fallacy that reality is object-oriented. Objects can't possibly be as composable as functions, because objects break down into methods (which are not composable, not first-class, etc.), whereas functions (lambdas) can make up everything and can really be thought of as the atom of computation (i.e. the Lambda Calculus).
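
In Racket, for example, the composed thing is just another first-class value (a trivial illustration of mine):

    ;; functions compose into new functions; methods don't
    (define clamp-and-round (compose round (lambda (x) (max 0 (min 100 x)))))
    (clamp-and-round 123.7)   ; => 100.0
    (clamp-and-round 41.3)    ; => 41.0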

Complaints like "Clojure has nine different ways to define a symbol" are moot. Pick one that your team likes and go with it. On to the next thing. Also, to argue against Lisps by arguing against Clojure is like arguing against democracy by arguing against the Democratic Republic of the Congo.

I do believe the blogpost author is severely misguided in their criticism. To say things like "Python wants the conceptual machinery for accomplishing a certain thing within the language to be obvious and singular" while ignoring the fact that Lisp's machinery - again, (function args) - is much simpler, and just as obvious and singular, is disingenuous. It does make me believe that all their SICP reading was for nothing (I've only lightly skimmed SICP and I don't pretend to have read it).

The author does acknowledge (en passant) that certain Schemes (they don't identify which) don't suffer from this complexity (which makes the whole post look more like a criticism of Clojure). I'd invite the author to look into Racket.

They say that lisps "impose significant costs in terms of programmer comprehension". My experience is that if you divide your layers of abstraction correctly, you will be able to work in the problem-domain layer where nothing is obscure. And that layer is built from smaller parts in the layer below, parts that are also clear in what they accomplish because they do only one thing at their abstraction level. I've found that following this rule of doing only one thing per function makes for code that is easy to understand all the way from the bottom to the top layers. Programming this way, however, is the classic, boring way to write code [1], and because it's not a fad I guess people aren't too into it.

Also to the point above, having already rewritten a significant portion of a Rails legacy app in Racket with the help of my coworkers, it seems that lisps introduce more understanding and shed more light on the code, precisely because they make everything explicit (we code in functional style, so we pass every argument a function needs) and do away with the "magic" that Rails and Rails fans like so much. When something gets annoying to write we implement our own "magic" on top of it, not in terms of the silly runtime transformations that lesser languages like Ruby need to resort to, but through dynamic variables (Racket calls them parameters), monadic contexts, etc. - i.e. things that can be checked at compile time.
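
For example, Racket parameters look roughly like this (a toy sketch of mine, not our actual code):

    ;; a dynamic variable with a default; no global mutation needed
    (define current-locale (make-parameter 'en))

    (define (greeting)
      (case (current-locale)
        [(pt) "Olá"]
        [else "Hello"]))

    (greeting)                            ; => "Hello"
    (parameterize ([current-locale 'pt])
      (greeting))                         ; => "Olá", only inside this dynamic extent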

And finally: "I think it’d be irresponsible to choose Lisp for a large-scale project given the risks it introduces". Well, the only risk I've personally witnessed is the very real risk of your coworkers starting to dislike more mainstream, faddish languages like Python and Ruby, because they don't allow the same freedom, simplicity and explicitness that lisp does (lisp has a long tradition of making things first-class, which consequently makes those things explicit).

[1] We use top-down design to decide what the interface for a given abstraction layer will look like, and bottom-up design to decide which functions should be written in the layer below; then we repeat that cycle on the layer below, defining its interface top-down and the layer beneath it bottom-up. And we use algebraic type checking along with contracts to enforce post-conditions and properly assign blame to the right portion of the code, which speeds up debugging. These are all old techniques.
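
As an illustration of the contract part, a toy example of mine (not our real code):

    (require racket/contract)

    ;; the range contract acts as a post-condition; if it's ever violated,
    ;; the blame falls on this function, not on its callers
    (define/contract (distance x y)
      (-> real? real? (and/c real? (>=/c 0)))
      (abs (- x y)))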


By "simplicity" they mostly mean "what I'm used to". Same applies for people calling "Windows simpler than Linux". It's the same argument over and over: illegibility (because you're not used to it), too much power to handle (because you're not used to it)...


Right on. Jeff Raskin's equation is pure gold -

    Intuitive = Familiar


> Complaints like "Clojure has nine different ways to define a symbol" are moot. Pick one that your team likes and go with it. On to the next thing. Also, to argue against Lisps by arguing against Clojure is like arguing against democracy by arguing against the Democratic Republic of the Congo.

It's a valid complaint. I think the John Carmack quote, "Everything that compiler will allow will at some point be written", describes the situation well.

In other words, since Lisp pretty much allows anything, anything will get written in Lisp. When you look at a Lisp function, it might do what you want, or it might not. There are no guarantees it does what it claims to do. So chances are you either have supreme faith that the guy behind it held up his end of the deal, or you write your own. And with all the power Lisp offers, why not write your own? And this, I think, is generally the failing point: it brings NIH of the highest magnitude to software development.


> When you look at a Lisp function, it might do what you want, or it might not. There are no guarantees it does what it claims to do.

Tests, contracts, type checking - those are the guarantees. It's not unlike any other language.
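
E.g. a trivial RackUnit check (my own throwaway example) is already one such guarantee:

    (require rackunit)
    ;; pins down what the function claims to do
    (check-equal? (map add1 '(1 2 3)) '(2 3 4))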


Is your team hiring?


We're a team of 6 programmers, 3 of whom code Typed Racket. We all started here as Rails devs and got tired of it. I'm always pushing to invest time in researching better solutions, and our team seems to be open to that (after weeks of discussion, though - we're no heaven). Typed Racket is the solution we found after surveying the field. Our management has recently told us they believe we're now a good-sized team given the projects we have to maintain. If it gets out of hand, I'll be glad to know there are people we can hire to work with these technologies. We're based in NYC.


Since you don't have any contact info in your profile, I'd love to talk to you about your use of Typed Racket, and whether there's anything you can tell us about how to improve it. Feel free to email me at samth@cs.indiana.edu

Always happy to hear of people using my software. :)


We've talked a bunch on IRC - thanks for making my life easier. :)


How are you liking Racket instead of Rails? I'm assuming you're using it for web development.

I was a Rails dev who moved to Clojure, and I don't know much about Racket at all - how is the library situation? Of course in Clojure, if I ever need something, I can count on being able to find it in Java and interop with it.


To be fair and not to mischaracterize the situation: I have been programming in lisp for at least 5 years in personal projects, so it's not like I had to learn it now, and I also did not convert from Ruby to lisp.

I never liked Rails. I don't like anything that focuses on files, because in my mind the fact that code needs to be saved in the filesystem is simply incidental, so no code should rely on that fact; but Rails builds on top of that, telling you where to put your code (folder structure), taking control from you over what code sees what code (MVC), giving you command-line tools to use when it could simply provide functions at a REPL (gems, migrations, tests), etc. It simply doesn't get it at all.

In my opinion, a framework (loosely speaking) that solves only the easy problems (how to organize code) is pointless. But something like an FRP library that lets me do GUIs by focusing on how I want to transform data, liberating me from thinking about events and callbacks - THAT, to me, is something that solves a hard problem.

So to answer your first question, I'm happy not to need to touch Rails again - right now I only need to look at it, rewrite it in Racket, and delete it.

Racket is a much nicer system than Clojure: it has immensely intelligent people behind it, the documentation is stellar (unbelievably so, in my opinion), and there are libraries for everything you can imagine and more (like an FRP library). It also does native GUI everywhere, effortlessly. For the things it lacks, it is easy to write an FFI binding.
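
As an example of that last point, pulling in a C function is roughly this much code (a sketch; the "libm"/version details assume Linux, other OSes name the library differently):

    (require ffi/unsafe)

    ;; bind C's cosine from the shared math library
    (define libm (ffi-lib "libm" '("6" #f)))    ; try libm.so.6, then libm.so
    (define c-cos (get-ffi-obj "cos" libm (_fun _double -> _double)))
    (c-cos 0.0)                                 ; => 1.0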

I would never touch Java nor trust any library written in Java. The more I code the more I distrust code that relies on mutability. I haven't reached 100% pure code yet but I work towards that goal, not away from it.

EDIT: Yes, I am using Racket for a webapp that connects to a database and performs general CRUD operations.


I would like to hear more about your reasons for choosing Racket over Clojure.


Clojure is Java. I fundamentally disagree with everything Java. Java is also clunky. The environment that needs to be installed and maintained for Clojure is a lot more complex and subdivided than the one for Racket (which is an all-in-one, batteries-included deal). I believe Clojure might be a good option for those coming from Java, if they want to breathe a little, but since I'm already free, it would be a step backwards for me. Racket is a true Scheme (regardless of whether they like being called that).

Also, I already knew Scheme and its simplicity is very appealing to me. If there's a feature lacking, I can implement it. Not true for Clojure, which lacks several important foundational features (TCO, continuations...).
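
Both are trivial to demonstrate in Racket, by the way (my own throwaway illustration):

    ;; tail calls are guaranteed not to grow the stack
    (define (count-down n)
      (if (zero? n) 'done (count-down (- n 1))))
    (count-down 10000000)                 ; => 'done, in constant stack space

    ;; first-class continuations
    (+ 1 (call-with-current-continuation
          (lambda (k) (+ 100 (k 41)))))   ; => 42; (k 41) escapes past the (+ 100 ...)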

Why would you choose Clojure over Racket? (I'd only be interested in hearing the reasons someone that isn't a Java programmer would have).


I was under the impression that Clojure has been designed around immutability, and Racket not so much... ?


But it sits on top of mutable Java libraries.


A lot of confusion comes from regarding Clojure as a Lisp (in my opinion, to call Clojure a Lisp is the same fallacy as to call Ruby a Smalltalk) and especially thinking that Clojure is "what a Lisp should be".

On the contrary, the beauty of Lisp comes from its being an accidentally discovered, minimal set of unique, interweaving features (very few special forms, higher-order procedures, list structure to glue code and data together, macros, the way Lisp serves as a meta-language for itself, type-tagging of values instead of declaring variables, the numeric tower, and lexical scoping as a more recent addition).

This set of features is very balanced and good enough. Adding more "foreign" features actually ruins the balance.

Having "just this" and following the paradigm of layered DSLs embedded in a Lisp, popularized by SICP, lots of complicated things could be created, including CLOS which is nothing but a DSL (a bunch of macros and procedures).

This notion of what a Lisp is turns out to be helpful for understanding not just Clojure but also Haskell, which is, in some sense, also a small kernel based on the Lambda Calculus plus tons of syntactic sugar.

By adding more and more features without breaking the balance we got Common Lisp, while an attempt to keep the balance produced R5RS.

If we switch our perspective on what Clojure is to the notion of a scripting language for the JVM (a la Ruby) that sometimes looks like a Lisp and sometimes behaves like a Lisp, but strictly speaking is not a Lisp - because the unique balance was ruined by adding stuff - then everything, it seems, falls into place.

Arc is a dialect of Lisp; Clojure is a language of its own, but marketed as a "modern dialect of Lisp", which makes no sense.


The difference between Ruby and Smalltalk is clear, but your analogy is lost on me. I consider Clojure a lisp by any common definition. Why do you not?


Take a look at arc.arc and news.arc - that is why.



