Not Lisp again (2009) (funcall.blogspot.sg)
234 points by coldtea on March 14, 2013 | 175 comments



This is a fun demo of functional programming, but Lisp isn't so special anymore. In Python:

  >>> def deriv(f):
  ...   dx = 0.0001
  ...   def fp(x):
  ...     return (f(x + dx) - f(x)) / dx
  ...   return fp
  ...
  >>> cube = lambda x: x**3
  >>> deriv(cube)(2.0)
  12.000600010022566


This is my hang up with Lisps over and over again. I read through SICP and did almost all of the exercises. I've played with Clojure and done some small projects in it. I've recently started dabbling in ClojureScript. And I just can't seem to get to the point where Lisp becomes readable and quickly parsable by my brain. That snippet of Python you wrote is immensely more readable than the Lisp in the article. The equivalent snippet in Ruby, JavaScript, C#, Haskell, whatever, would also be immensely more readable than Lisps.

Does it just take some serious perseverance? Or am I just Lisp-dumb?


Here is my theory.

When we read code, we use our language abilities. Our ability to read syntax uses some sort of built-in grammar recognition, which has a maximum capacity and doesn't abstract well. Thus in "normal" programming languages a certain amount of flow of control is pushed into syntax and grammar, and the offloading of that work makes it easier.

Lisp goes the opposite way. There is pretty much no syntax, thus all flow of control is automatically pushed to your higher reasoning area, which suddenly does a lot more work and you notice it. Thus it feels like more work. The tradeoff is that everything is more flexible and easier to build abstractions on. But after you have programmed enough Lisp, the "flow of control" key words get wired properly into your grammar processing, and you stop noticing the effort again. But it takes longer for your brain to start processing things this way.

Something similar happens in normal language. The same grammar areas of the brain pay attention to pauses, speech emphasis, and so on. Which in written form we notice as punctuation. But it is also wired to pick up and process small connective words like and, not, or, etc.

Note, I have no evidence for my theory other than "it makes sense to me". However it is based on my limited knowledge of how brains process speech, which we have learned about because strokes that take out specific speech areas produce specific speech impairments.


It's not so much that Lisp has no grammar, but that it's a VSO language (or it has the tail-first/head-final parameter set, if you prefer), where infix programming languages more closely resemble the SVO natural languages most of us in Europe or North America are used to. I'm sure that something more "normal" like Python, C, Pascal or BASIC would be as hard for uninitiated speakers of verb-first languages to wrap their heads around.


That's a fair point, but it's a different point than the one btilly is making. (I'm not trying to be rude, I just think btilly's theory is interesting enough to merit some defense.)

Consider the formal grammar that might be used to implement a programming language. When we use the word 'grammar' here, it's obvious that Lisp has less of it than, say, Python. It's roughly equivalent to saying that Lisp has less syntax. So, his argument is independent of any VSO/SVO distinction.

Ben is arguing that Python's additional syntax, by formalizing common structures, allows us to (in some sense) externalize the inherent complexity of a problem (i.e. out of our brain). The downside is that it introduces some rigidity into the language. Lisp makes a different tradeoff: we are forced to handle all of the complexity ourselves, but in return the language is flexible enough that we can build exactly the right abstractions for our problem. There's a kind of conservation law.


Lisp has a lot of syntax. Check out the Common Lisp Hyperspec and the syntax for DEFMACRO, DEFCLASS, LET, LABELS, LOOP, ... and others.


Yes, and Clojure has a lot of syntax too. However, that is still nothing compared to the syntax of other languages, like Java. The amount of syntax in a language like Java is just maddening: even keywords (like final) have totally different meanings depending on their context.

So while I think that it's somehow wrong to always paint Lisp dialects as "having almost no syntax", I do also think that it's totally correct to say: "Lisp dialects have almost no syntax compared to mainstream languages".

P.S.: don't get me started on JavaScript, where syntax gets inserted for you automagically, leading to bugs that can be very hard to track down (cf. Crockford on what a mistake automatic semicolon insertion in JavaScript was).


Have you ever looked at Common Lisp syntax?

Just the LOOP macro:

http://www.lispworks.com/documentation/HyperSpec/Body/m_loop...


I meant to say "no syntax" there. I've now changed it.

I wouldn't say that the VSO vs SVO distinction is Lisp vs infix languages. Because in an infix language I can and do write things of the form do_this(some_thing, various, parameters).

However it does come up in OO programming, where we have some_thing.do_this(various, parameters) versus the CLOS (do_this some_thing various parameters). People seem to prefer the standard OO syntax even though the CLOS approach is more general (since do_this will dispatch to the correct method based on its parameters, and can pay attention to more than one parameter to do so if needed).
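
For what it's worth, here is a rough sketch of the same multiple-dispatch idea using Clojure multimethods rather than CLOS (collide and the :ship/:asteroid keywords are made up purely for illustration):

    ;; dispatch on the "types" of BOTH arguments, not just the first
    (defmulti collide (fn [a b] [(:type a) (:type b)]))

    (defmethod collide [:ship :ship]     [a b] :both-explode)
    (defmethod collide [:ship :asteroid] [a b] :ship-takes-damage)

    (collide {:type :ship} {:type :ship})      ;=> :both-explode
    (collide {:type :ship} {:type :asteroid})  ;=> :ship-takes-damage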


CLOS is more:

    (do-this-process object1 object2 .. object-n)
Like:

    (collide ship1 ship2)
or

    (render document1 printer1)


And then people call Lisp unreadable :-(

Even though I started out with Algol-like languages I never liked all the different rules for braces, parens, commas, square braces, assignment, variables, etc. etc., so Lisp was a natural fit for me.


Modern English is not exclusively an SVO language. You probably aren't aware of how often you use VSO sentences. For example, it is very common for interrogative sentences (questions) to have an inverted order. For example:

  Where is the (object)?
  When was your last birthday?
  Whither wander you? — Shakespeare
All of those sentences use inverted word order. One of the reasons that English is so expressive is that it uses both inflections and word order to determine the meaning of a sentence.

One of the great things about Lisp is that it allows you to define your own infix operators.


Sorry, but these are not actually examples of VSO word order. "Is," "was," and "wander" are the verbs in your example sentences.

"Where," "when," and "whither" are interrogative pronouns. They are the objects of the "to be" verbs. These sentences are actually OVS.


Thanks for catching my mistake. It is especially embarrassing because the term "inverted order" refers to sentences that place the verb before the subject.


On the other hand, I think that a prefix notation more closely resembles the way natural languages are spoken in the aforementioned parts of the world. For example (+ 1 2), if read aloud, would become "add 1 and 2", which sounds like a verbal instruction. Reading infix notation aloud in many ways strikes me as less natural; perhaps it's something that is learned from an early age and thus appears natural.


Through the lens of this example, Lisp does not offer any advantage. In fact, I'd argue, that Python, Ruby, JavaScript, C#, and even Haskell are simply explorations of this example, and others like it, with intentional ignorance of some of the other advantages Lisp offers.

The one thing that all those languages drop from Lisp is homoiconicity [1]. There are entire other categories of "Eureka Moments" that you can experience with Lisp through the lens of homoiconicity. This is at the heart of the utility of macros, but macros can exist without homoiconicity and there are other benefits besides easier macros.

What makes Clojure, in particular, interesting is that it's homoiconic in terms of more than just sequences: there are also maps, sets, etc. Common Lisp & Scheme encode maps, sets, etc. in terms of lists.

If you're having a hard time seeing past the syntax (or a lack thereof), I suggest exploring Mathematica. Download the demo [2]. The homoiconic parts of Mathematica are hidden behind more traditional syntax. Use the FullForm[...] function to see the Lispiness leak through. I suggest exploring the Rules & Patterns documentation [3] to get a feel for a non-macro approach to homoiconic transformations. Rules & Patterns make up a rewrite system, which is strictly more complex than a macro system, but often easier to grasp during initial exposure.

[1] http://c2.com/cgi/wiki?HomoiconicLanguages

[2] http://www.wolfram.com/mathematica/trial/

[3] http://reference.wolfram.com/mathematica/guide/RulesAndPatte...
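
For a tiny, hedged taste of what homoiconicity buys in practice (a Clojure sketch of my own, not from the links above): code read from text is ordinary data that you can take apart with the usual sequence functions before evaluating it:

    ;; code read from a string is just a list of symbols and numbers
    (def form (read-string "(+ 1 2 3)"))

    (first form)                  ;=> +  (a symbol)
    (rest form)                   ;=> (1 2 3)
    (eval (cons '* (rest form)))  ;=> 6, the same data with the operator swapped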


>Common Lisp & Scheme encode maps, sets, etc in terms of lists.

What do you mean by this? You do know that I can write some reader macros to get the same syntactic sugar for dictionaries and vectors as in Clojure, right?


Of course you can, but almost no one does. If people started to do it a lot, it would be just as interesting as it is with Clojure.


My personal "wow" moment with Lisp was this:

(+ 1 2)

I was thrilled the moment I realized that all three parts of this banal form can be arbitrarily complex forms themselves. For example, you can substitute "+" in place with a long piece of code that contacts a web service and in the end, after 100 lines of code, returns "+". This 100-line function is made of forms made of forms made of forms, and each can be replaced by whatever code you might need. It's the beauty of the conciseness of the ternary operator in other languages (a = b ? 1 : 2) taken to a possibly infinite extreme.

This can sometimes be achieved in other languages with the use of temporary variables, one-shot functions if the language doesn't have lambdas (or, like Python, lacks multi-line lambdas), and in the end littering your soul with the use of "eval".

This also leads to the other wow moment, when using Lisp makes other languages' syntax appear so inelegant and cumbersome. At its core, everything in Lisp is just like this:

(FUNCTION argument1 argument2 …)

When it clicks, it really hits you with the beauty of its perfect mathematical purity, and you wonder how you can feel so comfortable with the average C-like language, where for has a syntax that is different from while, switch/case is a beast of its own, sometimes you use curly brackets, sometimes square, sometimes regular or even angular ones, you must not forget commas and semicolons, you use colons and dots and question marks and myriads of keywords each with its own syntax, some things are functions and others operators, a single equals means something different from a double equals, and so on.
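
To make that concrete, a minimal sketch in Clojure (my own toy, not from the comment above; pick-op is a made-up stand-in for the "100 lines that contact a web service"): the operator position is an expression like any other, so it can be computed at run time:

    ;; the head of the form can itself be the result of evaluation
    ((if true + -) 1 2)          ;=> 3

    (defn pick-op [name]         ; hypothetical helper returning an operator
      (if (= name "add") + *))

    ((pick-op "add") 1 2)        ;=> 3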


I like the fact that you can substitute a more complex expression anywhere, even in the first position (the function to be called), but everything looks so similar that it's hard to see what's going on. I read more code than I write (I like to read about programming languages) and this:

((lambda (x y) (+ x y)) 3 4)

looks much harder to read than this:

(function(x, y) return x+y end)(3, 4).

In the Lua version, you can easily skim the code: here is a function (rather than a Greek letter), the parameters are inside parenthesis and separated by commas, the body comes next until the "end" keyword, it produces (returns) a value and then you pass the parameters (inside parenthesis and separated by commas again) to the function that was just created.

In the Scheme version, how am I supposed to know where this lambda ends other than counting parenthesis and having a lot of practice to recognize that (....... thing thing) is a function call where the function is more than a single name?

Similarly this:

(define (fn x) (lambda (a) (+ a x)))

IMHO looks much harder to read than this:

function fn(x) return function(a) return a+x end end

The second one is much clearer: "return function", you are returning a closure to the caller. The Scheme one makes me think: a function, another function, parenthesis, an expression, then it ends abruptly and I don't know what the hell the code is supposed to do. Again, I CAN read that code in Scheme, but only after learning the core concepts in Lua.


But I'd argue `case` in Lisp does have different syntax from `if`. They are built out of the same primitives, but you have to know how to compose those primitives. It's much like saying all functions are the same, but in reality you need to know the arity, order of parameters, etc before calling a function.

To me, Lisp often felt as if the English language only had commas for punctuation, and we then formulated all possible scenarios out of words and commas. Sure it's simple, easy for a computer to parse, etc., but it offloads a lot of work onto us as we read.

I'm also reminded of when I took a typography class in art school. My professor was adamant about typography not being "grey". He really wanted us to use weight, contrast, and spacing to create and enforce a hierarchy within our typography which would guide and aid the reader. I'd argue Lisp is completely "grey", while other languages are not.

Don't get me wrong, this thread has been a great read and I've gained new respect for Lisp from it. I'm inspired to give it another go.


Having access to the compiler at run-time is really neat.


It seems some people are able to read it, some people aren't. I made it a personal goal a few years back to learn Common Lisp, and while the first month after I started learning was hard, I've never really looked back since then.

If I was to go back before that point, I'm sure I would find most languages more readable than lisp, but now that I know it fairly well, the syntax/parens don't bother me at all. I can parse it really easily with a passing glance. Maybe it's just a matter of experience.

In the end, a language is a language...if you don't like one, you can use another. I happen to love lisp, but it's a personal choice, and I can understand why people wouldn't like the syntax.


Did you also learn emacs in parallel with Common Lisp?


No, I was learning Vim at the exact same time, so ended up using Slimv (vim equivalent of Slime). I think in my entire life, I've spent about 10 minutes in emacs. Not because I don't like it, but more so because I never took the time to sit down and learn.


Nope, you're not. Readability is crucial for those of us not wired the same way as those who seem to be more comfortable in Lisp. Can you get a lot done in Lisp? Sure, if you can scan through and make sense of it. I've tried for a long time, almost to the point of barging in on my work time, but I haven't been able to get comfortable. The innumerable parentheses, for example, make perfect sense to those with the Lisp brain, but since I'm reading it in the context of English, where too many of those are frowned upon, it's nearly impossible for me to follow quickly.

Whenever I bring this up among my colleagues, I get shot down by the proficient folks. The argument then turns into how I'm deficient somehow or haven't tried hard enough -- the word "stupid" came up a few times -- and that just completely turned me off the language.

Religion is a touchy subject.


Just an observation about the parentheses... Although it can be hard to make sense of the large number of parentheses, using good indentation (emacs does it for you) helps a lot.

Of course Python makes it easier to figure out what code belongs where, it was made with this goal in mind after all.

I believe it is a tradeoff: Lisp's syntax gives you flexibility and an easy (in relative terms) and powerful macro system at the expense of code readability (when compared with some languages).


If you're using indentation anyway, why not allow indentation to IMPLY parentheses?

so that say:

    foo 1 2 3
    bar 1 2
    if (eq qux 3)
      fizzbuzz 7
is translated by the parser to:

    (foo 1 2 3)
    (bar 1 2)
    (if (eq qux 3)
       (fizzbuzz 7))


This is suggested very, very often. It's been implemented a few times, but has never gained much currency. As best I can tell, it's one of those ideas that sounds good but rapidly grows either complex or kinda useless as you throw more cases at it, and you just end up with lots of weird syntax thrown around instead of lots of parentheses. It works best in a Lisp that was designed around such a syntax, like Dylan, rather than as something you bolt onto a paren-dependent Lisp.


I think the problem is when you get into more complicated nesting of code than a simple function. For instance:

(setf new-links (append new-links (link-scanner (cu-body url-hash) (parse-uri cu-url url-hash))))

That's a quick one-liner I wrote for part of a webcrawler. In Lisp you don't need to know anything much about parsing that syntax except that after an open parenthesis, you're going to have a function/macro followed by parameters.

Going by your example I could either do:

setf new-links (append new-links (link-scanner (cu-body url-hash) (parse-uri cu-url url-hash)))

or

  setf new-links 
    append new-links
      link-scanner 
        cu-body url-hash 
        parse-uri cu-url url-hash
or

  setf new-links append
    new-links link-scanner
      cu-body url-hash
      parse-uri cu-url url-hash
In the first example I've hardly improved anything, just removed a single pair of parens.

In the second example, I've split each new nested function call onto a new line and indented. This leaves "raw" parameters on the same line as the function call, whilst moving parameters that are themselves function calls to the next line. (Note that Lisps usually don't make any distinction between the two.)

In the third example, I've gone for some unholy mix where I've left the first nested function call on the same line as the first "raw" params, but moved the next params to the next line and repeated with any following functions. This is a mess...

Personally I think the 2nd example is actually quite readable. But here's the catch; this was 1 line from a 60 line function. If every line was multiplied by 5, is a 300 line function still as readable? And that's before you even get into any considerations of how to implement macros and other potentially complicated constructs.

EDIT: Admittedly, not every line is going to grow by a factor of 5, but it'll still be enough to turn a one page function into a multi-page function.


It's been proposed a few times, and some people have gone pretty far with it, http://www.dwheeler.com/readable/sweet-expressions.html

It's never really caught on though, I would guess due to inertia. Most Lisp programmers just get used to the parens, and maybe those that can't end up not using Lisp.


As a non-lisper reading over some of those examples I definitely prefer his "Sweet-expression". Seems much more approachable.

Once I make a serious foray into Lispland maybe that'll change...


Change your IDE's color scheme so that parens are rendered in a low-contrast font color. The parens will be less-obvious.

Emacs' automatic indentation to The Proper Place is tremendously useful. I find Lisp's indentation to be just as easy as Python's, for example -- likely because I had used a Lisp for six years before coming to Python.

Once you've used it for a while, you don't think of it as much different from using {} for control blocks, () for method calls, and [] for array indexing in other languages. For me, the parens "go away", in that I follow the program control flow more by indentation than by counting parens.


That's unfortunate, I think.

It would really open up the Lisp world, especially to those of us comfortable with Python.

It seems like such an obvious upgrade to me, since:

A: In no case will it break code that works now. B: It would always be possible to translate to and from it unambiguously, so if a team wanted to keep all of their code one way or the other, each individual developer could still see things with their own preference - imagine a tool like `gofmt`.


I don't see too much gain doing that though, and there are cases where using indentation would look odd. For instance:

  ((foo 1 2) 3)

  ((bar 1) 2)
would be

  foo 1 2
          3
  bar 1
        2


Not at all. It wouldn't be mandatory.

May I propose:

    (foo 1 2) 3
    (bar 1) 2


Then it misses the point, don't you think? There is not much gain between your version and the original one. On the other hand, we are used to parentheses in formulas and people don't complain too much about that.

I think the main annoyance is not the syntax per se but the fact that calls are way more nested than in an imperative language counterpart.


No, there is plenty to gain when you have stuff that's nested 3 or 4 levels deep.


You don't need to if you don't want to. It actually kinda forces you to break the problem in smaller pieces or use macros.


I have no doubt as to the power of Lisp, I've seen the end results with my own eyes. Haven't tried reading it on Emacs yet, but I guess it couldn't hurt to try again. It's just extremely difficult to turn off my semicolon thought process and turn on parentheses.


For me, reading Lisp is a tactile experience as well as a visual one.

Lisp code is a tree, and emacs has a lot of neat commands to let you move around the tree (up one level, down one level, leaf forward, leaf backward), and I find myself doing this whenever I have a piece of Lisp code open to "feel out" its structure.

It's a little different, and maybe less like "reading" than understanding a piece of Python code, but I like it.


There's no excuse for people calling you stupid.

But see orthecreedence's post. It can be learned with practice.


Thank you. I should make it clear that I'm still on OK terms with those people. It wasn't so much a direct "you're stupid" or else I wouldn't speak to them again, but more of a "that's just stupid that you can't get it". Which I guess is a roundabout way of saying almost the same thing, but I think they held back.

I can try to do something with it again if I have time, but I feel it's going to be a long and laborious process before I start to get comfortable. I used to unconsciously put semicolons ;)


Interestingly, I now unconsciously do this switching back to C:

(printf "%d" i)

Before I forget and switch back. I strongly suspect that the people who can't adapt are having difficulty because they are not programming with the new syntax full time. Practicing at night while the majority of your time is still spent with some other language is not the same thing. This is also true for learning human language and is why immersion programs are so successful.

Further, I absolutely do not buy the claims that people have that they can understand haskell easily but not lisp. Haskell might look more familiar at a casual glance, but its syntax takes a substantial amount of time to really understand.


> I strongly suspect that the people who can't adapt are having difficulty because they are not programming with the new syntax full time. Practicing at night while the majority of your time is still spent with some other language is not the same thing. This is also true for learning human language and is why immersion programs are so successful.

I think this is spot on. I have a minor quibble that "can't adapt" ought to actually read "find it difficult to adapt", for the very reasons specified--it's not a matter of can or cannot, but a reflection of how much time one is actually able to invest in learning, like a human language. Random, sporadic investment into learning French vocabulary won't help much in improving one's ability to actually speak French. Much will be forgotten that way.

Immersion is definitely the best model for picking up any new [programming] language. Each language I use with measurable proficiency--Python, Objective-C, C#, JavaScript, French, English, etc.--is in that list because I actually took on tasks in which that was the language I had to use. Each language I've messed around with on the side for fun/experimentation/curiosity--Mandarin Chinese, Lisp, Haskell, Java, etc.--is barely worth mentioning.


> I strongly suspect that the people who can't adapt are having difficulty because they are not programming with the new syntax full time.

That is very interesting. It indeed took me many months of night-time programming in Clojure and elisp to get to a point where I could feel comfortable with the syntax. I had a series of small "enlightenments" during that journey: the last one was not long ago, when I realized how I could "carry" state through a higher-order function.

But to stay focused I kept reading and re-reading great writing about Lisp by Paul Graham and kept viewing and re-viewing amazing videos by Rich Hickey.

It's as with everything: keep your eyes on the goal, not on the obstacles.

And at one point things are going to click in.


If you don't like the parens, check out Julia. It's basically Common Lisp with an infix syntax + cleanups.


>Python..is immensely more readable than..Lisp

fwiw, it is immensely easier to build a parser for (= z (+ y x)) than for z=x+y

you might ask, why should that matter ?

Ad outside the eyeglass store: why do we have ears ? so we can wear spectacles!

The point being, we first evolved as creatures with ears and later on devised spectacles so we could wear them over the ears. But we've now taken the ears so much for granted we think of spectacles as the innovation! Parsing should have been the whole point, but we now prize readability. We then add fluffy monikers like "serious perseverance" and "Lisp dumbness" to further tilt the balance in our favor.

http://briancarper.net/blog/442/lisp-syntax-doesnt-suck
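
To put some code behind that claim, here's a rough sketch of my own (in Clojure, assuming well-formed, whitespace-separated input) of how little machinery a reader for the prefix form needs:

    (require '[clojure.string :as str])

    (defn tokenize [s]
      (remove str/blank?
              (str/split (-> s
                             (str/replace "(" " ( ")
                             (str/replace ")" " ) "))
                         #"\s+")))

    (defn atom-token [tok]               ; numbers stay numbers, the rest are symbols
      (try (Long/parseLong tok)
           (catch NumberFormatException _ (symbol tok))))

    (defn parse
      "Reads one form; returns [form remaining-tokens]."
      [[tok & more]]
      (if (= tok "(")
        (loop [forms [] ts more]
          (if (= (first ts) ")")
            [(seq forms) (rest ts)]
            (let [[form ts'] (parse ts)]
              (recur (conj forms form) ts'))))
        [(atom-token tok) more]))

    (first (parse (tokenize "(= z (+ y x))")))   ;=> (= z (+ y x))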


Why would I need to write a python, or lisp, parser myself from scratch?


It isn't about writing Lisp parsers so much as about writing a parser for your particular language that solves your particular problem. Instead of writing a python program that plays minesweeper, you write a minesweeper language where minefield, detonate, mine, digit, grid are keywords in some sense, where minesweeping can be explained in terms of your minesweeper language itself, not in terms of python & variables & functions & all that.

I will let the master explain -

http://lib.store.yahoo.net/lib/paulgraham/onlisp.pdf

Read Section 1.2 - "Change the language to suit the problem. In Lisp, you don’t just write your program down toward the language, you also build the language up toward your program. Language and program evolve together. Like the border between two warring states, the boundary between language and program is drawn and redrawn, until eventually it comes to rest along the mountains and rivers, the natural frontiers of your problem. In the end your program will look as if the language had been designed for it. And when language and program fit one another well, you end up with code which is clear, small, and efficient.

The greatest danger of Lisp is that it may spoil you. Once you’ve used Lisp for a while, you may become so sensitive to the fit between language and application that you won’t be able to go back to another language without always feeling that it doesn’t give you quite the flexibility you need.".


You might not want to write a parser from scratch, but you might want a new syntactic abstraction, and that usually requires a language change. For example, Python's "with" statement. If Clojure did not already provide the analogous `with-open` macro, you could create it yourself.
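
As a rough sketch of how such an abstraction can be built in user code (Clojure already ships a real with-open in clojure.core; my-with-open below is a hypothetical, single-binding toy):

    (defmacro my-with-open
      "Binds name to the value of expr, runs body, and always closes the resource."
      [[name expr] & body]
      `(let [~name ~expr]
         (try
           ~@body
           (finally (.close ~name)))))

    ;; usage sketch:
    ;; (my-with-open [r (clojure.java.io/reader "data.txt")]
    ;;   (println (.readLine r)))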


parsing is not the point for people who have to write and understand programs, just the point for a parser. Which is more important?


Parsing is actually a big issue for people who have to read and write programs; that's why most languages need complex rules for operator precedence and programmers have to memorize these rules or risk introducing subtle bugs.

In Lisp, you never, ever have to worry about operator precedence because the order of operations is explicit in the syntax; you just start from the most deeply nested parentheses and work outward. I'd say that's a worthy reward for learning to read code in prefix notation.
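
For instance (a trivial illustration of my own): infix 1 + 2 * 3 relies on a precedence rule to mean 1 + (2 * 3), whereas the prefix forms carry the nesting on their face:

    (+ 1 (* 2 3))   ;=> 7
    (* (+ 1 2) 3)   ;=> 9, the other reading, equally explicit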


I have a similar experience, but it applies to Haskell, too. I understand functional programming just fine, but I can't read the source code at all. I can slowly grind my way through a snippet and work out what it means, but I can't read it. I wonder whether this experience has anything to do with one's comfort reading mathematical notation - I basically can't, and skip over all the formulas when I'm reading a paper. It's at least 100x easier to just infer the principle from a working example.


Interestingly, I like Haskell for exactly the opposite reason: not only can I read it, but I don't have to. Haskell is the best language I know for getting the gist of some code at a glance.

This is the same advantage as mathematical notation has over paragraphs of text: I can get the general idea from a formula or diagram without reading it in detail. In a sense, I can infer the "shape" of the notation, which is what lets me avoid actually reading everything.

Getting to this point with both mathematical notation and Haskell took a lot of practice, but it's well worth it: both notations have exceptional information density and allow me to go through more information faster.

Sometimes, for some completely foreign ideas, I do have to read the mathematical notation/Haskell code in much closer detail. And this does take much longer than you'd expect: a single page can take something like half an hour or more. But this is not much of a surprise: if the notation was expanded to prose, it would take up several pages, and not be an easy read by prose standards either.


It's remarkable how differently adapted our learning mechanisms are. If you gave me the choice between one page of math symbols and ten pages of English prose, I'd take the prose and call it a bargain.


I get a distinctly Perl-esque vibe whenever I try to read Haskell. So many pieces of type-system plumbing with non-alphanumeric names and uncertain precedence.


What is the deal with haskell? It seems it has some sort of weird property where it causes seemingly rational people to invent crazy nonsense whenever they talk about it. I can't even imagine what type system "plumbing" would be, or how having a type system would make code harder to read. Would you mind providing an example so I know what you mean?


'<*>', '>>=', '=<<', '>>>' off the top of my head.

The first has something to do with Functors, the second two something to do with Monads, and the last something to do with Arrows, and I have no idea what their precedence is. The first three all more or less solve the problem of "use a function I'd normally use on values of type `a` on values of type `m a`", where `m` is a Monad or a Functor.

The reasons this stuff makes code hard to read (for me) are:

1. They're all infix and I don't know the precedence.

2. It's not restricted to standard-library code. Oftentimes, learning a new Haskell library involves figuring out which part of the infix line-noise namespace it's staked out for itself.


Applicative and Monad and their operators are really something I would consider required Haskell knowledge. It's not surprising that you can't read Haskell code if you don't know what those are, because they are everywhere by virtue of being such useful abstractions.

Granted, Haskell people seem to have a sometimes unhealthy attraction towards having operators for everything. The Lens package in particular is rather awful in this regard IMO. (Fortunately there are non-operator equivalents to everything)


>'<*>', '>>=', '=<<', '>>>' off the top of my head.

None of those have anything to do with the type system though. That's why I called your statement nonsense. They are just ordinary operators like + and -. The habit of claiming everything you dislike about haskell is somehow related to the type system is quite common and rather bizarre.

>The reasons this stuff makes code hard to read (for me) are

The reason is that you haven't learned haskell. If you had never learned arithmetic, then 5 + 3 * 7 wouldn't make sense either. That doesn't mean math is hard to understand; it just means you need to take the time to learn it.

>They're all infix and I don't know the precedence.

Use :info in ghci, or look it up on hoogle. Just like you would with a function you aren't familiar with.

    :info (<*>)
    infixl 4 <*>
Left associative, precedence 4. Addition is 6, multiplication is 7.

>It's not restricted to standard-library code.

Lots of languages let you write new operators. If a library creates tons of operators that reflects on the library, not the language.


How long have you been using haskell? I had the same sort of thing when I started with it. I can't really remember when exactly that went away, but I gradually just sort of gained haskell literacy over time just from practice.


I don't use Haskell, I just read about it.


Most Lisps aren't so great with arithmetic. After all, arithmetic notation has been refined over hundreds of years to be readable and concise, and I've never seen anybody use prefix notation for operators voluntarily. But infix notation can be added easily: this is a quick'n'dirty solution in Clojure with left associativity, no precedence and many gotchas. Incanter has a much better implementation called $= with fewer gotchas and many more features:

  (defmacro arith [& args]
      (letfn [(res [v] 
                (cond 
                 (vector? v) (apply arith-body v)
                 (list? v) (map res v)
                 :else v))
              (arith-body [s & args] 
                (reduce (fn ([expr] expr) ([expr [op arg]] `(~op ~expr ~(res arg))))
                        (res s) 
                        (partition 2 args)))]
        (apply arith-body args)))
 
    (defn deriv [f]
      (let [dx 0.001]
        (fn [x] (arith 
          (f [x + dx]) - (f x) / dx))))

    ((deriv (fn [x] (arith x * x))) 3)
Another common problem is the piping of results from one function to the next:

     (f 
       (g 
          (h a)))
where you usually want to start reading the code in the lower rightmost corner and go upwards towards the left. Very messy in many lisps.

Clojure has solved that problem with the threading macros, yielding postfix notation when you need it the most:

    (-> a h g f)
Piping subresults between functions really doesn't get much better in any language.
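
For what it's worth, a couple of concrete (if contrived) examples of my own, just to show the shape of the threading macros:

    (-> 5 inc (* 2) str)                                 ;=> "12"
    (->> (range 10) (filter odd?) (map inc) (reduce +))  ;=> 30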

And that's of course some of the appeal of Lisp: the syntax may start out controversial, but you can choose most of it yourself and make it depend on the problem, your tastes and, of course, your readers.


Well, Lisp has excellent arithmetic capabilities. Bignums, Floats, Ratios, Complex numbers built in.

Infix notation is provided by macros or read macros.

Infix notation is provided by math packages on top of Lisp: Macsyma, Reduce, Derive, Axiom, ...

Those provide REAL extensive math syntax capabilities, not just what typical programming languages provide.


Sloppy me. I would of course be referring to arithmetic syntax or notation. And you make my point exactly: there are many ways to express arithmetically heavy operations in a natural, readable way.


>Piping subresults between functions really doesn't get much better in any language

Piping operations like this can also be found in OCaml, F# and Perl6.

For example, in Perl 6 this is called a Feed Operator (http://perlcabal.org/syn/S03.html#Feed_operators) and your example would look like this:

  a ==> h ==> g ==> f;


Yes, many languages support something equivalent, not least classically object-oriented languages:

a.h().g().f()

I feel that prefix notation is one of the greatest impediments to the uptake of Lisps. And I think it's important to underscore that you can (idiomatically) choose prefix, postfix or infix depending on the character of your problem.

If the code is (unnecessarily) hard to read, then express it differently. The language encourages it.


Actually the equivalent of the above Perl 6 code would be:

    f(g(h(a)))
It can get kind of gnarly when any of the values are non-trivial.


Translated directly, yes.

But when you have objects and no piping operator, library designers will often choose to implement pipe flow as method chaining. See e.g. jQuery, Scala's collections or many ORMs, like SQLAlchemy.


I read more about Lisp than I program in it, and I feel just like you about the language.

Everything looks the same. Some time ago I read that Common Lisp had multiple return values. I was expecting something like Lua, where a function like this:

function fn() return 1, 2, 3 end

Is called like this:

a, b, c = fn()

But in Lisp, it's just the same old

(some-magic-words-here (other-things) (other-things))

Could I have some commas and equals signs to separate things at least? :-(

Then I heard various Lisps have classes, structs, objects and what not. Does it look like this?

{ name: value, name: value }

Or this?

class Name { Type name; Type name; }

No...

It looks like this:

(def-something ((something-here)) (something-else) (((something-else-2))))

If you are lucky you have a few :keywords or &keywords so your eyes have something to hang onto. Other than that, everything looks so... positional, and a closing parenthesis could mean anything from the end of a simple expression like (+ 1 1) to the end of the declarative part and the start of the executable part of a complex structure.

You don't even need to go that far to find these problems: just look at:

(let ((new-var-1 value) (new-var-2 value)) (some-code-here))

(if (condition) (runs-if-true) (runs-if-false))

You'd better use a very consistent indentation, because there is almost no visual separation between the "runs-if-true" and "runs-if-false"!


> Does it just take some serious perseverance? Or am I just Lisp-dumb?

I've had the same difficulty with Lisp, but I can read Forth and Postscript (another postfix language) easily. I am really not sure what that says about how brains get wired other than I encountered both postfix languages early in my career.


You crushed SICP and still aren't extremely comfortable with LISPs? That's worrisome. I'm in the process of working through SICP and hoping that there will be some kind of fundamental change in the way I think about code.


I can tell you that after going through only a few chapters of SICP, it radically changed the way I wrote code. I took the MITx 6.00X programming course afterwards and was incredibly uncomfortable using destructive methods (to give an example), even though I used to write and think code in that way.


I am with you on this. I wouldn't want to use this type of syntax. It obviously serves a different purpose from Java or Ruby, though; it might have to do with a different mindset.

I doubt that there is any syntactical advantage though. In the end it is just a matter of opinion.


It takes some time. You have not experienced Lispy syntactical nirvana yet. I was in your place when I started out. I kept on going. Now, even English seems artificially hard compared with s-expressions :D


It took me perseverance. Not that I'm some Lisp god or anything, but I can sure follow 90% of it from reading it. It's like learning Japanese when all you know is English.


Perseverance. Time and exposure are all you need. Eventually you will be comfortable with it. I know I am, and it probably took about a year.


A year! How did you have the patience to plod on for that long?


Time flies when you're having fun?

Maybe we don't realize how long it takes to acquire a new skill.

If someone had told me it would take a year to just become comfortable with emacs, I would never have picked it up. I already have sublime installed, and textmate!

But I picked it up because I respected people I knew who were wielding it with great skill, and I wanted to try org-mode for TODO lists.

I'm glad I did.


You think that's magic? Wait till you learn about "automatic differentiation," in which you can mechanically and exactly take the derivative of a computer program (function) without a kludge like setting dx=0.001. In the "old days" this took the form of a transformation of the (say, Fortran) source code. In dynamic languages it can be almost trivial to implement.

The relevant wikipedia article is here, but it makes the process sound way more complicated than it really is: http://en.wikipedia.org/wiki/Automatic_differentiation

Here's a blog entry about it: http://nibot-lab.livejournal.com/65962.html?thread=108202
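
To make "almost trivial to implement" a bit more concrete, here is a toy forward-mode sketch of my own using dual numbers, in Clojure (only addition and multiplication are defined, and all names are made up):

    ;; a dual number carries a value and the derivative of that value
    (defrecord Dual [val der])

    (defn d+ [a b]
      (->Dual (+ (:val a) (:val b)) (+ (:der a) (:der b))))

    (defn d* [a b]                                   ; product rule
      (->Dual (* (:val a) (:val b))
              (+ (* (:val a) (:der b)) (* (:der a) (:val b)))))

    (defn deriv [f]
      (fn [x] (:der (f (->Dual x 1.0)))))            ; seed dx/dx = 1

    (defn cube [x] (d* x (d* x x)))

    ((deriv cube) 2.0)   ;=> 12.0, exact, no dx fudge factor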


Most of SICP can easily be implemented in Python, since SICP focuses on computational ideas and not on language features (it doesn't use lisp macros, for instance).

The first 3 chapters should look good in most languages with support for higher-order functions. It starts to get tricky in chapter 4 since Lisp is homoiconic.

I started writing about learning SICP using python [1]. Maybe I should continue it.

[1] http://pedrokroger.net/2010/08/sicp-in-python-1-1-the-elemen...


Can we please avoid turning this into a "My favourite language can do this too!!"-thread?


A great foreign movie might not be as "great" given English subtitles, but I'll still appreciate its greatness when I have access to it in my favorite/native language.

Stretching the analogy..., since Lisp isn't my native language, I very much appreciated the Python "subtitles" provided by the grandparent.


Well, OP was talking about his experience back in 1983. Scheme has been around since 1975, so Python caught up 16 years later. Besides the fact that you can do that now even in C++, but that's not the point.


Except Python didn't really catch up. As mentioned below, Python doesn't have tail call elimination so you won't be able to do the same example with the factorial function.


Or even Perl:

    sub deriv {  my $f = shift;
        my $dx = 0.0001;
        return sub { my $x = shift;
            return ( &$f( $x + $dx ) - &$f( $x ) ) / $dx; }
    }
    sub cube { return $_[0] * $_[0] * $_[0]; }
    print( deriv( \&cube )->( 2.0 ) );
12.0006000100226

--------

"Lisp has all the visual appeal of oatmeal with fingernail clippings mixed in." -- Larry Wall (but see also http://www.perl.com/pub/2000/01/10PerlMyths.html#Perl_looks_...)


Larry Wall must be one of the most ironic people in the open source world, on purpose or otherwise. He also made a statement like "Lua doesn't have real OO, they just use a hash with a magic function" or something like that....


The actual statement Larry Wall gave wasn't that bad...

I thought Perl 5 had an oversimplified object system till I saw Lua. In Lua, an object is just a hash, and there's a bit of syntactic sugar to call a hash element if it happens to contain code. Thats all there is. They don't even have classes. Anything resembling inheritance has to be handled by explicit delegation. That's a choice the designers of Lua made to keep the language very small and embeddable. For them, maybe it's the right choice.

ref: http://www.perl.com/pub/2007/12/06/soto-11.html

Also IIRC Wall's original Lisp quote starts with him praising Lisp and then ends with this infamous|humorous quip.


This description sounds to me almost exactly like what Perl does with its OO (Perl's real OO; Moose and all that are libraries built on top of Perl).


It's very similar in that they both expose the OO inner workings but after that they're quite different.

Perl requires a blessed reference (which doesn't have to be a hash, but often it is the best option!) which only contains the object's state. So unlike a Lua object hash, the behaviour is maintained by Perl in dynamically scoped packages/subroutines (classes/methods).


But even Perl's barebones (non-Moose) OO has inheritance. Once you add in Moose, you now have an API to your OO which is unusual.

Personally, I think Lua is amazing. Rolling your own OO system in the language itself is pretty awesome, if you like that sort of thing. Everything has to be implemented in -something-, right?


or even TMTOWTDI :)

Here in perl5i (https://metacpan.org/module/perl5i):

  func deriv ($f) {
      my $dx = 0.0001;
      func ($x) { ($f->($x + $dx) - $f->($x)) / $dx };
  }
  
  my $cube = func ($x) { $x ** 3 };
  say deriv($cube)->(2);
And also in perl6:

  sub deriv (Code $f) {
      my $dx = 0.0001;
      -> $x { ($f($x + $dx) - $f($x)) / $dx };
  }       
          
  my $cube = -> $x { $x ** 3 };
  say deriv($cube)(2);


I had fun implementing "Why Functional Programming Matters" [1] in Python [2], and it turned out to be similar to what you have above.

[1] http://www.cse.chalmers.se/~rjmh/Papers/whyfp.html

[2] http://sage.cs.drake.edu/home/pub/62/


> First of all, a disclaimer: standard R5RS Scheme can not be executed efficiently. Period. Every implementation must bend the semantics of the language in more or less subtle ways to do a minimum of optimization and there are several approaches to do this[0]

In my experience, lisp developers care a ton about performance. Most common lisp and schemes provide numerous ways to get good performance. With lisp you get power, flexibility, and performance. It gets even better when you use something like chicken scheme, which is compiled to C and allows you to embed C code.

I run multiple web servers written in chicken scheme and they perform very fast, much faster than nearly any comparable web framework, like rails.

[0] From a chicken scheme guide on programming for performance: http://wiki.call-cc.org/programming-for-performance


I've heard that the Stalin compiler is very good, I don't know much about it though.


Stalin compiles a subset of Scheme.


Well, gee, thanks for reminding me AGAIN that I have this itch for diving into LISP that never subsides, no matter how strongly I put it off as overly academic or otherwise removed from reality. Upvoted.


In case you don't know about this and you want to scratch your itch... you can experience almost the same class this blogger did, in all its glory, for free. Video lectures with the same professors and authors of the famous book.

http://mitpress.mit.edu/sicp/


Maybe I'm an idiot, but I don't see a link for video lectures on that page.

Edit: But they are here: http://ocw.mit.edu/courses/electrical-engineering-and-comput...


Oops you're right, thanks.


You may want to check out Clojure.


I was actually checking out Clozure. Not a big fan of anything Java or Java adjacent.


Clozure is a good choice of Lisp to get something done. It's got a really neat FFI (Foreign Function Interface), including an Objective-C bridge that lets you talk with Mac OS X, so you can interface with C libraries to handle all those wheels you really don't want to re-invent.

That said, try to get over your Java prejudice with Clojure. Java libraries are nice.


Java libraries are nice and all, but I could sure do without java.lang.StackOverflowError.

I recently started learning Clojure and was floored when I found out it lacked tail-call optimization. I thought that was pretty much a requirement for a functional language. I understand that it's a limitation of the JVM and that there are workarounds, but they all seem kinda hacky, so I don't really plan to use recursion a whole lot in Clojure. On the other hand, its libraries for building and processing collections are so good that I don't think I will really need to.

No car, cdr, or cons is just weird though, for a language that's considered a Lisp.
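
For what it's worth, the usual Clojure workaround for the tail-call point above is explicit loop/recur, which the compiler turns into a constant-space loop (though it only covers self-calls, not mutual recursion):

    (defn fact [n]
      (loop [n n acc 1]
        (if (zero? n)
          acc
          (recur (dec n) (* acc n)))))

    (fact 20)   ;=> 2432902008176640000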



I already have an overly healthy dose of self-loathing since I'm mainly programming in PHP. I guess that's why I'm mostly staying away from Java ;-)

And thanks for the further insight on Clozure!


As I see it, OOP PHP 5 is Java without static types and without a proper standard library. I'd regard it as an upgrade coming from PHP. Check out the Play Framework.


And that's exactly why I'd like to try out something that is a lot more different, not just slightly different ;-)


Clojure is not Java. It runs on the JVM. It can use Java methods. But it is not Java.


That's my problem, too. There are a lot of cool languages for the JVM, but the target group are people who know Java, the JVM, and the whole Java ecosystem.


There's no reason to avoid the JVM ecosystem just because you don't like Java, especially if you've never done enough Java to develop a strong and well-founded hatred for its verbosity and excessive emphasis on mutability rather than for its unfamiliar object system and static typing, which tends to be what people starting with Java seem to dislike.

You can start to use Clojure effectively without ever importing a Java library. (Even without java.lang.*, although that gets implicitly imported anyhow). The rest, you can pick up as you go. It's not too bad.


It's a bit redundant with the article, but I think everyone should at least check out the Lambda the Ultimate papers written by the guys in the MIT Scheme group. In particular the "GOTO" one amazes me a bit because even today the "expensive procedure call myth" still exists somewhat.

http://library.readscheme.org/page1.html


The Mercury language (http://www.mercury.csse.unimelb.edu.au/) does one better. It can use the fact that multiplication is associative to automatically transform the non-tail-recursive function into the tail-recursive form.


For that matter, so can gcc.


Wow, I didn't know this. It's such a great feature, they really should advertise it more.

(Examining assembler output shows that a naïve implementation of fact() also gets the autovectorization treatment as well as a fair bit of loop unrolling. Very impressive.)


Fortunately there are a lot of these 'magical' languages nowadays; some that come to mind are Haskell and Shen.

And yeah, functional feels much cooler than imperative, who would've thought that the teenagers were right? Being lazy and doing everything at the last minute is cool.


...would it be wrong to point out that Shen is a Lisp too?


The cool thing is not postponing stuff. The cool thing is not doing anything you don't need to.


"As soon as the lecture ended I ran to the Scheme lab. Hewlett Packard had donated several dozen HP 9836 Chipmunks to MIT. Each one was equiped with a blindingly fast 8-MHz 68000 processor."

That brought back a lot of memories ;) We had similar machines (I'm sure they were newer as this was 1983 and my class was circa 1990) at the U of Mn where we did Motorola 68000 assembly and Scheme :) They were HP of some kind and I remember them!


The blog post mentions Lisp, but it should be pointed out that what they're using in that course is Scheme, which should not be confused with Common Lisp.


Both Scheme and Common Lisp are dialects of Lisp.


Which means little. Scheme and CL have been going in very different directions pretty much since the beginning. Idiomatic Scheme code and idiomatic CL code are very different.


Good point. OP just followed the style used by Sussman and Abelson, who use Lisp and Scheme interchangeably.


gnosis mentioned that the code in question is Scheme. FWIW, a Clojure transliteration of the deriv example is pretty trivial if that's the sort of REPL you happen to have close at hand:

    => (def dx 0.0001)
    #'user/dx
    => (defn deriv
         [f]
         (fn [x]
           (/ (- (f (+ x dx)) (f x))
             dx)))
    #'user/deriv
    => (defn cube [x] (* x x x))
    #'user/cube
    => (def cube-deriv (deriv cube))
    #'user/cube-deriv
    => (cube-deriv 2)
    12.000600010022566
    => (cube-deriv 3)
    27.00090001006572
    => (cube-deriv 4)
    48.00120000993502


Lisp is "cognitively" different from, say, c, Java, and python. It's like learning a new language. How long will it take you to be fluent in Arabic? It takes 5 years. But the best part is that after 5 years, you won't forget it.

I've managed to work for 10 years in Lisp. Then, I had to work in Java. After another 10 years of Java, I took up Clojure very easily.


"who cares about something as mundane as that? I want to do magic." I like how the word "magic" keeps poping up on each Lisp-related article I read. And the article don't even talk about macros!


"Any sufficiently advanced technology is indistinguishable from magic" -Arthur C Clarke

Whenever "magic" is mentioned in articles like these, you can probably safely substitute "higher-order abstractions that aren't available in the language(s) I'm accustomed to". Of course it's easier (and more fun) to say "magic".


The best use of "magic" I've seen came from a kids TV show where the protagonist would simply refer to new or mysterious things as "magic" even if the other characters offered him an explanation of how the thing worked. To me it was a perfect example of black-box abstraction.

It's magic because we know what it is doing, but the how is not intuitively obvious.


And, as long as we know how to invoke it reliably and properly, we don't __need__ to understand how. (Though, being nerds, we often want to.) I agree: "Magic" is one of the best shorthands for "We can explain that later but for now let's assume this works".


It's just Abelson and Sussman's influence; it's their chosen metaphor for explaining programming in 6.001 + SICP (called, appropriately, "the Wizard book.")


poor, poor prolog... (macros? unification is the real black magic stuff!)


I would just like to point out that the example is not doing calculus: it is not taking a function x^3 and returning a function 3x^2; it is using a numerical process to narrow in on an approximation of the result. It is a generally applicable method for anything that narrows in on a value. The same thing can be done in C with function pointers; whilst a great technique, it is not an argument for functional programming. I love functional programming and wish people would stop being entranced by maths examples.


LISP is S-expressions.

Mathematica is M-expressions, kinda...

Macros are considered harmful as they can lead to unmaintainable code.


It’s not macros that are harmful, it’s abuse of macros.


The iterative version's magic can be explained by Tail Recursion Elimination. Python creator Guido's post on why he left it out of Python might be an interesting read: http://neopythonic.blogspot.in/2009/04/tail-recursion-elimin...


This is so far my biggest disagreement with Guido and Python.

Comments like "Third, I don't believe in recursion as the basis of all programming" and that recursion "is just a nice theoretical approach to fundamental mathematics (turtles all the way down), not a day-to-day tool" make me wonder how long he was exposed to Scheme. For me, programming in Scheme is a wonderful experience, way more pleasant than in Python.

In Scheme it is really clear what is an efficient or inefficient piece of code. Whereas in Python, I find myself going back and forth trying to figure out what is happening behind the scenes to see if I can do it in a more efficient way.

Don't get me wrong, I will take a slow Python over something faster like Java any day. But I have Scheme at the top of my preferences, and that hasn't changed since I was introduced to it 17 years ago.


I'm not missing Scheme-like recursion from Python that much, since Guido borrowed features from Icon and other languages that help me do much of the crap that used to require nasty loops. (Love me some iterators!)


Maybe not, but perhaps you'll later run into something you are missing from the language and Guido (PBUH) won't add it. With Lisp (incl. Scheme), you can add it yourself.


I like Scheme, but I have this desire to be productive without reinventing the wheel or most of the automobile. If it's not in the core Python language, chances are it's a feature in one of the almost infinite selection of third-party modules. Or can be done in C and called from Python.


Maybe you should give Clojure a try, which is a pragmatic Lisp in my opinion. You don't have to reinvent the wheel most of the time since you have access to a vast array of Java libraries.


>Unfortunately Clojure is one of the few lisps without proper tail calls. (I love Clojure I just thought I should point this out in the context of the discussion).

Fortunately, thanks to Clojure being a Lisp, you can add it yourself:

https://github.com/cjfrisz/clojure-tco


Thank you for posting this! I wanted for a long time to see a concrete example of adding a feature to a Lisp language, I will study that code.


Unfortunately Clojure is one of the few lisps without proper tail calls. (I love Clojure I just thought I should point this out in the context of the discussion).


Iterators are great! I think I prefer thinking in terms of recursion because it helps me break a problem into subproblems better. That doesn't invalidate anyone's preference for iterators or generators. And you Python devs have nice list comprehensions as well, so I don't doubt that you have little real need for recursion.

But I will say it seems really weird to me to refuse tail-call elimination flat out. Compilers even do it for C as an optimization (http://llvm.org/docs/Passes.html#tailcallelim-tail-call-elim...).



Uh, recursion doesn't have to be the basis of computation. It certainly can be, but it's not how you have to think of stuff. Iterative algorithms are perfectly cromulent.


Guido mentions that you can't get an accurate stack trace in the presence of tail-call optimization. That alone makes it an absolute no-go for just about ANY production code (where ease of debugging is orders of magnitude more important than ease of writing in the first place).


That makes it sound like 1. you can never get an accurate stack trace, and 2. the stack traces you do get that are "inaccurate" are worthless.

I can't speak to every optimizing interpreter or compiler, but in the few that I've used, TCO doesn't do anything to code that doesn't have tail calls, so lots of the code has "accurate" stack traces.

And when it does perturb the stack trace, it does so in a very obvious way, it abbreviates the tail calls. Such stack traces no longer have a 1:1 mapping with the function "calls," but are still quite informative for debugging purposes. Not as informative as they would be if you turn TCO off just to debug that code, but informative enough that I rarely had to turn it off.


Never mind the fact that tail calls are just fundamentally loops, which don't generate any stack frames anyway!

If your application is complex enough that inspection won't reveal the source of the bug, then stack traces are almost strictly less useful than logging.


Tail calls can be used to increase the efficiency of loops simulated using recursion. However, not all tail calls are loops, e.g. void foo() { bar(); }

A good backtrace for a crash or exception can tell me the entire call stack. Logging at that granularity (i.e. every time a function is called) is not a good idea.


Exactly, it's not like a loop counter variable will show up in a stack trace (even if it is a Perl-style trace, with values, rather than a Java-style, line-numbers-only stack dump).

This whole debug-dump issue could be avoided by saving a copy of the initial entry frame for debugging, plus the current frame, with an ellipsis shown between them when TCE was in effect.


Lua does TCO and implements it with a special opcode that tracks the number of times a function is called, and you can get an accurate stack dump.

    function r(x)
      if x == 1 then
        return 1
      else
        return r(x-1)
      end
    end

    r(100000000)

    [spc]saltmine:/tmp>lua t.lua
    ^Clua: t.lua:6: interrupted!
    stack traceback:
            t.lua:3: in function 'r'
            t.lua:6: in function <t.lua:2>
            (tail call): ?
            (tail call): ?
            (tail call): ?
            (tail call): ?
            (tail call): ?
            (tail call): ?
            (tail call): ?
            (tail call): ?
            ...
            (tail call): ?
            (tail call): ?
            (tail call): ?
            (tail call): ?
            (tail call): ?
            (tail call): ?
            (tail call): ?
            (tail call): ?
            t.lua:10: in main chunk
            [C]: ?
    [spc]saltmine:/tmp>
But one reason I like TCO is for writing code to handle protocols. Create a state machine that describes the protocol. Convert each state to a function. A transition to another state is a function call. It works something like:

    function start() 
      i = getinput() 
      if i == A then
        return state_A()
      elseif i == B then
        return state_B()
      else
        return state_error()
      end
    end
    
    function state_A()
      return state_B()
    end
    
    function state_B()
      i = getinput()
      if i == MORE then
        return state_A()
      else
        return DONE
      end
    end
    
    function state_error()
      return ERROR;
    end
Because of TCO, each function call is effectively a GOTO. I've used this method to implement TFTP.
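
For comparison, a rough sketch (the function names are mine) of how you can keep the same states-as-functions shape in a language without TCO, such as Python: each state returns a thunk for the next state instead of calling it, and a small trampoline loop does the dispatch, so the stack never grows.

    DONE, ERROR = "DONE", "ERROR"

    def start(get_input):
        i = get_input()
        if i == "A":
            return lambda: state_a(get_input)
        elif i == "B":
            return lambda: state_b(get_input)
        else:
            return lambda: state_error()

    def state_a(get_input):
        return lambda: state_b(get_input)

    def state_b(get_input):
        i = get_input()
        if i == "MORE":
            return lambda: state_a(get_input)
        else:
            return DONE

    def state_error():
        return ERROR

    def run(state, get_input):
        # The trampoline: keep calling whatever thunk the current state
        # hands back, until a state returns a non-callable final result.
        result = state(get_input)
        while callable(result):
            result = result()
        return result

    # Example: run(start, iter(["A", "MORE", "STOP"]).__next__) returns "DONE".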


This is something of a pet peeve, so I'll reply. The only stack trace entries TCO eliminates are ones that are not only useless but actually impede debugging. Say 'rf' is a recursive function. If it dumped core, without TCO you would see something like 'rf rf rf rf ...'. With TCO, you would see 'rf' once, just as if it had dumped core while in a loop. Which is actually what TCO means: translation of recursive functions into functions that use loops, at the AST level. The result is code that's easier to debug (because it's not recursive at the stack trace level), easier to understand (because it's recursive at the code level), and that performs better (because it's a loop at the machine code level).

So yeah, you could make the case that it's not "an accurate stack trace", because it isn't. But, you don't want that. You want TCO. Trust me on this.

Preemptive strike for the bikeshedder crowd in the back :) : if you think you can deduce some useful fact from a recursive function that's stacktraced, like for example where in the loop the function gave an error, then you're wrong. Go learn to use a debugger.


> Which is actually what TCO means: translation of recursive functions into functions that use loops, at the AST level.

That's not correct. It's not TRO (tail recursion optimization), it's TCO (tail call optimization). So, as it's normally understood, any call in tail position would be eliminated, not just recursive ones.

In Python today, this program:

    def foo():
        bar()

    def bar():
        baz()

    def baz():
        raise Exception('OH NOES')

    foo()
will output:

    Traceback (most recent call last):
      File "raise.py", line 10, in <module>
        foo()
      File "raise.py", line 2, in foo
        bar()
      File "raise.py", line 5, in bar
        baz()
      File "raise.py", line 8, in baz
        raise Exception('OH NOES')
    Exception: OH NOES
If Python did TCO, you'd get:

    Traceback (most recent call last):
      File "raise.py", line 8, in baz
        raise Exception('OH NOES')
    Exception: OH NOES
Not wanting to lose helpful information like this is a valid cause for concern. I used to think Guido was wrong for not wanting TCO in Python but I've yet to see anyone give a clear example of what having it would let you express more easily than you can now with iterators/loops/yield etc.

TCO is a fine feature in languages where it's deeply ingrained and part of their natural idioms. Bolting it on twenty years later doesn't seem that helpful to me.


Off the top of my head I can think of a doubtless unsophisticated way to recover the "stack" in this situation: namely, the same way you recover the "stack" in the factorial example, by using an accumulator hidden in the interpreter. When foo calls bar(), the interpreter passes along, when it makes the tail call, an explicit list of predecessors (in this case ["foo()"]); when bar calls baz(), the invocation of baz receives ["bar()", "foo()"], etc.
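
A toy version of that idea, with the "accumulator" made explicit as an ordinary argument (in a real implementation the interpreter would thread it along for you):

    def foo(callers=()):
        return bar(callers + ("foo()",))   # tail call: pass the trail along

    def bar(callers=()):
        return baz(callers + ("bar()",))   # tail call: extend the trail

    def baz(callers=()):
        # The real frames for foo and bar could be gone under TCO, but the
        # breadcrumb trail survives in the argument.
        raise Exception("OH NOES (reached via: %s)"
                        % " -> ".join(reversed(callers)))

    # foo() raises: Exception: OH NOES (reached via: bar() -> foo())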


Some Schemes (like MIT Scheme) do this with a circular buffer of bounded capacity. Another Scheme interpreter (a little one by Jake Donham) had a switch to turn off TCO to help debugging. Yet another approach is full omniscient debugging (being able to go backward in time).


You are recovering the stack by creating another one.


Yeah. But you could decide how much of the stack to keep, at least. Nevertheless, I know that some smart people (I just can't remember who!) have addressed this issue in a significantly subtler way; I wish I could find the reference.


That seems to defeat the main advantage (not eating up lots of memory with deep recursion)?


Parent wasn't suggesting keeping full stack frames, but something like (file, function name, line number) tuples; that's not the end of the world. By making it a fixed-size list, you can ensure your memory usage won't blow up.
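
Something along these lines, say (a sketch of the bookkeeping, not any existing implementation):

    from collections import deque

    # Keep only the most recent N eliminated tail calls, as lightweight
    # (function name, line number) records rather than full frames.
    ELIDED_CALLS = deque(maxlen=64)

    def note_tail_call(func_name, lineno):
        ELIDED_CALLS.append((func_name, lineno))

    # A TCO-ing runtime could call note_tail_call() each time it reuses a
    # frame; the traceback printer could then show these records, Lua-style,
    # without memory growing with the depth of the "recursion".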


It's largely a red herring. The cases where TCO drops stack information are mostly the cases where you would've written the code as a loop in a language without TCO. So the equivalent is losing iteration history, which you don't have anyway.


I'm afraid this is just not true. With full TCO, if function A calls function B in tail position, and B gets an error, the frame for A will not be on the stack; it will in general be harder to tell how the code got to B.

Although TCO is often essential and I have written large blocks of code that depend on it, I have also long advocated better heuristics for when to turn it off so as not to impede debugging. The compromise taken by some Common Lisp implementations, where calls from one top-level function to another are not tail-optimized but local calls (to functions defined with LABELS or FLET) are, works very well most of the time in my experience.


Which is why I said "mostly." I don't think incidental tail calls come up often enough to be a major impediment to debugging. Moreover, oftentimes when non-loop tail calls do come up, I don't want to see the intermediate calls anyway. E.g. proxies or thunks.


I think it's a lot easier to ignore uninteresting stack frames than to reconstruct those that have been optimized away.

But for experienced programmers, I agree global TCO doesn't quite rise to the level of a major impediment, though I have found it an annoyance at times. It particularly bugs me when the code I'm debugging is not recursive, so I'm not getting any benefit from TCO in this particular case.

For novice programmers, however, I think it could be a serious barrier. I think van Rossum was right not to have Python do it by default. But I think he should also have provided a way to define a set of mutually-tail-recursive functions.

I have long experience with Franz Allegro CL, which does TCO only on local calls. I think it's the best of both worlds: debugging usually isn't interfered with, and on those occasions where I really want to write a set of mutually-tail-recursive routines, it gives me a way to do that. I have certainly also used CL implementations that do global TCO -- I think that's all of the major open-source ones -- and they're certainly usable, but I prefer the Allegro compromise.


Most heavily optimizing compilers do strange things to the code. You wouldn't want to debug an -O3 program, just as you probably wouldn't want to debug with TCO turned on.

Luckily, in the lisp and scheme families (possibly also other image-based PLs), it's easy to mix compiled (and optimized) and interpreted code. Whenever I run into a bug such that I need the debugger to fix it, I always replace the compiled function with an interpreted version first, making it much easier to debug. What you get in the stack traces is the same as you read on the screen.

In most lisp implementations, this maneuver can be performed completely online; you don't have to restart the program or recompile anything but the function under inspection itself. When the runtime signals an exception or fault, just tell lisp to interpret the suspect function. Then move up the stack to the function that called the suspect and restart that stack frame, et voilà, Bob is your uncle.


An optimized tail-recursive call is just a loop. As far as I know, there are no stack traces for loops in imperative languages.


That's bullshit, however, because:

(a) there are techniques to deal with that, and (b) where's the accurate stack trace from a state-maintaining while loop, or a trampoline?


I think you could choose a better analogy.

In any production code I would prefer performance over debuggability. Isn't that why C/C++ programmers have debug and release configurations: to turn off debugging features in production code?


I guess you never had to analyze stack dumps from optimized release builds.


OK, but you also get no stack traces when you use a loop. I am not really seeing how this is an argument against TCO, it sounds more like an argument against anything other than unoptimized recursion.


And that's one reason why loops are hard to debug!

Losing stack frames for recursive functions is not so bad. The real violence that TCO does is to non-recursive tail calls, which form the majority of tail calls, at least in non-functional languages.


Isn't that an argument against many optimizations, such as function inlining?

In any case, it's possible to have both, if the compiler outputs some (optional) extra debug information. For example, gcc has options that let you reconstruct a 'virtual' stack trace in gdb even for inlined functions.


But it's not true.

As someone said in the comments below Guido's post, providing meaningful stack traces in the presence of TCO is possible; it just requires a little more bookkeeping on the language's part.


Exactly, that's why you can't have loops in production code!



