Lisp vs Python (amitp.blogspot.com)
76 points by gnosis on Feb 20, 2011 | 89 comments



> It's easy to read. You can determine how to interpret something—a string, a list, a function call, a definition—just by looking at the code locally.

I actually find that Lisp scores some local-readability points over Python in a few cases, mainly because Lisp is more explicit about scope, e.g.

    (let ((a (something))
          (b (other-thing)))
      (frobnicate a b))
    ...other code
versus

    a = something()
    b = other_thing()
    frobnicate(a, b)
    ...other code
The former is more explicit about the intended lifetimes and consumers of the data "a" and "b". Lisp also has the "with a = (blah)" keyword for "loop", so the assignment can be made specifically in the context of loop preparation.

All in all I'm actually happier with my ability to signal my intent (and gauge other programmers' intent) in Lisp than I am in Python.


Python is explicit about scope by using the same indentation block. I can glance at Python code and immediately tell what the scope is because it's in the same indentation block (setting aside for the moment things like the global keyword, which tends not to be a problem in practice).


Can you increase indentation arbitrarily, though? Say I want to initialize some data for a loop that I won't need later; could I write:

    main_scope_stuff()
        c = create_context()
        for foo in foos:
            do_something(foo, c)
            do_other_thing(foo, c)
        accumulate(foos, c)
    more_main_scope_stuff()
to emphasize that c was only important to the loop and the call to "accumulate"?


Yes:

    main_scope_stuff()
    with create_context() as c:
        for foo in foos:
            do_something(foo, c)
            do_other_thing(foo, c)
        accumulate(foos, c)
    more_main_scope_stuff()
That signifies that "c" isn't supposed to be available after the scope of the "with" statement. The variable will still be bound, but conceptually it's unavailable, since it's not supposed to be used.

Also, the object returned by create_context() needs to support being used like that (i.e. it should define __enter__ and __exit__ methods).
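
For illustration, a minimal sketch of what create_context() would have to return for the "with" version to work (Context is a made-up name):

  class Context(object):
      def __enter__(self):
          # set up whatever the block needs; this is what gets bound by "as c"
          return self

      def __exit__(self, exc_type, exc_value, traceback):
          # tear down; returning False lets any exception propagate
          return False

  def create_context():
      return Context()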


> The variable will still be bound, but conceptually it's unavailable, since it's not supposed to be used.

Which, in all fairness, is the same as saying that this does not in fact create a new scope. If you're going to rely on convention, you might as well dispense with the 'with' statement altogether and use blank lines.

An explicit and safe (if not very elegant) way to put a variable out of scope is just deleting it after you're done with it:

   main_scope_stuff()
   if True:  # Let's say we _really_ want indentation here..
       c = create_context()
       for foo in foos:
           do_something(foo, c)
           do_other_thing(foo, c)
       accumulate(foos, c)
       del c
   # something_or_other(c)  <--- raises UnboundLocalError
   more_main_scope_stuff()
That said, if it got to the point of going to this trouble I'd just make a nested function instead.
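
Something like this, reusing the hypothetical names from upthread:

  def process_foos(foos):
      # c is local to this helper, so it simply doesn't exist outside
      c = create_context()
      for foo in foos:
          do_something(foo, c)
          do_other_thing(foo, c)
      accumulate(foos, c)

  main_scope_stuff()
  process_foos(foos)
  more_main_scope_stuff()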


That's nice, I didn't know about that, thanks. I'll definitely be using it the next time I have to use Python.

It's a little bit unsatisfying, since all of the context has to be wrapped in a single object, but that does cover a common class of Lisp macro use cases.


Of course it's possible to increase indentation arbitrarily: simply create a new function. Any programmer should know that too many immediate levels of lexical context are bound to confuse the reader anyway.

I'm not really sure what you're trying to show with that code, though. Is your objection that every do_ function is requiring a context?

Edit: I see what the parent was saying now, see bottom comment. Also, see StavrosK's comment if you're fine using Python 2.5 (released 2006).


I want to communicate, using indentation, that more_main_scope_stuff() and subsequent code do not depend on c. c is conceptually part of the loop, and not of the surrounding code. (Like the initialization statement in C/C++).

The code I wrote does this (because c is 1 indentation level deeper than the main_scope functions). I do not know if it is legal Python to simply increase indentation level to demarcate a new scope, but I suspect it is not, and it is certainly not common practice.

This is really a minor gripe - but then again so are the article's - it's pretty easy to tell a binding from a function call.


Ah, I see -- this is a really good point, thanks for bringing it up. You're right, one cannot arbitrarily increase indentation to create a new scope in Python. The most common way I've seen of demarcating this kind of structure is actually using a blank line. For example, the code would instead look more like:

  c = create_context()
  for foo in foos:
    do_something(foo, c)
    do_other_thing(foo, c)
  accumulate(foos, c)

  do_other_stuff
  
One could also throw the separate stuff into a new function.


> One could also throw the separate stuff into a new function.

That's the elephant in the room in all of these "my language is more readable" discussions, isn't it?


To be fair, that's what Lisp's let is syntactic sugar for, too.


This isn't true, consider:

  if 1 < 2:
      c = 2
  else:
      c = 3

  print c
(I don't know how to format comments that contain newlines, but hopefully you catch the drift. The 'print c' is at 0 indentation.)


Add at least two spaces at the beginning of each line and it will be treated as a preformatted block.

What you tried to do is not legal Python; the else clause can't go on the same line.

I get your point, though. Python's scope is not in terms of literally 'indentation' blocks. Scope units are: module toplevel, and (possibly nested) function and class definitions. To create a new scope you create a new function. This is what Lisp does too (you can define let as a macro over lambda).

I have oscillated between liking either way better and at present I find Python's sloppier way more practical.


Fixed, thank you for the tip. Yeah, the point I was making was that 'c' still exists after the if/else, showing that scope is not bound to indentation level. On the subject of scope, this behavior has gotten me a few times (I find it unintuitive, although I understand why it works this way):

  x = 2
  f = lambda : x
  print f()
  x = 3
  print f()


Yes, the first assignment works as a Scheme 'define', and all subsequent ones as 'set!'s. The whole function/class/module is a monolithic scope, inherited/superseded/extended by functions and classes defined within.
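
(For what it's worth, the usual workaround when you do want to capture the current value is a default argument:)

  x = 2
  f = lambda x=x: x   # the default is evaluated right now, so f keeps this x
  x = 3
  f()                 # still returns 2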


(f x) is not several different things. It is just that the meaning of (f x) depends on context and is only meaningful in context.

    (f x) alone is a call, a macro or function.

    (let ((f x)) ...) , here (f x) is an item in a binding list.
What a Lisp programmer sees is a pattern introduced by LET:

    LET ((var value)*) body
There are a handful of basic macro code structure patterns. Once they are learned and attached to the symbol of the corresponding macro, the understanding of code is much improved.

Reading Lisp code requires identifying the visual marker - the symbol in front. Depending on that, there are code patterns that can be visually destructured: lists, property lists, assoc lists, lambda lists, binding lists, ...

In other languages the visual markers are punctuation characters. In Lisp the marker is a symbol.

    {
       a := b + x;
    }
vs.

    (progn
      (setq a (+ b x)))
As a Lisp programmer I'm trained to see the PROGN, not the parentheses.

It is a bit like being afraid of riding a bicycle for the first time. How can one move forward and balance at the same time? Difficult. Once learned it is simple and hard to unlearn.

How can one read the various Lisp code patterns? Once you have learned them it is easy and hard to unlearn.


Many of the design choices in Clojure are aimed at resolving precisely these concerns. Quoted lists and literal lists are of course usually written '(f x) in most dialects, and Clojure uses [f x] for function arguments and binding lists.

The only concern that remains is that (f x) could be a function call or a macro invocation. This is entirely by design, and is in fact a core idea of Lisp that lets you build your own constructs with the same syntax as the language's native ones.


> Lisp seems to be optimized for writing code; Python seems to be optimized for reading it.

You know what is truly optimized for writing code and not reading it? Mathematical notation - and for good reason: most mathematical equations are written on paper. Mathematicians use one-letter variable names (i, j, n, m, x, y, z), various Greek letters, infix notation, and other things that often make mathematical equations incredibly hard to read, but easier to jot down on paper.

Almost all programming languages, such as C, Java, Python, and Perl, are influenced by this notation, which, although easy to write on paper, is not as easy to read and not nearly as easy to parse.

Lisp is not nearly as easy to write on paper - and it is often harder to type - but for any experienced Lisper it is the most readable programming language there is, hands down, because once you use it for long enough the parentheses fade away and you can read and write code without the mental overhead of parsing.


"Sometimes (f x) is [one of 6 things]"

I don't mean to be the next snarky Lisp guy in the room, but I don't see the difference between reading

    (if (= a 3) ...)
and thinking "that's a special form and a predicate", and reading

    if a == 3:
        ...
and thinking "that's a special form and a predicate". In either case there's "if". Writing "(f x) could be anything" is a straw man. As soon as you replace "f" with "if" the argument falls apart.

(No, you can't name a function "if" in Common Lisp. Go ahead, try it.)


Ah, but you can in Scheme. :)

    guile> (define if (lambda (. args) (display "Haha!") (newline)))
    guile> (if (= 3 4) 't 'f)
    Haha!


Cute! And you're right! Scheme should be kept away from people who are prone to hurt themselves, as well as nail clippers and letter openers. Too dangerous!

:)


> Lisp seems to be optimized for writing code; Python seems to be optimized for reading it.

Urgh... A hundred times no. I really do appreciate that the author seems to have given Lisp a try. But this is simply wrong.

Why does Lisp provide macros? Because they make code easier to read. I say stop focusing on what language abstraction (f x) represents and focus on figuring out what the code is doing.

Oh well, at least he didn't complain about Lisp having too many parentheses.


"Oh well, at least he didn't complain about lisp having too many parenthesis."

Humans are visual creatures. It really is easy to get lost in Lisp's parentheses.

Languages whose blocks are easier to identify at a glance really are easier to read.

The attitude of far too many Lisp developers seems to be that Lisp has failed to dominate after all these many decades because people are just too stupid to realize how great Lisp is.

If Lisp hasn't dominated by now, rather than arrogance and superiority, it would be better to think about why Lisp continues to fail to dominate.

By the way, I'm not suggesting you're one of those people.


> Languages whose blocks are easier to identify at a glance really are easier to read.

Which is why conventions for indenting Lisp code to show block structure have been established for decades. This problem has been solved for longer than Python has even existed. I don't know why people keep bringing it up. Lisp isn't perfect, sure, but this is a very weak criticism.

Besides, what about mismatched delimiter problems in languages like C that have shift-reduce conflicts? Dangling else problem (http://en.wikipedia.org/wiki/Dangling_else), anyone? It seems disingenuous to complain about mismatched parens in Lisp when those are trivially handled by most decent editors, and the problem is in no way unique to Lisp.


Python would argue that if the primary operating factor in readability is block structure and indentation one should simply add block structure and indentation as a mandatory part of the language and then remove cruft which is no longer necessary.


Python doesn't mandate block structure and indentation in lines within brackets and parens. Ask yourself why that is and you'll have the reason it makes sense in Lisp.

I'm very familiar with Python and hack occasionally in Scheme. After a really short time, parens really are no problem. They are an asset. In big functions where indentation stops being obvious at a glance (and in smaller ones, what's the problem?), I can more easily see/check the extent of a block in Scheme (with editor paren matching) than in Python. The editor is better able to help me select and operate on such blocks. Moving blocks around (e.g. for refactoring) is easier, faster and feels safer.

If I still use Python more often than Scheme it has nothing to do with parens. Much on the contrary, that's one of the reasons I wish I could spend more time in Scheme.


> Python doesn't mandate block structure and indentation in lines within brackets and parens. Ask yourself why that is and you'll have the reason it makes sense in Lisp.

Because they screwed up? I think indentation like the following should be mandatory across multiple lines:

  fun_a(long_arg_a,
        long_arg_b,
        long_arg_c)


And does that remove the need for parentheses? ;)

Because actually, it gets more complicated than that. Sometimes it makes more sense to say:

  def open_window(x, y,
                  width, height,
                  resizable=True,
                  caption=""):
Without parens, is x, y a tuple or two separate arguments? And do I want to remember that?

Also, what's inside the parens could have been a generator expression, which could have been conceivably partitioned in more than one sensible way, and which could have subexpressions with parens or brackets itself.

Sounds contrived? I do it all the time and it often looks and reads just fine.

I'm not arguing against your indentation rule, which is the one I use, and not even against the proposal of making it mandatory (although I'm not sure that in the heat of refactoring I'd appreciate the unnecessary SyntaxErrors), but about the point that making it mandatory would make parens unnecessary.


True, although Haskell does show we can do function application without excessive parentheses.

I think you're definitely right about Python, though. Fundamentally, Python is still a procedural, block style language. If it were written in the same functional style as Lisp it would probably be just as complex.


The ML and APL families also do function application without excessive parens.

I don't think Lisp's grouping rules are complex. There are a few functions with special grouping, particularly let and cond, but anybody doing real programming in Lisp gets used to them pretty quickly. (Lisp is hardly the only language with special cases in its syntax!) User-added functions seldom have elaborate nesting schemes, or at least have a damned good reason.


I'm not convinced that block structure and indentation are primary readability factors, just noting that Lisp programmers established conventions to address that problem long ago.


That's fair. I'm not saying Python's way is better or Lisp's way is better. I'm a long-time Python programmer but I'm also working on the Racket code base. I was simply pointing out a difference in philosophy, not taking sides. I mostly believe that the debate is a minor stylistic one and there are trade-offs in both directions. This is fundamentally similar to the way I see arguments for and against Python syntactical whitespace vs. Ruby-esque blocks.


> Languages whose blocks are easier to identify at a glance really are easier to read.

IMO, it's really pointless to toss about theories having to do with readability for this reason or that. Programming languages are also (sub)cultural entities. It doesn't matter how many clever reasons we come up with explaining why people don't like this or that about whatever language. Most of those things are unsubstantiated, so who gives a flying expletive?

What matters is that languages attract audiences of the size they attract. It's a fact of life. It's all we know until some serious psychological/linguistic research happens. Frankly, I'm not sure if enough people care enough about what coders think for it to happen. Maybe they should. In the meantime, we should just shut up and code. (Starting with me. Why did I write so many comments today?)


In Lisp one ends up reading code by indentation - much like Python. The parentheses fade into the background after just a couple of days of using Lisp. Matter of fact, I use a light blue background in Emacs and make the parentheses a light gray. I barely see them.


> Why does Lisp provide macros? Because they make code easier to read.

To the person writing the code, yes. To the next coder maintaining the code, not likely.


Do you have any data (even anecdotal) to back up your claim? My experience is that macros don't obfuscate code at all (unless misused; I'm talking about idiomatic use, of course). If I have the code loaded I just type (documentation 'macro-or-function-name 'function) to get its docstring. Slime can also show me its documentation and its argument list. Not to mention that a good macro (like any other well-written code) is obvious. On the internet Lisp is the most unmaintainable language; in practice it's pretty maintainable.


True, but I still find the loop macro confusing.


Here's an anecdote from the other day:

http://news.ycombinator.com/item?id=2240461

On another note, why do you think lisp isn't more widely used?



Why aren't Smalltalk, Prolog, Modula, ML, Eiffel, FORTRAN, Ada, APL, etc., more widely used? Because there can only be a couple of "mainstream" languages, and C/C++ took up all the mindshare in that period.

In fact, a more useful question is: "why is Lisp even used as much as it is?" The current Common Lisp standard is 20 years old, and yet it supports a pretty active if small user community, which is more than can be said of most of its contemporaries.


Well, that anecdote talks about some people _fearing_ powerful, expressive languages.

In a previous job I had workmates arguing for banning "nested" Python list comprehensions. And as an example of such horrible nesting he proposed something like:

  some_list = [some_function(blah)
               for blah in some_other_function()
               if some_condition(blah)]
What this says to me is that this guy didn't get list comprehensions at all and didn't want to. Sure enough, there's no nesting whatsoever going on here. He was just scared by non-C++-like syntax.
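
For contrast, a comprehension that actually nests another comprehension would look more like this (made-up example):

  matrix = [[1, 2, 3], [4, 5, 6]]
  squares = [[cell * cell for cell in row] for row in matrix]
  # => [[1, 4, 9], [16, 25, 36]]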

My point is: popularity, though desirable in itself, is a poor measure of anything else. Most people run away from good things for no valid reason.

(Not saying there are no valid reasons why Lisp is not more popular, only that none would really be needed. Obscurity is quite a stable default.)


Others already pointed out the AI winter. Reading about the history of Lisp is very interesting; it almost feels like you are reading about a lost civilization that was far ahead of its time. On the other hand, the AI winter is in the past, so why isn't Lisp used NOW? I think it's getting better: the Scheme people are working on sorting out their issues, and of course there's Racket, which is awesome. You have the Clojure people attracting a lot of Java and Ruby developers (among others) to at least give Lisp a chance. And even the Common Lisp world is on the rise. The biggest problem IMHO is that there are too many non-Lisp programmers with strong opinions on Lisp who spread FUD, even though all their Lisp experience can be summarized as "read a Scheme tutorial, and wrote a recursive factorial function".


Look through some really old Lisp code sometime. SBCL is highly readable despite being: 1) 25 years old and touched by tons of different people since then; 2) a program that does some really complicated things in a pretty small space (~60 KLOC for the arch-independent portion).

When you see (def-ir1-transform ...) in the SBCL source, it's immediately obvious what is about to come along (at least if you know about compilers). Similarly old C code (eg: GCC) is completely incomprehensible in comparison.


Isn't this true of all forms of encapsulation? Deeply nested type hierarchies, event chains, and even method calls can be nearly impossible to decipher, even with a graphical debugger, syntax highlighting and static typing. Writing maintainable code is a skill in any language. Lisp offers a lot of advantages to this end.


> Isn't this true of all forms of encapsulation?

Yes it is.


Have you ever actually programmed in Lisp? Macros can and often do hide all sorts of ugliness that would otherwise make the code a hundred times more unreadable. Much of Lisp's syntax sugar is built with macros.


Saying that LISP is for writing code and Python is for reading code is like saying that cars are for travelling uphill and bicycles are for travelling downhill.


While you were being facetious, I do think that I'd love a device that turned into a bike when I was going downhill and then a car when going uphill. At the very least... a motorized bike for going uphill (which obviously do exist).


You're right, though, even if you thought you were making fun of it.

Because that was the OA's point: if a developer wanted to optimize for going uphill, choose a car rather than a bicycle. Downhill? A bicycle will do that with fewer resources, less waste, cost, pollution, volume passed through, etc. Pick whichever is better given the context and optimization goals.


> Python on the other hand has no macros and doesn't give you much to write concise, abstract, elegant code. There's a lot of repetition and many times it's downright verbose. But where Lisp is nice to write and hard to read, Python makes the opposite tradeoff. It's easy to read. You can determine how to interpret something—a string, a list, a function call, a definition—just by looking at the code locally.

This is a great quote. If you replace Python with C and Lisp with C++, it's exactly what I've been trying to articulate (as a C fan) for some time now. In fact, I think it's much more applicable to C than to Python...


I also agree with the author's post script. Both are good languages that while overlapping a bit are probably best for different types of projects. I use Python when I need an existing library like NLTK or simple web apps hosted on AppEngine, Common Lisp for research, some semantic web stuff, and general NLP. I use Gambit-C Scheme mostly to write small natively compiled command line utilities (mostly NLP stuff).

It is all about getting stuff done, not insisting on using a favorite language, framework, or platform. Solving problems is what is fun and rewarding, not getting stuck on particular technologies.


This is similar to the way I compare Python and Lisp:

Common ground: first class functions, a lot of flexibility, closures

Lisp: macros, performance?

Python: easy to learn by example, readability, batteries included

I use Python day-to-day as I don't really need a macro system that much. I also disagree that Lisp is more elegant - beauty is in the eye of the observer.

Edit: formatting.


Beauty is subjective; elegance less so. Python is the language I've used and use the most, and I'll easily concede that Scheme is more elegant than almost anything else I've seen, and Python hardly even compares. Metaclasses, decorators, 'for'/'while'/'with' statements, etc. etc. In Python they are all good, practical, powerful ideas. In Scheme they're just unnecessary. Scheme is more elegant because it's been a priority in its design. Python aims to be more practical, beginner-friendly and straightforward.


Try Haskell. It has a strange mix of beautiful elegant core, and lots of (useful) syntactic sugar added on top.


I did try it, but not enough, I guess, to tell the beautiful core you talk about from the rest. So Haskell seemed to me full of elegant ideas, yet less elegant, as a whole, than Scheme.


Which Scheme are you talking about? I find the awful type system of base R6RS to be extremely inelegant. Contracts and strong typing make formal reasoning about programs far more powerful, hence more modern Scheme-style languages like Racket.


Good catch. I was talking about R5RS and earlier.


I don't know about that. Say what you want about Emacs (personally I'm not a huge fan) it does a pretty good job at syntax highlighting and auto-formatting Lisp source. I never had problems with reading well-written Lisp code in Emacs.


I especially like the author's post script. Fortunately, this appears to be the prevailing attitude at HN.


If that argument is valid, why not go a step further to, say, early BASIC implementations? Back in the day, there was no discussion possible about whether 'x' was an integer, float, or string. If it were a string, it would be x$; if it were an integer, x%.

I do not think his argument is valid. In both languages, once a program is large enough for any syntactic ambiguity to make a difference, there will be more than enough semantic clues to help resolve the ambiguities.

In other words: yes, one can write really hard to understand Lisp, where (f x) has six meanings in one line, but outside of obfuscated code contests, programmers just do not write such code.

Edit: the semantic clues I refer to are those implied by function names, names of data types, etc.



> The trouble is that you can't easily tell just by looking at (f x) how to interpret it.

Every serious development group is going to need a few patterns and coding standards, because newsflash: no language is perfect. All languages need to be used properly.

If you have good standards for naming, then a simple search should tell you exactly how to read "(f x)" in less than a second. The same goes for Smalltalk method names. If an extremely important method in your application code gets named "add:" well, guess what, there are already base class methods named "add:"! This means your senders searches and implementor searches -- which are fundamental to the way lots of expert Smalltalkers code -- are going to be polluted.

Also, the same thing can apply to function names in Python!

In dynamic languages, good programmers know names are very important. Before a good, experienced dynamic language programmer names something "foo", they first search for foo! They search for things that might be confused with foo! Programmers of dynamic languages who don't do this are either ignorant or inconsiderate. This post doesn't strike me as written by someone experienced or knowledgeable with development in dynamic languages. (And if he's experienced, he's had the bad luck of having been mentored in short-sighted groups.)

> There's a lot of repetition and many times it's downright verbose. But where Lisp is nice to write and hard to read, Python makes the opposite tradeoff. It's easy to read. You can determine how to interpret something—a string, a list, a function call, a definition—just by looking at the code locally.

This post takes no account of Don't Repeat Yourself and is seemingly written with no foreknowledge of this idea. Curious, since Python is powerful enough to get close to the ideal of Don't Repeat Yourself.

Strategy for creating a blog post that looks analytical and sounds good -- but only to sophomore programmers:

1) Pick a certain aspect Y of language X which can be abused

2) Give a detailed analysis for why Y is bad

3) Completely leave out any mention of known best practices for addressing Y

4) Compare to a more popular language Z. Leave out any mention of Y applied to Z.

This way, you can get some attention from sophomore programmers and provide them a way to feel better by giving them a good sounding reason to avoid studying X.

Really, this kind of sophistry is so common, there should be a name for it! (Actually, there is a way that Lisp can be easy to write and hard to read, but that has to do with macro expansion. But the tendency is for those with substantive Lisp experience to know about things like that.)

EDIT: To avoid distracting "ad hominem" nonsense. Note the argument is completely the same.

EDIT: Got read/write mixed up.


"What this actually reveals is that this poster is used to code bases not advanced to the point of Don't Repeat Yourself. He may not even be at an advanced enough level to know about this idea or fully appreciate it. Python is powerful enough to get close to the ideal of Don't Repeat Yourself. Evidently, this poster has no clue."

You realize that this poster is Google employee #7, responsible for "Don't be evil" (along with Paul Buchheit), and wrote the first prototype for Google Instant back in 1999?

Something else to keep in mind, from the article:

"When I read debates online, I have a bias towards the people who view these things as tradeoffs and a bias against the people who say there's only one right answer and everyone else is stupid or clueless ... When you're in a debate, consider that the other person might not be stupid, and there might be good reasons for his or her choices."


> You realize that this poster is Google employee #7, responsible for "Don't be evil" (along with Paul Buchheit), and wrote the first prototype for Google Instant back in 1999?

Why, are you arguing from authority on his behalf? In that case, I say he should know better! (That's a stand worth burning some karma on!)

tl;dr - If you are going to criticize language X, first take the time to learn the best practices of language X. If you don't, you're just creating more clueless noise of a kind which already exists online by the ton. Also, it seems quite lazy. All you have to do is get on IRC and ask, "Hey, I've been playing around with X. What about when this happens?" You might even learn something this way.

EDIT: Google employee #7? Makes me wonder if dozens of Google programmers have said something like, "Arrrgh, why did he have to name it that!? Did Amit write that code?"


No, I'm arguing against the ad hominem that forms the second half of your original post.

The first half of your post was great. It addresses the arguments directly, and I completely agree. In the second half, you then go on to make assumptions about the programmer's experience and aptitude - assumptions that I've shown are quite false. If you want to debate the merits of an argument, debate the merits of an argument, don't assume that everyone who argues with you is stupid. There's a very good chance that they aren't, and then you look like a fool.

Edit: On a regular basis, I whine about Craig Silverstein or Jeff Dean or Marissa Mayer's code. "Why did s/he have to write it like that? It's cost us untold thousands of hours in coding around their nasty hacks and poor design decisions." However, I recognize the context they were operating in: they were a tiny startup of about 20 people, trying to organize the world's information. They were working basically round the clock, and they changed the world. Sometimes, building something that works for users and won't crash your datacenter is worth the price of maintainability. You have to survive before you have the luxury of hiring people that will call you stupid.

Tradeoffs. They come with every hard problem.


"Why did s/he have to write it like that? It's cost us untold thousands of hours in coding around their nasty hacks and poor design decisions." However, I recognize the context they were operating in: they were a tiny startup of about 20 people, trying to organize the world's information.

Yes, but if you're lucky enough to be Google Employee #7, you should now have time to do things like actually do your homework before you post about Lisp.

Just because you're big-company coder [single digit] doesn't mean you're necessarily an expert. I actually expect a lot of startup code to be half-baked. Large swathes of the Smalltalk base image code are still egregious, and people have had decades to do something about it and haven't. There were also lots of sophomoric Java API decisions when it first came out.

I don't mistake seniority for expertise, and seniority or friendship with you or anyone in particular doesn't earn anyone a pass with me.

If he knows about but doesn't subscribe to Don't Repeat Yourself then he does a good job of sounding like he just doesn't have a clue and is in the practice of writing lots of boilerplate code. To me, that is a merit of a programming language argument. (In this case demerit.)

Please explain how you can read this post and think the author has a clue about careful name selection? Basically, his major point is predicated on clueless naming in dynamic languages.


Ad hominem is disqualifying an argument based on attacks to the one making it. The comment you refer to is drawing (perhaps wrong) conclusions about the author of the OP from what he says. Warranted or not, it's not fallacious.


[deleted]


Actually, you should re-read the "why articles like this are incorrect" part. There's nothing essential to the argument that depends on the expertise of the author. Primarily, it's a description of articles of this type, and the particular things they lack.

Which, in addition to drawing incorrect conclusions and ad hominem, is also circular reasoning. Triple fallacy.

Nice try, it sounded good, but you're just blowing smoke here in defense of your friend.


>Python on the other hand has no macros

I challenge this. I won't pretend that Python has full macro equivalence, but decorators are damned close.


Decorators are ways to wrap function definitions with code that gets executed at the time the function is defined. Commonly they add stuff that happens every time the function is called.

This is certainly something you can do with macros, but it's not even really close to the full semantics. Macros allow you to add new special forms to the language. If Python had macros, for example, the new `with` statements would have been a five line addition to the standard libraries.

I really don't see it as "damned close" at all.


Definitely.

Indeed decorators are a patch to a shortcoming in Python's ability to define anonymous functions or classes. If you could just say something like:

  f = memoized(lambda x, y:
                 ... some multiline function)
or

  C = coords_or_vectors(class:
                         ... some class definition)
you wouldn't need the kludge:

  @memoized
  def f(x, y):
      ...

  @coords_or_vectors
  class C:
      ...
Note that I'm not arguing for the syntax in the first two examples, just the ability to do the equivalent without special cases in the normal syntax of the language.

In Scheme you wouldn't need to learn the extra syntax, or learn and remember the order in which decorators apply (top to bottom? inner to outer?).

I was actually very happy when decorators got added to Python. But let's face it, they just give you a way out of a corner Python painted itself into. They're nothing to be too proud of.


"Damned close" perhaps should be qualified as from the point of view of a Pythonista, given the ideals of that approach to programming. I took issue with the specific point that Python "has no macros", and then I qualified it, knowing full well that Lisp macros do more.

Decorators ultimately result in a function that is a substitute for another function. That function can have all sorts of wonderful runtime behavior, including access to arguments, the call stack, and other functions. Dismissing them as "adding stuff that happens every time the function is called" misses an entire dimension of their utility.
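
To make that concrete, here's a sketch of the sort of runtime behavior I mean (log_calls is a made-up example):

  import functools

  def log_calls(fn):
      @functools.wraps(fn)
      def wrapper(*args, **kwargs):
          # the substitute function sees every call's arguments at runtime
          print "calling %s with %r %r" % (fn.__name__, args, kwargs)
          return fn(*args, **kwargs)
      return wrapper

  @log_calls
  def add(a, b):
      return a + b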


Sorry to be blunt, but it's only "damned close" from the point of view of someone that doesn't understand either decorators or macros.

Decorators are syntactic sugar for a particular mix of assignment and function call. They're just there so you only need to write the name of the decorated object (say, function or class) once rather than three times. For example

  @some_deco
  def f(a, b):
      return a+b
Is exactly the same as

  def f(a, b):
      return a+b
  f = some_deco(f)
Let me repeat: decorators only save you writing the name of the function two additional times. Which is nice and worth it, but not groundbreaking, and, more to the point, has nothing whatsoever to do with macros (except perhaps that in Python you couldn't do it without macros, but that's beside the point).

The equivalent to decorators in Lisp is just normal higher order programming (functions taking functions as arguments, and/or returning functions), with no particular syntax.

  (define f 
    (some-deco 
      (lambda (a b) (+ a b))))
It would be bad form to use a macro for something like this, which is firmly within the domain of normal function definitions and calls.

(P.S.: For short functions like this, you could have said, in Python:

  f = some_deco(lambda x, y: x + y)

But this breaks for multiline functions and class definitions.)


You're conflating decorator syntax, which I'm not discussing, and decorator functionality, which I am discussing. If there was no syntax for decorators and the only way to decorate a function was to use assignment/composition like in your example, I would still consider that to be a decorator.

A macro is going to be evaluated at compile time and will result in a matching pattern (often something that is structured like a function call) being expanded into some other construct that will be evaluated at runtime.

A decorator is going to be evaluated at compile time and will result in a decorated function being "expanded" into another function which will be evaluated at runtime.

No, they aren't the same thing. How could they be? The languages are different. But there is an underlying affinity.


> You're conflating decorator syntax, which I'm not discussing, and decorator functionality, which I am discussing.

Guilty as charged. I admit I thought you were talking about decorator syntax. Because decorator-the-technique makes even less sense as an alternative to macros.

Decoration is a technique of higher order programming (taking functions as arguments and/or returning them). The fact that you can use it at "compile time" (whatever that means in Python) is a property of the dynamic nature of the language.

Yes, that stuff is powerful. But Scheme has all that, and its designers still saw the need to add macros, mostly to do the things that you can't do by executing higher order functions at "compile time".

Macros are not essentially about doing stuff at compile time; they're about messing with the very syntax of the language.

My point is not just that decorators and macros aren't the same thing; it's that macros are meant explicitly to do what decorators can't.

I guess most Python programmers don't miss macros simply because they've never seen the need for them in the first place, not because Python has any replacement. The closest alternative that Python has to macros would be eval.
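
(To make that concrete, the eval/exec route looks roughly like this -- make_getter is a made-up example, and note that it's pasting source text together rather than working on syntax:)

  def make_getter(field):
      # build source text by string formatting, then exec it; this is the
      # crude Python stand-in for generating code
      src = "def get_%s(self): return self._%s" % (field, field)
      namespace = {}
      exec src in namespace
      return namespace["get_" + field]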

In my opinion, what ultimately allows most Python programmers who have been exposed to macros to almost never miss them is the syntactic sugar that the Python authors keep (judiciously) adding every release.


Can you expand on this? What is something that macros can do that decorators can't? And likewise, which common uses of macros can decorators handle?


One thing that answers both your questions at once is macros that create new control structures. Macros can take a chunk of syntax tree without evaluating it, whereas decorators can only manipulate functions; they can't be used inside a function.

That said, the usual answers to the question of "Why do I need macros" are slowly but surely being chewed through by Python. The relatively-recent (albeit years old) addition of "with" took another previously-macro-only use case away. I'm not sure what massive win for macros is left. If your Python is "downright verbose" you're probably doing something wrong. It may not always be the Absolute Shortest (TM) but "downright verbose" shouldn't come up often.
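
(Incidentally, most of what "with" covers can be written in a few lines with contextlib these days; acquire_resource and release_resource below are hypothetical names:)

  from contextlib import contextmanager

  @contextmanager
  def managed_resource():
      r = acquire_resource()        # hypothetical setup
      try:
          yield r                   # the body of the "with" block runs here
      finally:
          release_resource(r)       # hypothetical teardown

  # with managed_resource() as r:
  #     use(r)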


> I'm not sure what massive win for macros is left.

From my point of view, not being able to define new syntax is as serious a limitation as not being able to define new functions, or data structures.

There are a number of specific examples usually cited to explain why the ability to define new syntax is important. Answering these specific examples by adding more static syntax to the language seems to be severely missing the point, to me.

Imagine it was functions instead. "Well your language already has a sort() function, but say you want to use an in-place sort instead of what you get with the standard library? You need the ability to define your own functions." "No we don't, our BDFL added an in_place_sort() routine to the standard library for the next version of the language." That doesn't make sense to me at all.


> From my point of view, not being able to define new syntax is as serious a limitation as not being able to define new functions, or data structures.

There's a pretty big difference. Both are about DRY. But functions are about abstracting functionality. Macros are about abstracting syntax. Both are useful and can reduce repetition, but in the general dev community, I don't think it's clear that everyone is bought into abstracting syntax too much.

For example, the far more mundane operator overloading debates were about just this topic, and the side arguing that not all syntax should be abstracted largely won. This is one reason that, despite the fact that Lisp is one of the oldest programming languages and among the easiest to learn, it has never caught on in a big way. It tends to optimize for things a lot of people don't want optimized.


> [...] in the general dev community, I don't think it's clear that everyone is bought into abstracting syntax too much. [...]

This seems like you're saying "python is better because it does the things people like and doesn't do the things they don't like". But the same argument could be used to claim Microsoft Windows is a great operating system, and that Justin Bieber is a great musician.

> Lisp [...] has never caught on in a big way [...]

This is factually incorrect. Scaled for the size of the industry, it was at one time as popular as Java was a few years ago. What happened then is well documented elsewhere. Search for "ai winter" and "worse is better" to start.


First, I'm not making any claims that Python is better/worse. I'm simply stating that I think many devs (but certainly not all -- and probably even fewer exceptional devs) did not buy into a large degree of syntax abstraction. I personally like some of it, but there certainly are limits.

> This is factually incorrect. Scaled for the size of the industry,

Sure, if you scale for the size of the industry which found it useful. But that seems nearly tautological.


Okay, then I agree. Lispers use macros sparingly. But not having them at all pinches me terribly and I wish I could explain that better. Decorators don't help much with that pinching :)

I mean the size of the industry that programmed computers at all, not the size of the industry that programmed in Lisp.


"From my point of view, not being able to define new syntax is as serious a limitation as not being able to define new functions, or data structures."

Which is exactly my point. We've gone from "Well, I can use it to define a ternary operator" or "I can use it to define a 'with' block" or "I can create continuations" or a number of other specific examples to "Uh, well, I just like them, here I do this with them" and in my experience there's always an easy "Well, the Pythonic way to do that is this and it's not all that much harder, may even be easier, and is a standard part of a relatively simple language rather than your own hand-rolled thing".

It's a very different argument than it was ten years ago when I first got into Python. It's gone from really strong pro-macro points to personal opinion. I respect and acknowledge that opinion, no sarcasm. But it's not the same.

Your last paragraph isn't relevant to me because the specificity of your example is misleading. It's really hard to come up with an actual macro use case that is not covered by modern Python and is still actually a good idea. (It's trivial to come up with bad uses of macros not covered by Python, but I'm hardly crying about that.)


> It's really hard to come up with an actual macro use case that is not covered by modern Python and is still actually a good idea.

SQLAlchemy (and now MongoAlchemy) is begging for macros. See CLSQL for an answer.

Regular expressions are infinitely more readable as expressions in a dynamic syntax. See the rx package in Emacs for starters.

What if you want shell/Ruby style back quotes, which in essence create a closure that runs a batch job? See my SHELLSHOCK library (jfm3.org) for the Common Lisp extension. It's not much code at all.

What if you want large literal strings which respect the indentation of their surrounding code, without uglying things up with <<<EOF ? See my BOXEN library (also jfm3.org) for that Common Lisp extension.

What if you want something like Python's r"string", but you want more control over which things are escaped and which are not? See CL-INTERPOL for that.

What if you want anaphora in your control structures?

What if you want Lex/Yacc style parser generator specifications where the type of productions is extensible, and the productions themselves can be generated dynamically at compile time? (This will make no sense to you unless you've ever had no choice but to run m4 on your .l and .y files.)

I could go on. It's not hard for me.


As jfm3 points out, you're not going to define new language syntax with decorators. You're also not going to apply them on the spot, inline with other code, since they are defined on a function, and aren't used in the same way as macros are in Lisp.

What you can do, though, is define behavior that varies based on the context. For example, Python doesn't have function overloading (dispatch on argument types), but with a decorator you can effectively create that type of behavior. So while you're not changing the language, you are changing the semantics of the functions you decorate.
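
A rough sketch of what that could look like (dispatch_on_type is a made-up helper, not anything standard):

  def dispatch_on_type(default):
      registry = {}
      def wrapper(arg, *rest):
          # pick an implementation based on the type of the first argument
          fn = registry.get(type(arg), default)
          return fn(arg, *rest)
      def register(tp, fn):
          registry[tp] = fn
          return fn
      wrapper.register = register
      return wrapper

  @dispatch_on_type
  def describe(x):
      return "something else"

  describe.register(int, lambda x: "an int")
  describe.register(str, lambda x: "a string")

  describe(3)      # => "an int"
  describe("hi")   # => "a string"
  describe(3.5)    # => "something else"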


This post has some interesting points, but I would really appreciate some concrete code samples.


Without reading the article, can this be summed up as "People actually use Python"?


No. It's something entirely different, which you would know if you had read, or even just skimmed, it.


In general, any time you find yourself beginning to write a comment that says "I didn't read the article" is probably a good time to just click on the article, or go back to the news feed, or close the tab and move along.



