Hacker News
Sweet-expressions: A readable format for Lisp-like languages (dwheeler.com)
85 points by evanrmurphy on Nov 26, 2010 | 81 comments



Somebody has invented a more conventional-looking syntax for Lisp every couple of years since John McCarthy first suggested M-expressions. Nobody[0] has ever cared enough to use any of them.

People who "get" Lisp get over any aversion to the syntax, and usually even come to appreciate it. People who can't get over the syntax never actually get Lisp.

[0] For large values of nobody


People who actually get Lisp understand that S-expressions and surface syntax[0] are orthogonal. "Ubiquitous parentheses" are one way to represent them. Sweet expressions are another.

There is not a single Lisper on this planet who thinks of "ubiquitous parentheses" as the best possible syntax. Not even the seeming fanatics of Lisp's "regular syntax". Proof: no one in her right mind (not even you) would prefer (quote x), or even (' x), over 'x. 'x is an irregularity that makes the whole thing terser while using up less cognitive power. In other words, it's better.

Now why stop there? You could try and remove more parentheses by using a tab syntax, or introduce a few other irregularities, or both. There's a good chance that we could come up with something better than the currently widely accepted surface syntax.

Conclusion: it's perfectly possible to get Lisp and not surrender to its standard syntax, because it may just not be the best. Besides, I'd think twice before suggesting that David A. Wheeler doesn't deeply understand Lisp.

Now, given that the standard syntax is the only widely used one, one of course has to get over it to have a chance of understanding Lisp. But if that's all you meant, you hardly said anything at all[1].

[0] When you wrote "syntax", I assumed you meant surface syntax.

[1] http://lesswrong.com/lw/jb/applause_lights/

Edit (addendum): By the way, I'm surprised this issue isn't settled yet by structure editing. With a structure editor, programmers could choose the surface syntax they like independently of the underlying S-expression tree. (Incidentally, this could apply to any language, though with more difficulty.)


I'm sure David A. Wheeler gets Lisp - probably more than most of us. The creators of alternate syntax proposals almost always do, but they're usually not the intended market for their own proposals. Alternate syntax proposals are almost always written by Lisp proponents who want to make Lisp more accessible to others.

In the article, Wheeler says the goal is to "provide a better notation that others can read". To quote pg: "Historically, languages designed for other people to use have been bad"[0]. I've found that to be true with most alternate syntax proposals I've read, including this one. I do not find it as readable as the original Scheme, nor is it as readable as Python.

[0] http://www.paulgraham.com/javacover.html


"Now why stop there? You could try and remove more parentheses by using a tab syntax, or introduce a few other irregularities, or both. There's a good chance that we could come up with something better than the currently widely accepted surface syntax."

The constraint is that the grammar of the resulting syntax has to be capable of expressing everything expressible by S-expressions, has to be unambiguous, and has to be pretty-printable back into a canonical form. The last point alone is pretty hard to do, but if you actually look at the grammars of a lot of programming languages, very few of them aren't ambiguous!


I would really like to know more about these past proposals for conventional-looking syntax. I already know of a few, including the OP and SRFI 49 [1]. Could you offer any additional reading on the subject?

Some quotes from a different page on the OP's site [2] suggest that even some experienced lispers can see value in conventional syntax:

> I've used Lisp my whole programming life and I still don't find prefix math expressions natural. - Paul Graham

> After 13 years of doing Lisp and 3 or 4 years of Python, I agree: I prefer writing Lisp, but Python is easier to read. - John Wiseman

Personally, I hold no aversion to the parentheses. But I can see some advantages/uses for a more conventional syntax. Also, it's a fun problem. :)

---

[1] http://srfi.schemers.org/srfi-49/srfi-49.html

[2] http://www.dwheeler.com/readable/


Shriram Krishnamurthi (author of PLAI http://www.cs.brown.edu/~sk/Publications/Books/ProgLangs/) recently suggested one: http://shriram.github.com/p4p/

I've been meaning to go through it in detail, but to be honest I really like s-expressions, really really. I'd even say the regularity of syntax is one of lisp's greatest features (even Clojure's [] and {} literals are going too far for my taste...)


Thanks for that pointer. I never expected Shriram to propose one :) The parens are quite acceptable if you use the syntax coloring options in DrRacket to make them light grey, which is what I do. IMO, P4P is just not worth the learning delta, particularly because it doesn't solve the single biggest readability issue - math expressions - which exists even if you make all parens invisible.

In my company, I came up with one myself (http://code.google.com/p/muvee-symbolic-expressions/wiki/Tab...) when some aesthetic objections arose to the use of (paren (laden (scheme))) as a scripting language for our product. It is quite telling that we ended up not using the tab syntax in the end, though it is still available in the product :D


Just out of curiosity, I created two gists with the P4P examples coded using the tab syntax I mentioned -

tab-syntax version: https://gist.github.com/716416

scheme-syntax version: https://gist.github.com/716417


The code as data as code as data ... idea is a feature of core Lisp. Not everybody 'gets' this feature and not everybody 'needs' this feature.

Other than that there have been many conventional syntaxes derived from Lisp: Logo, ML, Dylan, ...

What most of these don't have is the code as data as code as data ... support or at least it gets more difficult.

If the representation via s-expressions is not needed, other syntaxes are not that difficult to implement...


Because the people who "get" Lisp are almost always theoreticians who know what they want to code before they code it. They use Lisp as a write-only language. Meanwhile, working programmers avoid it, because it's very hard to step into someone else's Lisp codebase and understand it (and not just because of the fact that anything could mean anything due to macros—code is already visually indistinguishable from data without them.)

Languages, just like any other computer software, have a User Experience: the experience of programming in them. For example, the usual justification people give for why they like Ruby is that it has a good UX. There's no theoretical purity there, they just say it's "fun to code in."

Lisp, meanwhile, does not obey any of the principles of HCI that have been discovered in the last 30 years (how could it? It was invented before them!) There's a large gulf of evaluation stopping a programmer new to a Lisp codebase from understanding what their changes will do to it. UI design is about human psychology, not theoretical purity; solving this problem might seem like "papering over" the purity of the language from a theoretical perspective, but when doing actual comparative usability studies[1], it would be clear that the "papered-over" version would have something going for it that the theoretically-pure version does not.

[1] A comparative usability study on a language is very simple to perform: you basically print out some code samples, let people with no experience in the language (but general programming experience) read them for a set time period, and then evaluate their comprehension.


> Because the people who "get" Lisp are almost always theoreticians who know what they want to code before they code it. They use Lisp as a write-only language.

There is a fairly prominent Lisper who argues the exact opposite: that Lisp is the ideal language for coding before knowing what you really want to code. I'd suggest reading Paul Graham's essays on the subject as a basis for discussing this particular point before going any further.

> For example, the usual justification people give for why they like Ruby is that it has a good UX. There's no theoretical purity there, they just say it's "fun to code in."

Ruby's usability is pretty nice, but it has some seriously cool theoretical ideas, too. It's a few steps beyond just putting a prettier interface on, e.g., Java. It may not be the most original language, but it's popularizing a lot of things most "working programmers" haven't encountered.

> A comparative usability study on a language is very simple to perform: you basically print out some code samples, let people with no experience in the language (but general programming experience) read them for a set time period, and then evaluate their comprehension.

That seems like a questionable metric for the quality of a programming language.


> There is a fairly prominent Lisper who argues the exact opposite; that Lisp is the ideal language for coding before knowing what you really want to code. I'd suggest reading Paul Graham's essays on the subject as a basis for discussing this particular point before going any further.

I don't know if you're assuming here that I don't actually use Lisp; I do, and have, for at least six years, after reading those very essays. I still find it to be write-only, but I don't think I'm communicating what I mean by that clearly.

When I say "write-only", my meaning is "requires extensive amounts of external mental state for interpretation." An unfinished painting is write-only: while you're painting, you can experiment, yes, but that's because the "random-access" bit that you're experimenting upon is not what's on the canvas, but rather what's been loaded up into your mind. If you leave off a painting for some months and then return to it, or if someone else tries to finish your painting—well, they will need to spend days staring at it to dissect the original goal of the work, the original style and methodology, before they can proceed. It is the same with Lisp: the text of a Lisp program is the canvas, while the "effective rules" of the program end up reflected mostly in the programmer's mind. To resume a Lisp program after some months is no easy task, any more so than rebuilding a filesystem index purely from its on-disk data records.

> Ruby's usability is pretty nice, but it has some seriously cool theoretical ideas, too. It's a few steps beyond just putting a prettier interface on i.e. Java. It may not be the most original language, but it's popularizing a lot of things most "working programmers" haven't encountered.

There's a large difference between "theoretical ideas" and "theoretical purity." A theoretically-pure language can be held in the mind all at once, because it only has a few key concepts. Lisp, Smalltalk, Forth, Lua or Io, etc. are theoretically-pure, while, say, Perl, while containing many amazing and useful theories, is theoretically a leftover surprise. Theoretical purity is good for the usability of a language: three tools in your toolbox are better than three-dozen, if they each solve all the same problems just as efficiently. However, it seems like the more theoretically-pure a language becomes, the less concerned its practitioners become with actually evaluating or developing the UX of the language. This is, I think, why Go is being mostly rejected by working programmers, for example: C, as much as it doesn't seem like it, is a theoretically-pure language (as in assembler, it's all just memory addresses which either hold values or other memory addresses), and Go is purely a UX enhancement over it—but the systems programmers are so used to C-like syntax for their systems programming that they won't stop to consider alternatives that enhance long-term usability (such as, say, slices.)

> That seems like a questionable metric for the quality of a programming language.

Well, I did say "basically." :) Think of the sort of "reading comprehension" test you might have had to do in elementary school. To be scientific, you have to control for the subjects' exposure to different linguistic paradigms, the content of each code-snippet, etc. In fact, it would likely be easiest to test with two entirely made-up languages, each of which differ by only one syntactic element X, and see how comprehension varies on X. But I digress: it was just an example off the top of my head, because there are currently no metrics for programming language quality whatsoever, and any metric, no matter how bad, is better than none. If you have a better idea, then we should use that better idea, but what we shouldn't do is pretend the usefulness of a programming language[1] is completely subjective, and that they can't be compared.

[1] I do not use "programming language" to mean "platform" or "standard library" here; merely the mapping from syntax to semantics. LISP-style Lisp is the degenerate-case of a programming language, with a one-to-one mapping from its syntax to the compiler's AST structure.


I am by no means a theoretician and I 'get' lisp. I do maintenance and extension on a code base that stretches back to the late 80s. It is a product that runs on windows, has a large user base, and is still gaining customers (though in a decidedly niche market).

It appears to me that you do not have very much practical experience writing code in lisp. You are understanding the 'code is data' statement in completely the wrong way.

I store and treat my 'data' in exactly the same way that anyone would in perl or python or ruby. I don't think I've ever had a problem distinguishing code from data, when I had the desire. (I keep it in config files, or data files, or in variables).

Code is data, to me, means that I can take some data, and transform it into code. Code is a description of how the world is, at some level, and so is data.

(For example, I could take a description of the hierarchy of a website's rest service, and write a function to transform it into a series of functions that do the proper requests, in the proper formats, with the proper parsers).
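The idea the parenthetical describes can be sketched as follows. This is not the commenter's actual code (and it's in Python rather than Lisp, where a macro would be the natural tool); the API spec, names, and helper are all invented for illustration:

```python
# Hypothetical sketch: a data description of a REST hierarchy is
# transformed into callable request functions. In Lisp this kind of
# data-to-code transformation is what "code is data" enables.

API_SPEC = {
    "get_user":   ("GET", "/users/{id}"),
    "list_users": ("GET", "/users"),
}

def make_client(spec, send):
    """Turn the data description into a dict of request functions.

    `send` is whatever actually performs the HTTP call; it is
    injected here so the sketch stays self-contained.
    """
    client = {}
    for name, (method, path) in spec.items():
        def call(method=method, path=path, **params):
            # Fill URL parameters from keyword arguments, then delegate.
            return send(method, path.format(**params))
        client[name] = call
    return client

# A stub "transport" so the example runs without a network:
client = make_client(API_SPEC, lambda m, p: (m, p))
print(client["get_user"](id=42))  # ('GET', '/users/42')
```

The same description could just as well drive the generation of parsers or documentation, which is the point being made: the data *is* a program, once you write the transformer.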

How can you argue that lisp does not obey any principles of HCI simply because HCI wasn't invented when Lisp was? It doesn't make sense as an argument. If I don't have a name for something, does it not exist? Gravity didn't exist until Newton described it?

What HCI principles does lisp violate? (I find the idea of HCI principles dubious. Isn't good HCI 'whatever scores well in usability tests').

Lisp is not theoretically pure. It is theoretically empty. It makes no assumptions about what the 'proper use' of it is. The whole point of abstraction, to a programmer, is to 'paper over' the actual language and make the program a description of itself. Lisp is amazing at abstraction. (By amazing I mean the unqualified very best).

----

As an addendum. Your comparative usability study[1] is horribly flawed, because you assume programming experience. You should assume no programming experience. The majority of programmers have experience in an Algol derived language. This introduces a huge bias into any study that you would decide to do. Anything that looks and acts like Algol will be easily deciphered, anything else will not be.


> A comparative usability study on a language is very simple to perform: you basically print out some code samples, let people with no experience in the language (but general programming experience) read them for a set time period, and then evaluate their comprehension.

That is just one axis of usability, optimizing for comprehension by someone unfamiliar with the language. Usability is also optimizing for the experienced programmer. Some designs do a good job for both, some work well for one at the expense of the other.

Hef Raskin had some interesting things to say about this:

http://www.asktog.com/papers/raskinintuit.html


Also, people with 'no experience in the language', but with 'general programming experience' will be primed with the conventions of mainstream languages and biased against s-expressions.

(Btw, it's Jef Raskin.)


By "general programming experience," I didn't mean "mainstream programming experience," but rather "having been given a survey of techniques for programming in iterative, functional, OOP, declarative/constraint-based, concatenative, etc. languages, without having learned a single one." In other words, someone who was taught to program for the purposes of the test.


How do you learn to program without learning any programming languages?


I have to say I'm not impressed by the link.

Intuitiveness is indeed highly correlated with previous human experience. But aside from pre-existing interfaces, an interface designer has available the whole of an average person's real-world experience - the mouse is intuitive because it mimics a person's experience of moving a physical object; it doesn't mimic it exactly, but closely enough that a person can use their existing experience to extrapolate. And an interface designer who wants to improve a standard interface may be able to find a different facet of human experience analogous to the task at hand, and thus (contrary to the link) it is possible to create interfaces which are new and intuitive (but I'll admit it's damn hard and you should have a reason).

Most computer languages leverage human experience with natural language - not exactly, but partly. That's why Ruby's "return 5 if done" syntax seems really nice. Thus I'd argue you can make a reasonable study of what constitutes an intuitive interface.

So I claim the intuitiveness of Lisp's parentheses is a valid question for debate.


Of course familiarity by way of similarity to previous experience is a valid question for debate. My claim is that it is just one of several considerations for judging the merit of an overall design, and not necessarily the most important one.


Lisp is considered a language with a high strength in prototyping. I'm not sure why you think it's a write-only language.

I like Lisp because it is wholly unambiguous in visual syntax. That is not something I find in languages such as Perl, Python, Ruby, C++, Haskell, or C. I do find unambiguity in assembly as well.

I've also found that the general idea of "build your own Lisp" as a reason for lack of comprehension is specious when stacked up against the Java or .NET frameworks. You have to know the framework calls to understand what's going on. The same applies for C++ template metaprogramming.

I don't consider that a criticism; I consider that an acknowledgement of the ability of programming languages to provide abstraction.

I am a working programmer, and I like Lisp.


"A comparative usability study on a language is very simple to perform: you basically print out some code samples, let people with no experience in the language (but general programming experience) read them for a set time period, and then evaluate their comprehension."

That's just stupid. If the programmer doesn't understand English and the program is written with English identifiers, does that fail the test? What about APL and Perl?

"Because the people who "get" Lisp are almost always theoreticians who know what they want to code before they code it."

And you seem to be a theoretician who has never had to maintain any Lisp code.


> That's just stupid. If the programmer doesn't understand English and the program is written with English identifiers, does that fail the test? What about APL and Perl?

It's called a comparative usability study. The same person looks at the same code sample written in two different languages, A and B (in random order), given time T to absorb each one (where T is much less than the time it would actually take to fully absorb the code), and then the references are removed and they are given tests on the properties of sample_A and sample_B (again in random order.) The score is not a measure of either A or B individually, but rather the ratio A:B, derived by the ratio of correct questions in sample_A:sample_B. A:B is then handed to a Bayesian neural network as a confidence score.

Thus, if the test-taker doesn't know English, they get the same (low) score on both tests—to the BNN, that is the same as "no new evidence."
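The scoring step described above could be sketched like this. A toy illustration only: the answer keys and responses are made up, and the Bayesian-network step is omitted:

```python
# Toy sketch of the per-subject A:B comprehension ratio described
# above. Everything here (questions, keys, answers) is hypothetical.

def comprehension_ratio(answers_a, answers_b, key_a, key_b):
    """Ratio of correct answers on sample A vs. sample B."""
    correct_a = sum(a == k for a, k in zip(answers_a, key_a))
    correct_b = sum(b == k for b, k in zip(answers_b, key_b))
    if correct_b == 0:
        return None  # degenerate case; equal low scores otherwise give 1.0
    return correct_a / correct_b

# One subject, five multiple-choice questions per sample:
ratio = comprehension_ratio("ABCDA", "ABCDA", "ABCDD", "ABDDA")
print(ratio)  # 4 correct on A, 4 correct on B -> 1.0
```

A ratio of 1.0 (the same score on both samples) is exactly the "no new evidence" case: the comparison only tells you something when the two language variants produce different comprehension.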

Also, you wouldn't be comparing, say, Java to APL. You'd be comparing Java to Java + "an unless statement", or APL to APL + "enforced whitespace." You want to see the relative worth of language features, not the relative worth of our current languages, which have mostly been slapped together with no concern for UX. The result of such a study would be a set of UX suggestions, of language features that comparatively increase, or comparatively decrease comprehension.

> And you seem to be a theoretician who has never had to maintain any Lisp code.

For the last eight months, I've been writing Clojure for a living. I've written quite a bit of CL before that. I still find it harder to come back to any Lisp project after a month or two than to come back to code in any other language.


"Also, you wouldn't be comparing, say, Java to APL. You'd be comparing Java to Java + "an unless statement", or APL to APL + "enforced whitespace." You want to see the relative worth of language features, not the relative worth of our current languages, which have mostly been slapped together with no concern for UX. The result of such a study would be a set of UX suggestions, of language features that comparatively increase, or comparatively decrease comprehension."

This is still meaningless if you're testing it on someone who doesn't know APL. Having an "unless" statement in your example is exactly like testing for English comprehension. This technique would make sense if you're testing proposed design choices for changes to APL on APL programmers (here's 60 seconds to read this program; what do you think it means?), it would make sense if you have equivalent programs in different languages being tested on users of those languages ("users of Perl understood program X 10% better than users of APL given 60 seconds to examine it"), but it is meaningless in the context you provide.


You ignored the most important part of what I said.

> Thus, if the test-taker doesn't know English, they get the same (low) score on both tests—to the BNN, that is the same as "no new evidence."

So, of course, you wouldn't test APL vs. APL+whitespace on someone who doesn't know APL, because your test results would always, predictably, be 1:1. But you wouldn't be testing APL vs. anything—because APL is a set of design choices that have already been made, and anyone who knows APL would already have taken the time to absorb those design choices. It would be most scientifically rigorous, as I said, to start with a made-up programming language (a random combination of language features that has no overarching design principle which users could recognize to reduce cognitive load), teach it to the users, and then do the tests on that language, vs. that language w/ modifications.


> because APL is a set of design choices that have already been made

Compare J, Q, and Dyalog. Three modern APL dialects, very different in feel.

You might protest that J or Q aren't APLs (they only use ASCII! Q has ragged arrays!), but that's like arguing Scheme isn't a Lisp.


"a random combination of language features that has no overarching design principle which users could recognize to reduce cognitive load"

Thank you for walking right into my trap.

The reason I like to use APL in arguments about programming language syntax/semantics is that it's "unusual" compared to Algol-derived languages in more than just the syntax.

Matlab/Octave are also array programming languages that in fact are very similar to APL despite having a C-like syntax. It does not make it any easier for people to figure out what Matlab programs do if they don't understand array programming. (And if you don't know linear algebra, then in practice you can't understand most typical Matlab programs at all).

You're implicitly assuming that all programming languages have similar imperative/algorithmic interpretations and differ by syntax. This is a result of a very fundamental epistemological error.

There is no such thing as "programming" that people understand. All programming languages are related to (derived from, or applied to) other domains of knowledge.

If you ignore this fact all the conclusions that you will come to will either apply only to your implicitly assumed domain (which may not fit any problems that people actually want to solve) or will be incoherent.


My experience is exactly the opposite of yours. I find it easier to come back to old Scheme code than to old code in C, Java, SQL, Bourne shell, Perl or PHP.

Your assertion that it's easy to perform a usability test on a subject familiar with programming concepts but not exposed to syntax is false. It is not easy to learn programming without exposure to any language, so finding subjects for such a test would be difficult.


Your "comparative usability study" is like sending someone with a general familiarity with European languages to China and having them conclude that Chinese is a "worse" language because it doesn't look like anything they have seen before.

Most languages with syntactic structures descended from Algol look much the same, so of course they are easier for someone with experience of one language in the family to understand than something from a completely different heritage.

I did a lot of programming in Common Lisp, so Scheme looks fairly sensible. Similarly I did a fair bit in PostScript - so I can sort of work out what is going on with Forth and other RPN languages.

As for understanding large codebases in Lisp written by other people - a good Common Lisp environment (we used LispWorks) is about the best tool there is for digging around in other people's code (largely because of the excellent REPL).


This is a valid argument with several points worth discussing.

To the people downvoting, that is not what downvoting is for.


It really isn't. Replace "Lisp" with "Perl" and "theoreticians" with something evocative of the stereotypical Perl programmer and you have the exact same nonsubstantive rant that adds little to the discussion of what a good programming language should look like or why a particular one is bad.

I've been a "working programmer" who happens to write Common Lisp for a few years. Common Lisp is a programming language like any other: people who write good code in other languages can write good code in Common Lisp, and people who write bad code can make everyone's life just as miserable. Most of the time spent figuring out a new code base goes not into the style or set of features used, but into figuring out the domain-specific logic. Macros aren't abused to any greater extent than C++'s operator<<, monkeypatching, or preprocessors.

I am of the opinion that anyone who calls Common Lisp a "theoretically-pure" language has not done anything but a simple recursion exercise: features like loop's inability to detect sequence type, the difference between progn and prog1 in terms of multiple values, having functions like rplaca or caar, and mixing &optional with &key are not what I think of when I think elegant or "theoretically-pure". They're there because the designers were sometimes wrong and sometimes doing the best with what they had.


> Common Lisp is a programming language like any other: people who write good code in other languages can write good code in Common Lisp, and people who write bad code can make everyone's life just as miserable.

We're not talking about the extremes, here, though; we're talking about completely average code, written by completely average programmers, and the comparative number of extra milliseconds it takes these programmers to comprehend, or recall, the correct syntax in one language vs. another.

Milliseconds matter; when Google shaves milliseconds off their page load times, they earn millions of dollars. This isn't because every user is slightly better off, but because some discrete number of users switched from "eh, this is taking too long, I'm outta here" to "alright, I'll put up with that." People get fed up with programming languages all the time, but no one bothers to find a formal reason for it. Has anyone ever done an eye-tracking study on people programming?

> I am of the opinion that anyone who calls Common Lisp a "theoretically-pure" language

No one's doing that. "Lisp" as a general term does not mean "Common Lisp"; it refers to the feature intersection of all popularly-implemented Lisps—the "Lisp" that Greenspun's rule refers to getting implemented everywhere. That "Lisp" is very pure.


Greenspun's rule refers specifically to Common Lisp ("... of half of Common Lisp.") and there's no way that "half of Common Lisp", even with a healthy dose of hyperbole, can refer to the feature-intersection of all popularly-implemented Lisps, nor to anything "very pure".


> People who can't get over the syntax never actually get Lisp.

So tired of hearing this from Lisp zealots. There are actually many intelligent people who understand Lisp but don't think the pluses of the syntax outweigh the minuses.


I took this as meaning that the people who are really interested in reading and writing Lisp code eventually get over or even grow to appreciate s-expressions. Given that's the case, who are these people who would like to read and write Lisp but can't get past the parentheses? Is there really a market for an alternate Lisp syntax?

I think there probably is not. As you point out, there are many solid language choices that do not have Lispy syntax and a lot of very smart people being very productive with them. If the syntax bothers a person enough, this alternate syntax probably won't help.


If your goal is to learn Lisp, then you get over it. If your goal is to find a good, modern, productive programming language and you approach this goal without preconceptions then you evaluate s-exprs/macros as another language feature with tradeoffs like any other. Plenty of capable engineers understand the Lisp gestalt but just don't find it compelling enough to put up with a syntax they find unpalatable.


I agree with the grandparent that it's possible (contrary to what the vague group of Lisp zealots mentioned would claim) to understand Lisp and hate the parentheses. On the other hand, I also agree with you that there is probably not enough market/interest among this group of people to learn "Lisp with an alternate syntax".

As someone with an interest in programming languages I do feel I have a reasonable grasp of Lisp's s-expressions, but I dislike the parenthesis syntax. However, I don't think an alternate syntax would help me; most Lisp would still be parenthesis Lisp...

I think most people like me instead flock to functional languages like Haskell. As I mentioned in an earlier Haskell discussion, I think Lisp's macros are probably more powerful (in the sense that they can express more than Haskell can), but like a lot of Haskell coders I just don't agree with the Lispers that this (slight?) increase in expressiveness is worth the loss of readable Haskell syntax.


Are you from a maths background? I'm not, and having played with Haskell a little, the syntax is a real turn-off for me. For me, Lisp is a lot more readable.


No, I have a CS background; my first languages were Java and Python. I only started looking into the math related to programming languages after Haskell. Do you have any examples? Which syntax is unclear to you?

I think the uniformity of Lisp syntax makes it hard to scan and find the various parts I'm interested in. Especially the quote operators can be nefariously easy to gloss over...


I guess it's just a matter of experience then, I find Lisp pretty easy to scan through. So we can debate this until the end of time apparently. :-)

With regard to Haskell, the original post eloquently sums up most of my problems with it (and then some).

edit: Haha, I thought this thread was attached to the "Lisk" topic (Lisp syntax for Haskell); that's what I meant by "original post" above.


Ah, I saw the Lisk topic, and while I agree that Haskell's very exact whitespace could be annoying without a good editor (mine automatically indents correctly anyway, so I never have problems), if it's such a big deal you could just use {} and semicolons, as that is valid syntax as well. You only have to do exact whitespacing if you don't use those, although it might be easier to just use a decent editor instead.


As for nobody: Cadence SKILL is a Lisp/Scheme that uses a syntax that looks reasonably like M-expressions (plus some infix hacks). And I think that counts as a pretty substantial body of Lisp code.


"Well, all true Scotsmen like haggis."

Or, for those who cannot draw the correlation: "Well, all true Lispers like parentheses."

Never mind that pure functional programming is an ideal that even Lisp does not live up to. Think I'm throwing B.S. around? Try doing I/O in Lisp without side-effects.

Now that we've established that Lisp is not at the top of the blub-curve, maybe we can get on with: 1. Writing functional languages people actually use. 2. Advancing the state-of-the-art in programming languages from being mired in a single-threaded monolithic server past.

With respect to my second point, much progress has been made; but it is still underpinned by single-threaded programming language design. What we need is someone who truly groks multiprocessor, multi-threaded, heterogeneous run-time environments; and from that knowledge can write a language to take advantage of such an environment.


I'm not convinced that Common Lisp is designed to be side-effect free (nor am I convinced that side-effects are an ideal to attain^H^H^H avoid, oops). Even Haskell has side-effects - just pushed into Monads.


I'm not convinced that Common Lisp is designed to be side-effect free

It certainly isn't!

nor am I convinced that side-effects are an ideal to attain

I assume you mean avoiding side-effects? I tend to agree. The style I favor is pretty free-wheeling with side-effects in small scopes (within a function or something smaller) and gets progressively more disciplined as the composed pieces get larger and larger. I find programming this way strikes a nice balance between the two styles (imperative and functional). It lets one write most algorithms in a straightforward, efficient way while providing much of the advantages of strictness. To sum up: side-effects within black boxes, side-effect-freeness outside them, and a lot of small, atomic, composable black boxes. This is a natural way to use Common Lisp. (Especially if you avoid CLOS like I do.)


Agreed.

I tend to make sure that side-effects are limited to some "block" of code (function/class/module/whatever).

I just don't see the problem with syntax much anymore - Python syntax bugs me - a language's usability for its practitioners turns far more on semantics and maintainability than on syntax. Obviously J, APL, and egregious abuses of Perl are examples of syntactical issues, but, as was once said, one can write COBOL in any language....

I'd rather go with the powerful semantics and boring syntax of Lisp. I like this image: http://img264.imageshack.us/img264/1397/lispnd7.png


"What we need is someone who truly groks multiprocessor, multi-threaded, heterogeneous run-time environments; and from that knowledge can write a language to take advantage of such an environment."

Please Google: Starlisp, Paralations, NESL, Multilisp, Qlisp, Actors (ACT 1, ABCL), Termite, Kali

There have been only two (tuplespaces and STM) parallel programming paradigms that have not been pioneered by people who can be considered "Lispers."


I think that, for those of us who like Lisp, one of the things that draws us is the elegant simplicity of it all. There are lots of other languages that distinguish between expressions and statements, function calls and primitive operations, and all the rest of it.

It is undeniable that there are certain readability advantages to conforming a little more closely to the conventions from arithmetic notation, or just looking like C, which is the course most languages take.

But I believe that there are also very different readability advantages to a very simple, elegant, and consistent notation for representing a computation. There are fewer things you need to remember and translate in your head to express the program you want to write, or to understand the program you want to read.

For the record, I favor the approach Clojure took of making access to the reader out of bounds for the Clojure programmer, and assigning distinct semantics to the various paired characters on the keyboard: [] {} (). This breaks up the visual monotony of traditional S-expressions and improves readability, but it remains simple, elegant, and consistent, in my opinion.


The parens are a cue so that the editor indents my code properly. How do you propose to make emacs auto-indent sweet expressions?

It seems that I would have to manually do that as indentation would convey meaning.

Whitespace is a notorious pain in the ass to get right, parens are visible, countable, and highlight-able.

Infix is stupidly ambiguous and the cause of multitudes of errors, whether or not it is a 'natural' way to describe math. (There is a reason some prefer the reverse Polish calculator.)

Anyway, I don't 'get' the 'aversion': the syntax is trivial to explain, and it keeps me from having to remember fiddly rules. Use a proper editor to indent/highlight/reformat, and it is as good as any other syntax.

(If you want Lisp to be popular, just get Justin Bieber to write a song about it).


I agree that s-exprs are the best syntax out there for a language because they're consistent and simple. However, this sweet-expression thing is a good idea. I've been waiting for something like this for a while.

    Anyway, I don't 'get' the 'aversion' the syntax is trivial to explain [etc]
Although you mightn't get it, I'm sure you have run into it a bit and realise it's widespread. Perhaps nine in ten people who identify as programmers would dislike it. The practice of ignoring that hasn't done Lisp's uptake any favours. It's still a fringe language.

Like the author says,

    But most software developers have abandoned Lisp precisely because of Lisp's hideous, inadequate notation
This syntax is a gateway that allows you to smuggle lisp into a workplace. It looks like python. When people ask about what you're doing you say "Oh, that's this cool language called Racket. As you can see, it reads a lot like python, but I've found it allows me to do xyz nicely. Here, let me show you how this script works." Then you would show them the script, perhaps show them a repl tool that appears to work just like python's, and they'd be able to get around your scripts as much as they can around your existing python code. Sweet!

    How do you propose to make emacs auto-indent sweet expressions?
You could have emacs auto-transform into sexpr when you loaded a file, and save to sweet expressions when you saved. Should be trivial. From your perspective, you could live in a sexpr world. Except when you were stepping your colleague through the code - remember to use vi for that bit. Otherwise the game will be up.
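The save-direction half of that transform really is mostly mechanical. Here's a toy Python sketch (the function name and the use of nested lists for S-expressions are my own invention; real sweet-expressions have more rules, like neoteric f(x) calls, than this handles):

```python
def to_sweet(expr, depth=0):
    # Toy printer: atom arguments stay on the head line, nested calls
    # are indented beneath it. This only gestures at sweet-expressions;
    # the real grammar also handles infix, grouping, and f(x) forms.
    pad = "  " * depth
    if not isinstance(expr, list):
        return pad + str(expr)
    head = pad + " ".join(str(a) for a in expr if not isinstance(a, list))
    subs = [to_sweet(a, depth + 1) for a in expr if isinstance(a, list)]
    return "\n".join([head] + subs)

# to_sweet(['define', ['f', 'x'], ['*', 'x', 'x']]) yields:
# define
#   f x
#   * x x
```

The load direction is the inverse: indentation depth tells you where each parenthesis opens and closes, so a round trip loses nothing.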


People claim the same issue with Python, but I don't think it's really hurt anyone much. It's usually very obvious and auto-indent is surprisingly possible on a lot of it.


It's a clever design, and Wheeler makes a good case for it. Certainly my first reaction is "you'll have to pry my parentheses from my cold, dead fingers", but it's unarguable that a lot of people are repulsed by the syntax, even if the enlightened among us know it's actually beautiful.

He's right about the requirements for such a syntax: it has to mix well with the existing syntax, meaning, among other things, it must impose no semantics and it must not be necessary to write the operands of an infix expression in infix. I don't recall CGOL etc. getting the latter right. I also agree with him about the lack of a precedence scheme.

I have to admit, I can see making light use of the curly infix syntax in my own programs. I'm less sure what I think about the modern-expressions; I'd have to try them to see. I guess I can see some advantages. But I'm afraid I draw the line at significant whitespace. I know it works for Python, but Python was designed for it from the ground up. Looking at Wheeler's example, I'm not persuaded that it works as well for Lisp, though I commend him for his effort in designing it.


Having the infix expression reader fall back to emitting calls to a user-provided `nfx' macro seems clever at first glance, but it's insufficient. Different programs/libraries are likely to implement `nfx' in different ways, and thus step on each other. I think (for Common Lisp) it will be necessary to do something like this: create a generic function `translate-nfx' which uses `eql' dispatching on the first operator, and set up the `nfx' macro to call it:

  (defmacro nfx (&rest stuff) (translate-nfx (cadr stuff) stuff))
  (defmethod translate-nfx ((op (eql '+)) stuff) ...)
This creates an extensible framework where packages can supply `translate-nfx' methods for their operations. It would probably be worth predefining methods for CL arithmetic operations, so that users don't wind up doing that in incompatible ways. I'm afraid that means choosing a precedence scheme, which Wheeler was trying to avoid (for good reason); I don't see a way around it, though.
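For flavor, the same extensible-dispatch idea sketched in Python (all names here are hypothetical; the real thing would be the CL generic function above):

```python
# Registry keyed by the leading operator, standing in for eql dispatch.
TRANSLATORS = {}

def register(op_symbol, translator):
    TRANSLATORS[op_symbol] = translator

def nfx(infix_items):
    # Dispatch on the first operator position, like (cadr stuff) above.
    op_symbol = infix_items[1]
    if op_symbol not in TRANSLATORS:
        raise SyntaxError("no nfx translator for " + repr(op_symbol))
    return TRANSLATORS[op_symbol](infix_items)

# A package can then supply its own translator for its operators:
register('+', lambda items: ['+'] + items[0::2])
# nfx([1, '+', 2, '+', 3]) => ['+', 1, 2, 3]
```

An unregistered operator fails loudly instead of two libraries silently fighting over one global `nfx' definition, which is the point of the framework.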


I've worked with programs in half a dozen Lisp dialects and everyone who claims Lisp is hard to understand because of the syntax is wrong. The only reason I could work with so many dialects and so many programs is because the syntax is uniform. If you think you have an idea for how to "improve" Lisp syntax that involves an ambiguous infix grammar, please stop trolling and go write some Python.


Some people seem to have a hard time with deeply nesting languages. Anecdotally, they seem to prefer "chaining" constructs, like "someObj.method(arg).otherMethod(arg2).om3(a4).om4(a5)". I'm fine with heavily nested Lisp-y code, but chaining code feels very awkward to me.

Self and Prolog both have well-designed, completely unambiguous grammars. People interested in syntax design (or "fixing" Lisp) would do well to look at those, rather than trying to force a pseudo-Python syntax on Lisp. Python's grammar has too many edge cases. (Lua's syntax is also quite simple, though not as much as Self's.)

In the end, clear semantics are at least as important as a good syntax. Lisp has very clear semantics. So do ML and Erlang, another language which has a bit of a quirky syntax.


"Anecdotally, they seem to prefer "chaining" constructs, like "someObj.method(arg).otherMethod(arg2).om3(a4).om4(a5)". "

The thing with fluent interfaces is they're identical to nested Lisp code, except they read left to right instead of right to left. I suspect writers of Hebrew and Arabic would find chaining to be more confusing than nested Lisp code for that reason.

Also worth noting is that a popular style of indenting chained fluent calls is by splitting them by lines, like:

  someObj.method(arg)
         .otherMethod(arg2)
         .om3(a4)
         .om4(a5);
That's not that different from formatting Lisp code.
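The equivalence is easy to make concrete with plain code (names here are made up just to show the shapes):

```python
# The same pipeline as a method chain and as nested calls: one tree,
# serialized in opposite reading directions.
class Acc:
    def __init__(self, v):
        self.v = v
    def add(self, n):
        return Acc(self.v + n)
    def mul(self, n):
        return Acc(self.v * n)

def add(a, n):
    return Acc(a.v + n)

def mul(a, n):
    return Acc(a.v * n)

chained = Acc(1).add(2).mul(3)      # reads left to right
nested = mul(add(Acc(1), 2), 3)     # reads inside-out / right to left
assert chained.v == nested.v == 9
```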


The new and "readable" syntax looks truly ugly compared to the S-expression counterparts. Why is it that some people dread parentheses so badly and other people just never mind and choose to spend the time working with them?

The main reason I consider parentheses and S-expressions superior to other syntax is that it allows for flexible navigation in the source code. I can move forward and backward, upward and inward in the tree, unit-by-unit and the unit can be a literal or a compound S-expression or anything and it just works the same. (At least in Emacs.)


Why is it that some people dread parentheses so badly

Two tendencies combine to make "parentheses" the Lisp topic that in sheer quantity dwarfs all other Lisp topics added together: 1. the human brain craves familiarity; 2. people love bike sheds.

(Bike shed = something anybody can have an opinion and argue about irrespective of knowledge or effort. Since their purpose is to jump into the argument, these opinions tend to be strong ones, diminishing the likelihood of any resolution. Indeed, a feature of such discussions is that they are argument for argument's sake, and thus no resolution is possible or even desirable.)


> Two tendencies combine to make "parentheses" the Lisp topic that in sheer quantity dwarfs all other Lisp topics added together

Parentheses (and along with them s-expressions and macros) are one of the few characteristics still unique to Lisp. Most of the other features - including GC, dynamic typing, first-class functions and lexical scoping - are now shared with other programming languages as well.


I am not a big fan of lisp, but you can't really change the lisp syntax and still have lisp. (+ 1 2) has more features included than '1+2', because you can, for example, add 3 numbers easily like (+ 1 2 3), and you can treat all that as a list and parse it into whatever you need, recurse over it, and so on. With a "better" syntax you just don't have these features anymore.

Someone who would use sweet-expressions actually has not understood lisp yet. But like the mouse mode in Vi, it might be a useful tool to make new lispers more comfortable. So I still like the idea!


I think you've misunderstood. This does not take away any of those features. With this setup, you can write (+ 1 2 3) or +(1 2 3) or (1 + 2 + 3) and they would all translate into (+ 1 2 3). You can treat any of them as a list. You can still parse, transform, do anything with it. There is no loss of functionality here. Think of it as a macro that aliases "plus" to +. There is no difference between (plus 1 2 3) and (+ 1 2 3) in a macro defined in this way. They are S-expressions under an invisibility cloak. But they're still S-expressions within.


I understand your point and how this macro works. My point is that you still have to think in this (+ 1 2 3) way to be able to write meaningful, recursive, 'code is data' style functions. If you think (func_name param_1 param_2 param_3), you will easily get a recursive solution for a recursive problem. And if you think that way anyway, what's the point in writing (1 + 2 + 3) in your code? It will just confuse the lisp mode in your brain.

I also think it is hard to get into this mode. But just as this way of writing is not intuitive at first, neither were recursive thinking and 'code is data' to me (and I guess it will be the same for other people). It takes some time of meditating to get into lisp mode, and it also takes some months of training. But when you are there, everything works together, because it all works in the same way.

Or to say it like Bruce Lee: You must be water, my friend. When you fill water into a bottle, it becomes the bottle. If you fill water into a cup, it becomes the cup. (And if you put it in braces it becomes a lisp expression: (begin (water) (q_e_d))).


Well, that does sound subjective. By far not all lisp code is actually being manipulated as data in any non-trivial way. And for situations where all you are doing really is adding a few numbers together, why not write it in a way that is intuitive in the problem domain, rather than intuitive in what you call lisp mode? To me, being able to separate the two is an advantage. Your internal martial arts argument is void, for you are frozen in the shape of s-expressions and refuse to adapt to any other container.


Isn't Lisp awesome? It's flexible enough that you can write an alternate syntax, develop your own semantics and grammar, and build the language you work best in on top of it.

I don't think I've come across another language that can actually do these sorts of things so well (or at all for that matter).

Viva Lisp!


I liked the somewhat elegant heuristic for general infix-to-prefix conversion:

    {...} contains an "infix list". If the enclosed infix list has (1) an odd number of parameters, (2) at least 3 parameters, and (3) all even parameters are the same symbol, then it is mapped to "(even-parameter odd-parameters)". Otherwise, it is mapped to "(nfx list)" — you'll need to have a macro named "nfx" to use it
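That heuristic fits in a few lines of Python (a toy model of the mapping, with lists standing in for the reader's output; not the reference implementation):

```python
def infix_list_to_prefix(items):
    # {a op b op c ...}: odd length, at least 3 items, and every
    # even-position item the same operator -> (op a b c ...).
    if len(items) >= 3 and len(items) % 2 == 1:
        ops = items[1::2]
        if all(op == ops[0] for op in ops):
            return [ops[0]] + items[0::2]
    # Anything else falls back to the user's nfx macro.
    return ['nfx'] + list(items)
```

So {1 + 2 + 3} becomes (+ 1 2 3), while a mixed-operator list like {1 + 2 * 3} becomes (nfx 1 + 2 * 3) and the precedence question is punted to the user.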

That being said, (still-prefer 'I (using 'parentheses :in 'Scheme)).

Now, if someone added mix-fix syntax to Scheme...


Almost like haml for Lisp. Cool.


Yea, though you know I'm still attracted to Scheme b/c of its sweet sweet parens.


Is it just me, or does that look like Python?


"Ugly" s-expressions?

I find them to be quite beautiful, like matryoshka dolls. I am a bit disappointed when I have to work on code that doesn't manifest its own structure so explicitly, and instead looks like a tangle of words and punctuation down the page, like what becomes of balls of yarn after a kitten has gotten through with them.


Well from the perspective of a programmer that doesn't know lisp, and has been put off in the past due to its syntax: I find this dialect much more appealing.

Nested round brackets are pretty horrible to read / parse.

Whether it destroys the essence of what lisp is all about is another matter, but I suspect it doesn't at all.


Wow. Speaking as somebody who's bounced off Lisp more than once, I find this really attractive.


Once you "get" lisp, anything besides s-expressions/prefix notation feels "inside out" and asymmetrical. I think [ ] and { } should be reserved for things like dicts, vectors or lambdas.


Mostly agreed, but I've used Lisps for years, and prefix for arithmetic still feels weird. My personal preference is infix without operator precedence (like APL and Smalltalk) or postfix (like Forth). All-prefix, all-postfix, or all-infix consistently make sense. Infix with arbitrary transposition due to historical "order of operation" doesn't, but it's the common convention, and minor changes to it (e.g. +. for floating-point-addition in OCaml) seem to really piss people off.

Most other function calls are already in prefix notation, people just think (f x y) is totally weird, while f(x, y) is normal. Cognitive dissonance.

Also, I think it's cute that Prolog sticks all arithmetic under an "is" operator ("X is Y+Z*3"), rather than letting it dominate the language the way it usually tends to.


"Mostly agreed, but I've used Lisps for years, and prefix for arithmetic still feels weird."

But that is just because you learned the conventional notation from a very young age, right? We all did, hence the name "conventional". When we learned to add and subtract, we also learned that the notation was 2+3, not (+ 2 3). That's why any other notation feels "unnatural".

I do wonder if it would be possible to teach kids prefix notation instead, and whether such notation would seem completely natural to them. (I suspect the answer to both is yes.)


That is an interesting question. I see one possible argument (whose correctness I don't know enough to judge, I'm just making it up) why infix is more natural than prefix: A kid will start by seeing an object, and only once you have an object does it make sense to combine it (by addition or whatever) with another object. Also, note that a certain amount of English syntax is infix: "I went to the store", not "Went to I the store".

...However, I think that in some other languages, postfix notation is common. I think my friend who studies Spanish told me that they say the equivalent of "I her it gave" (meaning "I gave it to her"), and I think I remember Shakespearean English using postfix. ...Looking at the text of Romeo and Juliet, I see both prefix and postfix.

  Prefix: "O, where is Romeo? saw you him to-day?"
  Postfix: "The which if you with patient ears attend"
On the one hand, we could say that, since these plays were apparently rather successful, people obviously didn't have too much trouble understanding them. On the other hand, since this sort of strange permutation has been mostly dropped from common usage (so that I recognize it as strange), that may be evidence that these things are just harder to use. On the other hand, that could just be because "common usage" comes mostly from people who have been educated in schools, and schools have no reason to teach more than one method of speaking. On the other hand, Shakespeare seems only to have done that in order to make lines fit into rhyming iambic pentameter; in no way was this natural to him.

I should probably note that a lot of speech deals with object method calls more than function calls ("I.saw(Bob.win(a_game))" or, as it would probably be expressed in Arc, "(I!saw (Bob!win a-game))"; function call would be like (saw I (win Bob a-game))). And adjectives and adverbs act kind of like keyword arguments: (I!drove :to (the store) :[qualifier] fast). From that perspective, the distinction might be "object method calls" vs "function calls", rather than "infix" vs "prefix".

Does someone who knows more than I about other human languages (or English, for that matter) know about the prevalence of postfix notation (and prefix, if it so happens)?


Turkish and Japanese both typically end sentences with verbs, IIRC. There are probably several others; I really should just buy a copy of Comrie's _The World's Major Languages_ already.

In German (which I'm at least a little familiar with, unlike those two), auxiliary verbs stack at the end of the sentence: "I have eaten my breakfast." -> "Ich habe mein Frühstück gegessen." (lit. "I have my breakfast eaten.") It's unusual to have more than a couple, though - human languages typically don't nest very deeply.

OTOH, comparing natural languages and programming languages may not be that useful - programming language design places a high priority on avoiding ambiguity, while natural languages tolerate quite a bit of it. It may be better to consider programming languages as a kind of notation for math, logic, rules, instructions, declarations, etc. Kenneth Iverson had some interesting ideas about that.

Another issue with method calls is that many things don't have a clear primary actor. In single-dispatch OOP languages, this turns into ugly workarounds (e.g. the Visitor pattern), "who owns this method?" debates, and tedious rewriting. Multimethods avoid that issue entirely.


I learned arithmetic at a young age, yes (and I taught myself BASIC when I was 5-6ish). My main objection to prefix notation for arithmetic is that it's cumbersome. Mostly because if the arity of ops isn't predefined (e.g., + isn't limited to 2 arguments), parentheses are needed for grouping.

Infix, w/ order of op:

    y^2*(6x^3 - 3x^2 + 2x - 9)
Prefix:

    (* (^ y 2) (+ (* 6 (^ x 3)) (* -3 (^ x 2)) (* 2 x) -9))
Infix, no order of op:

    y^2*((6 * x^3) - (3 * x^2) + (2 * x) - 9)
But as long as we're talking polynomials, J wins, IMHO:

    (y^2)*+/_9 2 _3 6*(x^i.4)
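For what it's worth, the infix and prefix spellings are mechanically checkable against each other. A toy Python evaluator for the prefix form (nested lists standing in for S-expressions; variadic ops folded left to right, as in Lisp):

```python
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '^': operator.pow}

def ev(expr):
    # Evaluate a prefix expression given as nested lists.
    # Numbers evaluate to themselves; (op a b c) folds as ((a op b) op c).
    if not isinstance(expr, list):
        return expr
    fn = OPS[expr[0]]
    args = [ev(a) for a in expr[1:]]
    result = args[0]
    for a in args[1:]:
        result = fn(result, a)
    return result

x, y = 2.0, 3.0
prefix = ['*', ['^', y, 2],
          ['+', ['*', 6, ['^', x, 3]], ['*', -3, ['^', x, 2]],
           ['*', 2, x], -9]]
assert ev(prefix) == y**2 * (6*x**3 - 3*x**2 + 2*x - 9)
```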


Update: The page at http://www.dwheeler.com/readable/ appears to be the project's homepage.


Sweet-expressions are to Lisp as CoffeeScript is to JavaScript :-)


This touches on the core of my enthusiasm about sweet-expressions. I'd like to make a "SweetScript" that compiles to JavaScript (via some lightweight lisp) and could essentially serve as a CoffeeScript with macros.

Would you find such a tool useful? What challenges would you foresee facing such a project?


It's funny that Lisp's "syntax" draws so much attention when it is pretty much the simplest and most unambiguous syntax possible, and that messes like C++, Python and Perl don't seem to bother anyone enough to propose alternatives.


It's yet another case of people getting hung up on the first obviously different thing they notice about a language.

If somebody is still griping about the parens in Lisp, the significant whitespace in Python, the glyphs in APL, etc., they haven't gotten to the differences that actually make the language interesting - either they'll get used to it like the other X programmers (and realize it wasn't as big an issue as they thought), or they'll find deeper issues in the language design to complain about (and probably give up on it).



