The real question here is "do I need syntax?", or, more precisely, "do I really need a [complex] grammar?" Any given Lisp is still a programming language--it has some syntax and it has semantics. But it has a much simpler grammar than almost any other language (save perhaps Forth).
The grammars exist because programs are made to be read by people, not just computers. A more heterogeneous notation can make for more concise code that's easier to scan and easier to read.
Of course, this is not to say that Lisp is unreadable. I actually rather like reading and writing (paredit is awesome) Lisp code. But, after quite a bit of Racket, I've found that I still heavily prefer having infix operators and a bit more syntax. At the same time, I also often want less noise--i.e. no parentheses. I want all this so that I can quickly scan and comfortably read my code. That's why we need grammars.
I think a more helpful question would be "in what instances do I need syntax?". Could you build a type system like Haskell's for a Lisp? Would it be usable?
In terms of syntax, I think there's a divide between the needs of static type-systems, versus dynamic ones. LISP is at its core a dynamic language. It makes sense to treat code as being data that can be manipulated. Getting rid of all that syntax gives you great power in terms of what you can do. And personally I find Clojure code to be much saner than code written in Ruby or Python, but that's another discussion.
However, the other side of the coin is Haskell or Miranda (its predecessor) or languages in the ML family or Scala. These languages go to great lengths to achieve much of LISP's expressiveness, while ensuring a great amount of static type-enforcement. Haskell is especially notable because it is lazy and has many goodies, such as type classes or rank-2 types. People say that when a piece of Haskell code successfully compiles it is usually correct. LISP can't do this, even though there have been experiments with pluggable type-systems.
So IMHO, syntax helps if you want static type-safety, because the language and the type-system exposed need to be expressive. Whether static type-safety is useful or not is a matter of debate, being highly dependent on the problem domain. One could argue that static type-safety is very useful if the shape of the data you're manipulating is well defined and doesn't change much, otherwise you're better off with a dynamic language, but that's another discussion.
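(For anyone who hasn't met the type classes mentioned above, here is a minimal Haskell sketch of the idea; the Pretty class and its instance are made up purely for illustration.)

    -- a type class declares an interface...
    class Pretty a where
      pretty :: a -> String

    -- ...and each type supplies its own instance,
    -- checked at compile time
    instance Pretty Bool where
      pretty True  = "yes"
      pretty False = "no"

The compiler resolves which pretty applies at each call site, so a missing instance is a compile-time error.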
"People say that when a piece of Haskell code successfully compiles it is usually correct."
Every time I see this claim, I wonder what the heck people are working on that is so simple that getting the types right is all it takes to get the code right? Is it just that enough of my work is math or parsing, both of which typically have an infinite number of ways to get things wrong but with the correct types?
Or is the idea really the other way around -- if your program is sufficiently complex, you'll never get it to actually pass Haskell's type checks?
Edited to add, I love the article you linked to so far. :)
I had some similar experiences in OCaml recently, so I might be able to shed some light on this. I found that when compiling code, the result was often one of three things: one, type checking failed (or I got an inexhaustive match warning), and I got a useful message; two, it worked; and three, it failed but the problem was fairly obvious when I saw it. Functional programming, for me, is a fairly direct way of expressing the algorithm in my head, so the basic idea tends to be much more fully fleshed out the first time I write it down in code.
Sometimes I got a type error from some bizarre syntax mistake, usually failing to correctly delineate an expression, and sometimes the algorithm in my head was just wrong and I had to fix it. At that point, the cycle starts over, and the same benefits I mentioned above help in debugging. FWIW, the problem I was working on was fairly simple, but novel. I was a very happy camper this Christmas, working with OCaml, despite its syntax.
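(To make the "inexhaustive match" point concrete, here is the same situation sketched in Haskell, with a made-up Shape type; GHC's -Wincomplete-patterns warning plays the role of OCaml's.)

    data Shape = Circle Double | Square Double | Rect Double Double

    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Square s) = s * s
    -- Rect is not handled: the compiler warns that the
    -- pattern match is non-exhaustive, before anything runs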
Out of curiosity, what language do you do your "math and parsing" in? OCaml might be a good fit.
I do most of my $work programming in C++. (Not out of any great love for the language, mind you.)
To give a somewhat extreme example of the sort of stuff I do -- one of my current projects is to try to create a first-class fillet surface type [1] for the geometry engines I work with. My current code converts each fillet to a NURBS surface, but this is slow, consumes a lot of memory, and produces surfaces which frequently are not accurate enough for my purposes. So my hope is to be able to directly calculate points on the fillet and their derivatives.
It's easy enough to calculate points on the fillet surface, but I don't see any obvious way of directly calculating the derivative along the surface. So I've been investigating Chebyshev polynomials in two variables. But that algorithm looks a bit hairy, and I don't have any feel for whether it will be an actual improvement in practice. (And having written all this, I'm suddenly wondering if I'm making every effort to sort out the simple cases that are easy to solve exactly.)
My point here is that figuring out the correct types involved seems like a tiny, tiny part of the required work. Maybe it's because I haven't put in enough time in Haskell, but I don't see how stricter than C++ types are going to help. The real work is getting the math correct and fast enough to be useful.
By the way, my comment is not meant to be a dig at Haskell (et al). I'm not sold on it as a language, but it certainly has a lot of interesting ideas -- pattern matching and laziness spring to mind.
Often when using C++ I find myself solving C++ problems, not real problems. In a domain with very difficult math and performance challenges, the C++ problems will be dwarfed by the difficulty of the real problem you are solving.
For domains outside of physics and applied math, I think the dominating factor is often the inherent trickiness of writing correct procedural code vs. the ease of writing strongly typed functional code.
Yeah, all your functions would basically be float -> float -> float -> etc. I would expect a functional language to be nicer for that anyway just because it's functional (expressions FTW!), but I also wouldn't be surprised if it wasn't worth switching languages for. I had the luxury of devoting about a week of vacation to my (rather abstract) pet project [0], half as an OCaml learning project. You have nearly the opposite situation (sounds fascinating, by the way).
Pattern matching is at least as much fun as it looks like, if you have complicated data types. That's a lot of what made OCaml so perfectly match my problem.
> People say that when a piece of Haskell code successfully compiles it is usually correct. LISP can't do this, even though there have been experiments with pluggable type-systems.
Why can't Lisp do this? In the Common Lisp world, there's some pretty aggressive, though still optional, static type checking. There's also Qi, Shen, Typed Racket, et cetera. The question is orthogonal to the question of syntax.
I don't know much about Lisp, but the reason people say this about Haskell isn't just that it has a type system, but a powerful one.
For example, how would you type the cons function? In Haskell, it has type t -> [t] -> [t] (i.e., for all types t, given a value of type t and a value of type list of t it produces a value of type list of t). In Lisp, on the other hand, you can construct a cons cell out of any two objects.
Obviously you can certainly construct a type system in which you can handle Lisp cons cells reasonably; Hindley-Milner type systems aren't the only game in town. But I don't think it'd have the 'if it compiles it probably works' property any more.
The Haskell equivalent of cons would probably be the 2-tuple constructor, (,), which has the type a -> b -> (a,b). You probably wouldn't use it to make data structures like you would in Lisp though.
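(Both signatures are easy to check in GHCi, which names its type variables a and b rather than t:)

    ghci> :t (:)
    (:) :: a -> [a] -> [a]
    ghci> :t (,)
    (,) :: a -> b -> (a, b)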
Yes, that is true of type systems. But the comment was that syntax is needed for a language which implements static types. This implies static type checking, and while Hindley-Milner inference can do a lot of lifting, a statically typed language which had no syntax for explicitly expressing types might struggle to express solutions that lend themselves to programmer defined data types.
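(For a concrete sense of the syntax in question, here is Haskell's syntax for declaring and annotating a programmer-defined type; the Tree example is purely illustrative.)

    -- a data declaration is syntax whose only job is to express a type
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    insert :: Ord a => a -> Tree a -> Tree a
    insert x Leaf = Node Leaf x Leaf
    insert x (Node l y r)
      | x < y     = Node (insert x l) y r
      | x > y     = Node l y (insert x r)
      | otherwise = Node l y r

Hindley-Milner could infer insert's signature, but the data declaration itself is exactly the kind of type-expressing syntax a language has to provide somewhere.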
So, a language where, say, blocks are started sometimes with parameters that are delimited with pipes, and sometimes the bodies need a terminating end keyword, other times they need wrapping curly brackets. If you want an explicit lambda, you have to wrap that whole block in curly brackets with a lambda keyword. Classes and functions need an end delimiter but not an explicit start delimiter; if statements however need both a start and end delimiter in "if x then y else z end"
Ruby is not semantically sane, it's insane in a way we have all gotten used to. (If you don't know Ruby, try to rewrite this for your ALGOL-family language of choice; I picked Ruby for its simplicity.)
S-expressions are about as semantically sane as you can get, it's just most of us have a <5 year head start with the craziness of c-style languages.
> So, a language where, say, blocks are started sometimes with parameters that are delimited with pipes, and sometimes the bodies need a terminating end keyword, other times they need wrapping curly brackets. If you want an explicit lambda, you have to wrap that whole block in curly brackets with a lambda keyword. Classes and functions need an end delimiter but not an explicit start delimiter; if statements however need both a start and end delimiter in "if x then y else z end"
You are talking about syntax here.
> Ruby is not semantically sane, it's insane in a way we have all gotten used to.
I hate Ruby even more. It is just as semantically insane as Lisp, and the syntax is worse.
> (If you don't know ruby, try to rewrite this for your ALGO language of choice, I picked ruby for its simplicity)
Ruby is not simple. It is full of corner cases and redundant constructs. It is hard to make any sort of reasonable guarantee about a Ruby program without resorting to some heavy testing. And no matter what nice property you might have wrestled out of the system, someone else can destroy it anyway by means of monkey patching.
> S-expressions are about as semantically sane as you can get
S-expressions are syntax with no intrinsic semantics.
> it's just most of us have a <5 year head start with the craziness of c-style languages.
What about Haskell? (And, in general, languages that have a denotational semantics that makes equational reasoning feasible.)
No syntax has intrinsic semantics.
I don't think 'intrinsic semantics' is a meaningful combination of words.
Explicitly, in lisp, everything is a stack of instructions arranged in a tree. The instructions are either functions that manipulate the input data, or functions that manipulate the underlying tree of instructions.
By naming convention and documentation, we give the various functions semantics. ("With" macros being an example).
> No syntax has intrinsic semantics.

> I don't think 'intrinsic semantics' is a meaningful combination of words.
I was just refuting the assertion that "S-expressions are about as semantically sane as you can get".
> Explicitly, in lisp, everything is a stack of instructions arranged in a tree.
I do not particularly care what data structure is internally used. That is the compiler's business, not mine. I want to reason about my code in terms of the semantics of the language I am using.
> By naming convention and documentation, we give the various functions semantics. ("With" macros being an example).
Names may be misleading, documentation may be incomplete, but types never fail to accurately guarantee properties about my code. Hence my preference for types.
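(A small Haskell illustration of the kind of guarantee I mean, using the standard Data.List.NonEmpty; the firstItem wrapper is mine.)

    import Data.List.NonEmpty (NonEmpty)
    import qualified Data.List.NonEmpty as NE

    -- head :: [a] -> a can blow up on []; this version cannot:
    firstItem :: NonEmpty a -> a
    firstItem = NE.head   -- the type itself rules out the empty case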
> I was just refuting the assertion that "S-expressions are about as semantically sane as you can get".
His comment was equally nonsensical. S-expressions are consistent and easy to parse, that is all.
> I do not particularly care what data structure is internally used. That is the compiler's business, not mine. I want to reason about my code in terms of the semantics of the language I am using.
Good, that data structure generally isn't used internally in the compiler (not efficient enough). I'm talking about the data structure that is used for the programmer's brain. The semantics and way of reasoning about the code is in terms of a very simple data-structure.
My business is the compiler's business.
> Names may be misleading, documentation may be incomplete, but types never fail to accurately guarantee properties about my code. Hence my preference for types.
Why would you write such shitty code? There are many ways of skinning that cat. You can use types, you can use tests, you can use assertions.
You could make it overly complicated and long. A stupid person is going to have a bad time writing code, no matter what language you hand them. Shitty code is shitty. This is a tautology comparing the language I'm currently into to a language I'm not currently into. Don't use it. You're good at writing Haskell, good for you.
I'm curious then what you mean by semantics, like the words used for library function names? I've always equated semantics and syntax, perhaps to my detriment.
Syntax is the presentation layer for semantics. Semantics are the content, meaning, and/or implications for what you convey via syntax. They're largely orthogonal.
    def add(x, y):
        return x + y

    (define (add x y)
      (+ x y))
These are similar semantically -- they each define a procedure that takes two parameters, adds them, and returns that result. Hand-waving how the language is actually implemented, they're morally equivalent, anyway. But syntactically they're fairly different.
Is Haskell the semantically sane language you prefer? Fascinating. The best Haskell book I can find is next for me as soon as I finish SICP. Any suggestions? I'd love it if the book had homework like SICP.
I started Learn You a Haskell a couple days ago on a lark and I'm also loving it. I've also heard a lot of praise for Real World Haskell which I plan to look at next.
You can always macroexpand a macro to know precisely how it's defined and how it's going to be executed [1]. If you use emacs+slime it's as easy as putting your cursor before an expression and typing a keybinding [2].
But in reality, using macros is no more complicated than using functions.
What's semantically insane about Lisp? I find it has very clean semantics; it is strongly typed, its scope is perfectly lexical (sans an explicit caveat), and programs are commonly written in a way that preserves referential transparency (in the FP sense, anyway), with pockets of imperative mutation here and there. The limited dynamic scope may be somewhat unpleasant though, but that's its flexibility.
As for macros, they are merely functions that transform expressions. They can be somewhat unpleasant to read, but they are basically that.
Unityping. Now every useful function is automatically not total!
> I find it has very clean semantics; it is strongly typed
Of course it is strongly typed! All expressions have type Univ.
> programs are commonly written in a way that preserves referential transparency
All my C++ programs are commonly written in a way that preserves referential transparency, and clearly distinguishes between objects that are meant to be mutated (non-const) from those that are not meant to (const). That still does not make C++ a pleasant language to program in.
> The limited dynamic scope may be somewhat unpleasant though, but that's its flexibility.
Spooky action at a distance. Me no like. :-(
> As for macros, they are merely functions that transform expressions. They can be somewhat unpleasant to read, but they are basically that.
Meh. The only total languages are used for formal verification; you'll always have to settle for less to get Turing-completeness.
> Of course it is strongly typed! All expressions have type Univ.
Have you used PHP? You'll come to appreciate the differences between the different "unityped" languages.
> All my C++ programs are commonly written in a way that preserves referential transparency, and clearly distinguishes between objects that are meant to be mutated (non-const) from those that are not meant to (const). That still does not make C++ a pleasant language to program in.
Does that say anything about Lisp? Nope, not at all. It isn't like C++ is comparable to Lisp: despite the efforts put in by C++ programmers to wrest safety from a language designed as a superset of the language of cowboy-coders par excellence, its true nature as an imperative language with raw pointers may rear its head at any point. Not to mention that Lisp has had more facilities up to this point than C++ to write referentially transparent programs.
> Spooky action at a distance. Me no like. :-(
Depends on the Lisp. Common Lisp's global variables are mutable and dynamic. Me no like either. In Racket and Clojure you can declare variables to be explicitly dynamic, and they can be rebound within special blocks, locally, sort of. Not that it's something anyone should use frequently.
> I actually rather like...
> I've found that I still heavily prefer...
> I also often want...
> I want all this so that I can...
...
> That's why we need grammars.
> A more heterogeneous notation can make for more concise code that's easier to scan and easier to read. [...] I want all this so that I can quickly scan and comfortably read my code.
I actually find those harder to read. Generally, because I can indent Lisp code however the heck I like, I have a pretty good idea what's going on just from looking at the structure of the code and reading a few words here and there.
It's a fairly common experience for me in other languages to be scanning back and forth across a line searching for that $%^$£%£ operator rather than (in the rare event I get really lost in a Lisp file) just going, "Highlight the next s-expression up, ah that's what's going on." Or "You know what? Indent my code for me in a different way."
I wonder whether there's a divide between people who understand code initially by looking at the structure and people who understand code initially by looking at the operators in terms of who likes Lisp :/
I remember trying REBOL years ago, but only for a short while and I didn't see it as much of an improvement over Lisp. Should I revisit it again some day?
I wouldn't view Rebol as a Lisp++ but rather as something inspired by Lisp, yet interestingly different and simpler to grasp.
> Should I revisit it again some day?
I'm probably not the best person to ask here because I've only come across Rebol in the last 12 months. I've had great fun playing with it and have used it for a few small scripts/projects.
The current pros would be:
- Rebol3 is now opensourced (since 12-12-2012)
- Rebol3 is faster than Rebol2 (in my simple tests about twice as fast) & has a smaller footprint.
- Rebol3 comes with some design improvements, e.g. 64-bit integers, a module system, async schemes & more.
However the cons are:
- Rebol3 still in alpha/beta development.
- So not all features of Rebol2 are (yet) present in Rebol3.
- Rebol3 GUI is only available for Windows & Android at this moment.
So depending on your use case, you may find yourself still needing to use Rebol2.
If you code in C, C++, C#, JS, or Java, there is a good chance your code has just as many curly brackets and parentheses as the equivalent code in lisp. The biggest differences being they are more irregular in location and there are more newlines separating your closing delimiters. Check this insightful comment out. https://news.ycombinator.com/item?id=6960546
If you code in Ruby, Python, or Coffeescript, yeah, lisp has more parens, no doubt. That is the price of its regularity. Lisp could throw all that out the window, but you'd lose the easy macros.
> The real question here is "do I need syntax?", or, more precisely, "do I really need a [complex] grammar?"
A more useful question is the following: "is a simple grammar the be-all and end-all of programming language design?". I think the answer is "no". Ceteris paribus, a simpler grammar is of course to be preferred to a complex grammar. But, ultimately, people write programs for what they do (so that they can be useful) and for what they mean (so that they can be maintainable), not for how they look. So a simple semantics takes precedence over a simple syntax.
> Of course, this is not to say that Lisp is unreadable. I actually rather like reading and writing (paredit is awesome) Lisp code. But, after quite a bit of Racket, I've found that I still heavily prefer having infix operators and a bit more syntax.
That is precisely what languages like Haskell (and to a lesser extent ML) give you: a typed lambda calculus (semantic simplicity) with infix operators (syntactic eye candy) for convenience. The type system reduces the universe of valid programs to those that typecheck, so you do not have to worry about stuff like the meaning of 2 + "potato".
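(Concretely, here is a sketch of what GHC does with that expression; the error text is paraphrased and varies by compiler version.)

    x = 2 + "potato"
    -- rejected at compile time, roughly:
    --   No instance for (Num [Char]) arising from a use of '+'

Because the program is rejected before it runs, the language never has to assign 2 + "potato" a meaning at all.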
Racket is typed, and as strongly typed as Haskell and ML. I think what you were getting at is that Haskell and ML are statically typed whereas Racket is dynamically typed by default. However, #lang typed/racket allows full access to static type checking.
By that token, there is no such thing as statically typed either, and if we chase the turtles all the way down to the Turing tarpit, there's only two types: high and low and they're both voltages or the equivalent.
As Wittgenstein remarked, many philosophical conundrums arise from the odd results that follow from using ordinary language in an odd way.
> By that token, there is no such thing as statically typed either
Wrong. Just because a language A is implemented by means of a translation into language B or execution by a machine C, it does not mean that A's abstractions do not exist at all. If A makes guarantees B or C cannot possibly make, then A will compile to a strict subset of B or yield a strict subset of the processes that can happen in C. In this strict subset, A's guarantees hold.
As a bonus, you even get more verbose syntax, as in other languages, because Racket, being a Lisp, will let you write code any old which way so long as there is a macro available to parse it.
The difference, if there is one, is that Racket provides an infix syntax as part of its definition, i.e. there is a standard notation. I've seen it used in the reference documentation for the -> operator when writing contracts.
Racket is a scheme, but this infix notation is unique to racket, i.e. it's not in the scheme specification. In fact, there are other proposals for infix notation for scheme, such as http://srfi.schemers.org/srfi-105/srfi-105.html
The key idea here is that underneath everything you tell the computer to do is boolean logic and transistor circuits. Yes, even Lisp S-exprs; it's not really correct to say "Lisp has no syntax", it has a very simple syntax that is trivially parseable. Even LLVM assembly is a translation layer, as is LLVM itself; underneath it on my computer is an x86 processor, and underneath it in my phone is an ARM-architecture Snapdragon.
Once you understand that, you realize that all of these languages and frameworks and libraries and databases are just tools, and they exist for your convenience. They're there to let computers handle things that humans are really bad at. For example, memorizing memory layouts really sucks; let a computer map out your structs onto memory. Managing heap memory really sucks; let a garbage collector do it. Managing register allocation really sucks; let a compiler do it. Managing syntax kind of sucks; let a parser slap something more friendly on top of it, or don't and use Lisp. Managing vtables kind of sucks; let C++ or Java hide it behind a class, or don't and continue to use C.
Once you've gotten into this mindset, you're free from the religious wars that surround technologies, because you realize it's all just tools that compile down to machine code at the end. And you understand when a tool might be useful, and when you could just as easily implement it yourself, and when it once was useful but has since ceased to be.
That's one philosophy. Your code exists to be run by your computer.
Personally, I take a different one: the computer exists to run my code. LLVM, x86, ARM... all just implementation details. Sometimes they're unavoidable, but c'est la vie.
Ultimately, what I care about is semantics. I care about what my program means--the logic it represents. After all, that's the whole point, the end goal. The fact that it's running on a physical computer is just a detail, albeit an important one.
A programming language is more about shaping my thoughts and forming my logic than actually running. Choosing a programming language is incredibly important--it affects what I can say, how I can say it and even how I think. And that's the most important part.
> What is a program? Several answers are possible. We can view the program as what turns the general-purpose computer into a special-purpose symbol manipulator, and does so without the need to change a single wire (This was an enormous improvement over machines with problem-dependent wiring panels.) I prefer to describe it the other way round: the program is an abstract symbol manipulator, which can be turned into a concrete one by supplying a computer to it. After all, it is no longer the purpose of programs to instruct our machines; these days, it is the purpose of machines to execute our programs.
The end goal of the web browser in which I'm typing this is not to represent some piece of abstract logic, but to drive the physical hardware: to change the color of pixels on my screen, to sense and react to my input. Attempt to distill it to some piece of logic or mathematics, and you end up with something useless, an absurdity.
The original task was "apply a seasonal rebate to our products," but none of the programs actually accomplished this feat. They contained some logic which might be useful in such a program, but none of them applied anything. Logic on its own is sterile.
I thought the end goal is to display the page. I would say that "to change the color of pixels on my screen, to sense and react to my input" is a means to an end, not "the end goal".
This was a really neat discussion to watch. Devotional vs Observational semantics.
When I'm in my algorithms class and we have a code snippet to find a minimum spanning tree, we don't even have some machine to compile it down to, but because we have denoted some meaning to bits of the program, we can even prove that the meaning of the entire program is to output the MST.
Changing pixels on the screen (usually?) only matters because it lets us get at the results of the program. Even if the program is a video decoder + video, I don't care about pixels, I care about the picture. Pixels are just more implementation details.
> Once you've gotten into this mindset, you're free from the religious wars that surround technologies
That seems all fine and well, but you're ignoring the elephant in the room: compatibility. You aren't free to choose your tools as you please if compatibility is one of your requirements. This little bit of impurity combines with network effects to create big problems. Compatibility along with all its attendant problems is the single most valid reason for engaging in a religious war over technology.
"This is why everyone should study compilers and machine architecture in college."
Everyone? For starters, I would exclude those studying law, medicine, most of the 'soft' sciences, and probably chemistry and physics, too.
Yes, most of these might be better off having some knowledge of programming, but I doubt they really need to know what a parse tree is. Hand-waving that "3+2x6" is three plus (two times six), just as in normal life, probably is sufficient for most of them. Even math majors need not know about compilers and machine architecture.
That leaves a tiny fragment of 'everyone in college': just the computer science students and, to a lesser degree, the software engineering ones. I would introduce the latter to parse trees, but I would not go into any depth there. Really, for most people doing software engineering, compilers can be black boxes.
This is so, but why are our language A <-> language B translation tools so bad that people end up having to manually convert code from one language to another (e.g. if some code exists in language A, but you need its functionality in language B)?
Because some of the essence of what you wanted to express is lost when you compile. It's not a one-to-one mapping.

Otherwise we would just compile and decompile into each form. Intermediates like LLVM come close to trying to do this, I think? Not sure if they allow going back to the original form from the IR though.

Oh, and lastly, think of trying to express a feature in one high-level language with another that doesn't support that feature. It's quite hard to gain expression when it's not idiomatic in that language.

Kind of similar to spoken language, actually. Some concepts in some languages form as single words or phrases that never really translate well to other spoken languages, because the place that other language formed never had that concept.
Compilers have gotten better, but in comparison to a human, they are still not particularly good at the edge cases, things like instruction selection and register allocation in particular. Sometimes I wish I had the time to write my own more intelligent algorithms for them.
The point of a programming language is making complex stuff readable. One can certainly hand-assemble and write software as a series of byte values and I have done so in the past because of necessity. It's very boring and difficult.
In this respect, I don't find Lisp readable or intuitive at all, but that's just me, I'm probably not smart enough:-)
It's not about being smart or not, it's simply about getting used to it. Chinese probably doesn't seem readable or intuitive to you, but a Chinese speaker doesn't find English readable or intuitive either.
Now show the same tree for a ruby block, a Java inner class, a C# lambda and how they're all the same.
Except they're not.
Only the simplest ASTs are directly translatable. Language semantics differ wildly when you get into higher level constructs. Not even Lisp is able to help you, as that just changes the problem from one of syntactic representation into one of library implementation.
Language semantics differ, full stop. Say you have x = 2147483647:
    x + 1  => -2147483648   (Java)
    x + 1  => undefined behaviour (C, assuming int32_t and various things about the implementation)
    (1+ x) => 2147483648.0  (Emacs Lisp)
    (1+ x) => 2147483648    (Common Lisp)
    x + 1  => x + 1         (Prolog, using some interpretive license)
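(For one more data point: Haskell makes the answer depend on the type you pick. This assumes GHC, whose Data.Int documentation specifies that fixed-width arithmetic wraps modulo 2^n.)

    x + 1  => 2147483648    (Haskell, x :: Integer, arbitrary precision)
    x + 1  => -2147483648   (Haskell, x :: Int32, wraps around)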
That's not really a critique of the author's main point, I think. Of course you can't directly translate between semantically different features, but the OP's point was about semantically identical programs, and how different syntaxes distract from that.
That said, I think it's possible to invent a language in which all the things you mention can be expressed and at least compared. I'll do my best to do it myself if no one beats me to it. Any language with closures is close. Java inner classes and (AFAIK) C# lambdas practically are closures. If the only thing Ruby blocks add is that returns and similar statements affect the block-creating function, that can be simulated with exceptions.
Of course you don't care that much about which language to choose when you just want to do a simple if statement. However, when you want to do more complex stuff, like answering a request on port 80 and compiling a document that you send back over that port, making sure to encrypt it properly so that no third party will get a hold of it, then it would be beneficial to have some sort of programming language.
The question should really be, why does the IT department care if he uses C or Basic? And moreover, why did he try to make it in C in the first place if he knows the IT department prefers Basic? This is a case of unprofessional behaviour by the author, not a question of whether or not you need a language to express an if statement. At that point he could just as well have written this in x86. Who needs maintainability anyway?
Sure, and when you write an expression more complex than a single ternary, you start to see why Lisp is so uncommon in real-world usage.
It has a wonderful technical elegance, but the humans that have to write code tend to not think in the same sense. After four or five brace levels it gets very challenging to keep track of everything.
I think that can be said also of most mainstream languages, i.e., that more complex expressions are less comprehensible. The same applies to increasing levels of nesting. Lisp's syntax is so different that it just looks less comprehensible than any language you're more familiar with.
>If you understand how compilers work, what's really going on is not so much that Lisp has a strange syntax as that Lisp has no syntax. You write programs in the parse trees that get generated within the compiler when other languages are parsed.
If you say "S-expressions" instead of "LISP", this is perfectly true! The syntax of S-expressions was only meant to be used for LISP data structures, while programming was meant to happen in another syntax called "M-expressions" [1], which was to be converted to S-expressions by the compiler. However, the programmers liked to use S-expressions directly, so M-expressions were never actually implemented.
In that sense, the LISP syntax (S-expressions) was indeed designed as an intermediate language, not to be used by programmers directly (except for plain data structures).
It's not just "as if"..it is truly the case. That's why only nutters use LISP for large projects. Who really wants to wrap every S-expression in parentheses? Talk about painstaking, and what an eyesore. If only M-expressions had caught on, LISP could be decent. But really, it's just making you write your program as a data structure because that makes it easier for the compiler to process.
So...
Ruby: good for developers, bad for the JIT compiler (slow)
LISP: awful for developers, good for the compiler -> machine (fast)
Is the tradeoff worth it? Not at all. Most LISP intros start by convincing you that you'll eventually get used to your code looking like a sack of parentheses. No thanks, I shouldn't need to get used to staring at overburdened verbosity for the compiler's sake - build something better. Wait..we have other languages that are fast and look nice. And many even process into a well-formed AST. Okay, thank heavens.
I disagree that Lisp is awful for developers. To me and many others it looks quite pleasant, while the "other languages that look nice" actually look like a needless mess of braces, brackets, asterisks, commas, etc. etc.
Only Python comes close IMHO but has many other downsides.
Brackets, asterisks, and commas give array indexing, pointers, and the clean separation of function arguments.
    (incf (elt vector 2))

..versus..

    ++vector[2]
which one is more readable?
LISP is not to be taken seriously. It's an academic curiosity, and cute, novel, not a language that needs continued zealots. It has no market share..the reasons are always going to be the same. The language is esoteric. I wouldn't program anything serious in JSON so why would I use LISP, where every semantic is a list..not even a hashmap.
I'm really not going to participate in a cherry-picking contest on PL syntax, and while my experience has shown that Lisp is not for everyone, I'm quite surprised at the hostility shown towards it by you and other people on this page.
I'm not quite sure whether it is just plain trolling or traumatic experiences with Lisp at college. (The latter I can understand, since being allowed to use only a very limited part of the language to solve convoluted problems can be quite off-putting.)
Knee-jerk? There's nothing pretty about having every single semantic of your language need to wrap. That's called syntax hell. And it's useful for when you want to refer to the AST self-referentially, like in live editors, i.e. Emacs & Overtone (music production). Otherwise, it's water trash. Maybe it was cool in the 80s when the only other kid on the block was Fortran or QBasic - but we have better languages now, so we don't need to write our program as a big nested list..we can make it easier for ourselves, and we can be way more productive..well unless..we're some old dude from the 80s..that's stuck on the LISP bandwagon. tears
Damn straight, I went there. And got no replies. Because there's really no good reasons to defend LISP's horrible syntax in non-live programming use-cases.
That's definitely a good feature. Other languages have run-time reflection, compile-time macros, mixins, and polymorphism.
LISP is great for live audio production because of the homoiconicity. It's very suited for dynamic interactive programming, namely Emacs. I believe that's where the zealousness needs to stop. It's not a good language to code large projects in.
Yes, and he's still wrong. You don't write 'directly in parse trees', you write in text that gets converted to parse trees. The conversion is very simple, to be sure, but it still exists.
When writing an S-expression of Lisp code, you're just one quote symbol off writing the list literal that evaluates to the code's parse tree. Modulo whitespace, the parse-tree prints as code.
That's what we mean when we say you write directly in parse-trees. You're writing the string representation of those parse-trees.
Right, but it seems inaccurate to me to say that Lisp code 'has no syntax'. What do you call the reason that 2927(foo"bar. isn't a valid Lisp program?
Alternately: If I augment Python by letting you surround a block of Python code with curly braces in order to get an object representing the corresponding AST, does that mean that I can write Python directly in parse trees? :P
It's always called a "read" error. If you're working at the REPL, it's detected at the Read phase of the Read-Eval-Print-Loop. It is, of course, a read-syntax error.
But the fact that the Read phase can be usefully separated from the Eval phase is a distinguishing feature of Lisp, and why it makes sense to say, at least, that Lisp has no concrete syntax, only abstract syntax. Lisp programmers fluently think in terms of the abstract syntax their code generates, because they read and write in the string representation of that abstract syntax. That's why it makes sense for the Lisp eval function, unlike the eval function in other dynamic languages, to take lists as input rather than strings. That's why a metacircular interpreter in Lisp will be written to input lists. That's why a DSL in Lisp written as a custom evaluation function will be an interpreter for lists. The Lisp language is defined with respect to the list data-structure, rather than strings.
On your Python idea: the problem is that no-one would ever use Python code as the literal representation of Python's abstract syntax.
It would be entirely backward, anyway. Lisp starts as a notation for a data-structure rich enough to represent abstract syntax (symbolic expressions) and then defines a language in terms of this data structure. The philosophy of this is understood when we realise that the roots of Lisp are in metalanguages and logic.
I'm going to guess wildgift was being sarcastic. COBOL syntax is so needlessly verbose it interferes with the understanding of the actual formula. I contend that
This is a fascinating quote. It seems to be a combination of either:
* Greenspun's 10th rule of programming: "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
http://c2.com/cgi/wiki?GreenspunsTenthRuleOfProgramming
* James Gosling quoting Guy Steele: "Lisp is a Black Hole: if you try to design something that's not Lisp, but like Lisp, you'll find that the gravitational forces on the design will suck it into the Black Hole, and it will become Lisp".
http://web.archive.org/web/20091201072950/http://blogs.sun.c...
For simple business rules and logic, I found it useful to just use a table or mini-spreadsheet to represent them. It has strict structure and it's simple to understand and fill in. Business users love them.
We've been working on a programming language that is based on composition. This basically leads to a hybrid data structure (not necessarily tree-structured).
Here is an example (write out 1 2 4 8 16 32 on different lines):
    Application (
        using Library ( name "Vision.Console" )
        action ForEach (
            items 1 2 4 8 16 32
            action WriteLine ( text CurrentItem () )
        )
    )
Parts are upper case and properties are lower case.
Each Part is a class contained within a given framework.
I'm working on a toy programming language at the moment, and I was considering using a LISP dialect of some sort as the IR to be compiled down to. Is this a dumb idea? I had the same sort of epiphany that the OP had, but then I think LLVM IR is a better choice. The only reason I like the LISP-as-IR idea, is that if a programmer wants to get deep and optimise, the tooling can work with lisp instead, which allows for some neat tricks I have in mind.
You're basically recapitulating the history of Lisp, which is that it was originally S-expressions that were the intermediate language for M-expressions, which were the human representation.
They never quite got around to making M-expressions though, people found working in S-expressions to be pretty fine.
Dylan is sort of a compromise around this area though:
It's a sack of parentheses any way you slice it. Of course staring at anything, including parentheses, gets easier the more often you torture yourself. Just like talking with a lisp, you get used to it...hence they call the language LISP; painfully charming.
I was under the impression that the C standard does mandate that comparisons give 1 and 0 for true and false. (Other nonzero things are considered true, but a plain comparison won't return them.)
"Each of the operators < (less than), > (greater than), <= (less than or equal to), and >= (greater than or equal to) shall yield 1 if the specified relation is true and 0 if it is false. The result has type int."
Here you go, simple integer math, no programming necessary :)
That said, this isn't optimisation; multiplication and division aren't intrinsically better than branching.
My exact thought. Moreover, there might be other architectural reasons why he was asked to implement the logic at the higher level. I can see what he is getting at, but not his motives.
Alright, that's one anecdote. But take a look at this one.
In Python:

    a = 3 * 4 + 5

In Lisp:

    (setf a (+ 5 (* 3 4)))
Now maybe it's just me, but balancing all those parentheses and thinking in prefix notation gets a little hairy, especially if we introduce more math into a program (think graphics or ML).
Can't write that in Python. Looks like Python does not support even basic math notation.
Lisp does not even attempt to do that.
We have to write

    (+ (/ 3 4)
       (/ 7 8))

or

    (+ 3/4 7/8)
Let's say we want to write a slightly more complicated expression in Python:
    (- 2*b^2*x^2*sqrt(3)*log(a^(1/3)*(b*x+a)^(2/3) + a^(2/3)*(b*x+a)^(1/3) + a)
     + 4*b^2*x^2*sqrt(3)*log(a^(2/3)*(b*x+a)^(1/3) - a)
     + 12*b^2*x^2*atan((2*sqrt(3)*a^(2/3)*(b*x+a)^(1/3) + a*sqrt(3)) / (3*a))
     + (12*b^2*x - 9*a)*sqrt(3)*a^(1/3)*(b*x+a)^(2/3))
    / (18*a^2*x^2*sqrt(3)*a^(1/3))
Not possible. Python does not know basic math notation for roots, exponents, and so on. Lisp does not know it either.
Balancing parentheses in Lisp is usually done with the help of an editor...
For the first part, it's still more intuitive (at least for me personally) to write

    a = 3/4 + 7/8

than

    (setf a (+ (/ 3 4)
               (/ 7 8)))
and the primary problem that I have with the parentheses is not closing them, but logically seeing how expressions link together. Even if you can click on an end paren and see the expression it closes off (as you can in most IDEs), it would be easier just to not have the extra parentheses there in the first place, so as to avoid the confusion.
It's really hard for me to even tell what you're trying to write for the second part -- in Python or Lisp.
Personally I like the parentheses a lot, since they allow me comfortable editing operations for such expressions. Additionally it makes it easy to computationally manipulate those expressions as data.