
> And that makes me grumpy sometimes.

If you will excuse a moment of cheekiness...

Could be that you have cause and effect backwards here: because you are grumpy you are dismissing 99% of other language developers' work as rubbish. Could be that it's less than 99% and you are overlooking some great ideas.




That's not cheeky, it's a perfectly fair question. Yes, that is certainly possible. And there have been a few cool new ideas that have come along that are not easily subsumed by CL, like Haskell's type system. It's easy to implement Hindley-Milner, but actually using that information to inform the compiler, plus adding laziness as a core language feature, is much harder. But I think the jury is still very much out on whether or not Haskell is really a net win.

But the most popular language on github at the moment is Javascript, and there is no question that it is simply a very badly designed Lisp with C syntax. This is not intended to disparage Brendan Eich. He had a week to design and implement something, and under those constraints he did a pretty amazing job. But I can't help but imagine how different the world would be if he had used Scheme as a starting point.

Was there something in particular that you had in mind that you think I may have missed?


My take is that Language X usually has major deficiencies compared with Language Y for domain Z, for many values of X or Y: Lisp, Scheme, Smalltalk, Forth, Erlang, Haskell, assembler, C, etc.

Confirmation bias makes it easy to bind those variables to values that make one's own favorite language obviously the best and everybody else's infuriatingly, irrationally terrible. If unchecked then this leads to wildly false conclusions, such as that languages fit into a hierarchy of powerfulness ("Blub Paradox.")


What do you see as Common Lisp's "major deficiencies"? (Pick your favorite value for Z.)

This isn't a challenge, I'm genuinely interested in your answer.


I prefer not to. I think it would be more productive as a private thought experiment: what would you expect a Smalltalk hacker to curse about if you forced them to use Common Lisp? A Haskell hacker? A Rust hacker? I think they would miss some really valuable things, and you could easily miss the value of those things if you took them out of their original context (e.g. by considering only whether Lisp would be improved by adopting those specific features).


that should be pretty obvious: lack of library support.

there is the famous example of the reddit founders, who believed pg's lisp story, and built the first version of their site with it. it went so badly for them that they had to start over again in python.

... but i bet you are going to have a very plausible-sounding reason why it didn't work for them.


I have no idea why Lisp didn't work for Reddit. But Common Lisp has exceptionally good library coverage today, and with Quicklisp, getting access to it is virtually seamless.


okay, maybe i betrayed my biases there too much, but i agree with everybody else: if lisp was such a great secret weapon, there would be a hell of a lot more visible success stories by now, other than just pg's original viaweb implementation and some flight routing software.

plenty of other tech has come up from nothing in the last few decades to wide adoption and big successes. the fact that lisp hasn't is, in my mind, prima facie evidence that it is not nearly as great as its proponents claim.


> there would be a hell of a lot more visible success stories by now, other than just pg's original viaweb implementation, and some flight routing software.

There are, but you are ignoring them: either you did not do any research, or the usage is simply hidden.

The success of Common Lisp today is relatively small, but a careful reader could find a few interesting applications of it, like the scheduling system for the Hubble Space Telescope, the design software for the Boeing 747 (and other aircraft), the software for the Roomba, the software for D-Wave's quantum computer, the crew scheduling software for the London Underground, chip design verification software at Intel (and other chip companies), ...

There are some old application platforms which survived. For example, Cyc, an attempt to provide common-sense AI to computers, has been under continuous development since the mid-80s. The company Cycorp has 50+ employees, is very secretive, and you need to guess who pays for it. Customers include, among others, the United States Department of Defense, DARPA, the NSA, and the CIA. They are using it for various applications.

Note also that prototyping software was for a long time an application area for Lisp: have a relatively small team develop a prototype, and make it a product once the idea is validated. Example: Patrick Dussud wrote the core of the first Microsoft CLR (.NET Common Language Runtime) garbage collector in Lisp. The code was then automatically translated to C (IIRC) and enhanced from there after some time. Lisp is no longer used and the GC has a lot of new features, but the first working versions came from that Lisp code.


> if lisp was such a great secret weapon, there would be a hell of a lot more visible success stories by now

Not necessarily. There are other possible explanations of Lisp's relative lack of commercial success, not least of which is the fact that a widespread belief that "there must be something wrong with it because no one uses it" can become (and I think has become) a self-fulfilling prophecy.

But another important factor is that the Lisp community seems to attract people who are really good at tech but really bad at business. I think if someone (or, more likely, some pair of co-founders) could bridge that gap they could still kick some serious ass.


Just a data point: I founded a Lisp startup together with a bunch of experienced Lisp hacker buddies from the SBCL community. Sadly and reluctantly, we found Lisp awkward and ended up rewriting everything in C, and then never looked back.

These days I am developing such software with LuaJIT and that is working much better for me than either C or Lisp.

One thing I learned along the way is that many tales of Lisp heroism are actually anti-paradigms. Once upon a time when I read about ITA Software rewriting CONS to better suit their application I thought it was impressive; now I see it as a farcical workaround for having chosen an ill-suited runtime system and sticking with it (and generally an indictment of Lisp not providing a practical performance model for the heap.)

Lispers are too expert at spinning bugs as features. "It's insanely complex, every line could be an interaction with undefined behavior or a race condition or an unexpected heap allocation" becomes "suitable only in the hands of trained specialists, like a chef's knife or a surgeon's scalpel or a Jedi's light saber."

I feel like we need to have a shared "our emperor didn't have any clothes" moment with regards to Paul Graham's essays.

(I say this as somebody who does love Lisp and will probably do a lot more Lisp work in the future but only on a project that is a peculiarly good fit.)


(Funny feeling of being a Lisp hacker searching for catharsis in the Hacker News comments section... :-))


Don't search, go do something cool that people will talk about!


Tangentially: I am working with LuaJIT these days and this feels really exciting to me. The compiler is a new kind of beast and possibly the beginning of a large Lisp-like family tree. Feels very "MACLISP" to me - exciting!


Some functional languages make certain behaviors implicit, such as partial evaluation and laziness. However, these work better when explicit: one of the two is severely confusing when implicit, and the other potentially performs badly.

  C:\Users\kaz>txr
  This is the TXR Lisp interactive listener of TXR 162.
  Use the :quit command or type Ctrl-D on empty line to exit.
  1> (defstruct integers ()
       val next
       (:postinit (me)
         (set me.next (lnew integers val (succ me.val))))
       (:method print (me stream pretty-p)
         (format stream "#<integers ~a ...>" me.val)))
  #<struct-type integers>
  2> (lnew integers val 0)
  #<integers 0 ...>
  3> *2.next
  #<integers 1 ...>
  4> *2.next.next
  #<integers 2 ...>
  5> *2.next.next.next
  #<integers 3 ...>

Why would I want implicit laziness everywhere? The best of all worlds is to have expressions reduced to their values eagerly before a function call takes place.

When I don't want an expression evaluated in (what looks like) a function call, I can, firstly, make that a macro.

If I really want lazy semantics, I can have a decent vocabulary of lazy constructs that fit into the eager language. For instance for making objects lazily I have lnew, distinct from new.

Implicit laziness everywhere is academically stupid. You're drowning the execution of the code in an ocean of thunks and closures.

The best approach is a pragmatic compromise: keep everything explicit and visible, yet syntactically tidy and convenient.
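The same explicit approach translates outside TXR. Here is a minimal sketch in Python rather than TXR (the `Integers` class and its names are hypothetical, not any real API): the laziness lives in one visible construct, a memoizing `next` property, while everything else stays eager, mirroring the `lnew` example above.

```python
class Integers:
    """Lazy stream of integers: the tail is built only when accessed."""

    def __init__(self, val):
        self.val = val
        self._next = None  # cache for the delayed tail

    @property
    def next(self):
        # Force the tail on first access, then memoize it.
        if self._next is None:
            self._next = Integers(self.val + 1)
        return self._next

    def __repr__(self):
        return f"#<integers {self.val} ...>"

ints = Integers(0)
print(ints.next.next.next)  # forces exactly three tail cells
```

Evaluation is eager everywhere except at the one point marked lazy, which is the compromise being argued for.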


> Why would I want implicit laziness everywhere?

Modularity; see the stone age paper discussed yesterday: https://news.ycombinator.com/item?id=13129540

> Functional programming languages provide two new kinds of glue - higher-order functions and lazy evaluation. Using these glues one can modularise programs in new and exciting ways, and we’ve shown many examples of this.

> This paper provides further evidence that lazy evaluation is too important to be relegated to second-class citizenship. It is perhaps the most powerful glue functional programmers possess.


The paper claims in its conclusion that it has provided evidence (what is more, "further evidence") yet I can't find any in there.

It argues that you can achieve a certain useful separation between two programs when one produces data for the other.

This can be achieved in a very satisfactory way with explicit streams (i.e. lazy lists). It can also be achieved with delimited continuations, coroutines, threads, and often with lexical closures. Not to mention Icon-style generators.

Lazy lists can be incorporated into the language so that their cells are first-class objects and substitute smoothly for regular eager cells. (Thank you, OOP.)

The paper is actually wrong there, because laziness alone will not provide the kind of separation where g can begin executing and f then executes only when an item is required. Not for an arbitrary f! Suppose f traverses a graph structure recursively and yields some interesting items. Lazy evaluation alone isn't going to allow the f traversal to behave as a coroutine controlled by g, proceeding only as far as g continues to be interested in further items. The author is attributing to lazy evaluation magical powers that it doesn't have.
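What a generator gives you here can be sketched in Python (the `traverse` function and the example graph are hypothetical, chosen only to illustrate the point): the recursive producer suspends at every yield, so the consumer decides how far the traversal proceeds, which is the coroutine-style control being described.

```python
def traverse(node, graph, seen=None):
    """Recursive depth-first traversal that yields nodes one at a time.

    Execution is suspended between yields, so the consumer (g) drives
    how deep the producer (f) actually goes.
    """
    if seen is None:
        seen = set()
    if node in seen:
        return
    seen.add(node)
    yield node
    for nbr in graph.get(node, ()):
        yield from traverse(nbr, graph, seen)

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d']}
gen = traverse('a', graph)
first_two = [next(gen), next(gen)]  # traversal pauses inside the recursion
```

If the consumer stops after two items, the rest of the graph is never visited; that behavior comes from the generator protocol, not from lazy evaluation of expressions.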


> Why would I want implicit laziness everywhere?

For the same reason you want automatic memory management: so you can fob off the job of figuring out where the thunks should go onto the compiler, just as you fob off the job of figuring out where the calls to malloc and free should go. At least that's the theory. It seems plausible to me. I think it's an open question whether my failure to grok Haskell is due to a problem with Haskell or the ossification of my brain.
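The job being fobbed off can be made concrete by doing it by hand. A minimal sketch in Python (the `Thunk` class and `f` are hypothetical): call-by-need is a delayed computation that runs at most once, and without compiler support every delay and every force must be written out at each use site.

```python
class Thunk:
    """Call-by-need by hand: delay a computation, run it at most once."""

    def __init__(self, fn):
        self.fn = fn
        self.forced = False
        self.value = None

    def force(self):
        if not self.forced:
            self.value = self.fn()
            self.forced = True
        return self.value

def f(x, use_it):
    # The caller wrapped the argument; the callee must remember to force it.
    return x.force() if use_it else 0

# The expensive computation is never run if the argument goes unused.
f(Thunk(lambda: sum(range(10**6))), False)
```

A lazy language inserts the equivalent of `Thunk(...)` and `.force()` automatically, the way a GC inserts the equivalent of malloc and free.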


It's not the same. Here is why: the program correctness doesn't depend on when (or even whether!) that automatic memory management happens. Lisp systems have been bootstrapped without having a working garbage collector upfront. Short-lived Lisp images run as processes in a conventional OS might never have a chance to collect garbage.

Laziness has precise semantics which have to unfold properly, or else things don't work.

Delaying evaluation is not the same thing as delaying reclamation. They are opposite in a sense, because we only allow something to be reclaimed when it is "of no value".


Not trying to troll you here...

To what degree do you use Lisp as an FP language? As a pure FP language? The forced purity may make Haskell a very different language.

And, to return to your original complaint: If you dislike new languages, I bet XML drives you straight up the wall...


> To what degree do you use Lisp as an FP language? As a pure FP language?

It depends on what I'm doing, but I generally write in an OO style more than a functional style. Real problems have state.

> If you dislike new languages

I want to be clear that this is just a general observation. I don't dislike new things because they are new, I tend to dislike them because they are generally bad. But they are not all bad. Clojure is cool. WebASM is very cool. The work that has been done on Javascript compilers is nothing short of miraculous (even though the language itself still sucks).

> I bet XML drives you straight up the wall

Kind of, but not really. Yes, I dislike XML because it is nothing but S-expressions with a more complicated syntax. But it doesn't drive me up a wall because when I need to deal with XML I just parse it into S-exprs, do what I need to do, and render the results back into XML.
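That round trip is easy to sketch. Assuming a Python setting rather than Lisp (the `to_sexpr` and `to_xml` helpers are hypothetical, and the text handling is simplified), XML maps onto nested lists and back:

```python
import xml.etree.ElementTree as ET

def to_sexpr(elem):
    """Convert an XML element into a nested list: [tag, child-or-text, ...].
    Simplified: only text directly inside a leaf-ish position is kept."""
    children = [to_sexpr(c) for c in elem]
    if elem.text and elem.text.strip():
        children.insert(0, elem.text.strip())
    return [elem.tag] + children

def to_xml(sexpr):
    """Render a nested list back into an XML string."""
    tag, rest = sexpr[0], sexpr[1:]
    body = ''.join(r if isinstance(r, str) else to_xml(r) for r in rest)
    return f'<{tag}>{body}</{tag}>'

tree = ET.fromstring('<a><b>hi</b><c/></a>')
s = to_sexpr(tree)      # nested lists, ready for ordinary list processing
back = to_xml(s)
```

Once the data is in nested-list form, all the usual list-manipulation machinery applies, which is the point of the "parse, process, render" workflow described above.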


I dislike XML because it's a nested tree of internal nodes and typeless character string leaves.

In XML I have no way to place "255" and "FF" in such a way that XML understands them to be the same object, of integer type.


Sure you do. <base10>255</base10> <base16>FF</base16>


Pure FP deals with state!


Sure, everything non-trivial is Turing-complete. But FP begins as a stateless paradigm and then tacks on state as a sort of a kludge while all the while seeming to be a little embarrassed about it, while OO embraces state from the beginning as part and parcel of the mental model that it endorses. I find the OO model has a better impedance match to my brain and the real world. Reasonable people can (and do!) disagree.


State is something that is best just embraced rather than "dealt with".


An analogy: OO doesn't embrace IO either. Yet weirdly enough, that makes OO IO better than older languages that have built-in commands to write to disk.

Haskell doesn't have built-in state, but you have multiple models to choose from, from simple folds to STM or the State, Reader, and Writer monads, all of which serve different purposes and do different jobs well.


OOP absolutely embraces I/O. I/O begs to be OOP and makes, hands down, the best use case for illustrating OOP.


What I am getting at is that OOP languages like C++ have no IO commands built in; it is all delegated to libraries.

Haskell has no state support built in, it is all delegated to libraries.

So:

C++ has excellent IO support, but the language doesn't embrace IO at all.

Haskell has excellent State support, but the language doesn't embrace State at all.


C++ I/O libraries in fact depend on the sequencing semantics built into the language. If we make two calls to the library, they happen in that order; consequently, the I/O happens in that order. We can do wrong things like:

    f(cout << x, cout << y)
where we don't know whether x is sent to cout first or y.

I/O statements could be added to C++ (e.g. as a compiler extension). They would be straightforward to use; C++ doesn't inherently reject sequencing and state the way Haskell and its ilk do.


Haskell doesn't reject sequencing. f = g . h will require h is evaluated first.


h might not be evaluated at all! Consider

    h x = factorial x
    g x = 0
(but I agree that Haskell doesn't reject sequencing).


Yeah, the IO monad will, but it isn't generally true of monads. In fact, the Maybe monad will cease early on Nothing by design. So it is a brain shift.


> Haskell has excellent State support

Someone who understands where that support is and how to use it should rewrite atrocities like:

https://rosettacode.org/wiki/Assigning_Values_to_an_Array#Ha...


I don't think that's so bad; it's just that you are using a function instead of the usual built-in indexing operator [0]. Here's something a bit more convenient using Data.Array.Lens [1].

    arr ^. ix 2
vs the original:

    readArray arr 2


Failure to grok Haskell is probably due to the lack of easily accessible literature on it. Plus, with so few other people grokking it, there isn't as much osmosis available.


> But I can't help but imagine how different the world would be if he had used Scheme as a starting point.

I've heard that he did in fact use Scheme, but the suits insisted on the whole curly-brackets-and-semicolons thing so they could call it Javascript and piggyback on Sun's marketing efforts.


To be fair, he only took some of the semi-colons.


He took an option on the semi-colons.



