D-Expressions: Lisp power, Dylan style [pdf] (csail.mit.edu)
89 points by fanf2 8 months ago | 61 comments



I’ve had a change of heart over the years regarding macros. Prefix syntax, as in Common Lisp, makes it really easy to write macros without understanding how programming language syntax and grammars really work. As more of a “computer” person than a “computer science” person, that was super appealing.

But that’s probably optimizing for the wrong case. Macros are used much more often than they are written, and the person writing the macro should probably understand syntax and grammar. Moreover, as soon as you add guardrails like hygiene (like Scheme does), the inherent conceptual simplicity of Common Lisp style macros is greatly reduced.

Too bad Dylan never took off. I think it would have been a better language than Java for enterprise software. Ironically, it was a creature of its time insofar as it focused on something (trying to match C for performance) that ultimately didn’t end up mattering so much. We now write everything in one of the worst and least optimizable languages this side of TCL, and that doesn’t hold back deployment.


It is interesting to me that prefix notation gets brought up so much in the context of Lisp. Most operations in most languages use prefix notation (function calls, procedure calls, method calls, and most macro systems). The only thing that makes Lisp unique in this context is that it also makes arithmetic operations prefix. This is just the natural side effect of treating "primitive operators" as first class functions. It's something that other languages that treat these operators as non-primitive constructs awkwardly work around, like some syntax magic in Haskell or operator overloading in other languages.

What makes Lisp macros so intuitive is not the prefix notation, it's the homoiconicity of the language. Writing a macro is just a list operation, where the familiar map / filter / reduce applies.
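
For example, the body of a macro is just ordinary list-processing code. A minimal Common Lisp sketch (the square-all macro name is purely illustrative):

    ;; The expansion is assembled with mapcar over the argument list --
    ;; plain list processing, as described above.
    (defmacro square-all (numbers)
      `(list ,@(mapcar (lambda (n) `(* ,n ,n)) numbers)))

    ;; (macroexpand-1 '(square-all (1 2 3)))
    ;; => (LIST (* 1 1) (* 2 2) (* 3 3))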


> What makes Lisp macros so intuitive is not the prefix notation

The culture is the ultimate "reality distortion field." Prefix notation would be seen as intuitive, if that were culturally established. We'd see something like PEMDAS as arbitrary and silly.

Just look at how much content there is around PEMDAS and interpretation of math problems. Clearly, it really isn't "intuitive." We just have this enshrined in the culture. (That said, one of the biggest UX mistakes the designers of Smalltalk made, was to eschew operator precedence!)


My intention wasn't to argue whether or not prefix notation is intuitive. The point I wanted to make was the intuitiveness of Lisp macros is mostly unrelated to the use of prefix notation. Homoiconicity matters a lot more for macros.


I love prefix-notation for arithmetic, not just because it prevents the need for disambiguating precedence.

Subjective, I know, but I think (+ 1 2 3 4) is way nicer than (1 + 2 + 3 + 4).


That's the usual issue: people want their language to be a glorified pocket calculator, so they can type their usual formulas as-is, while lisp rapidly grew a culture of searching for new and larger abstractions. I know some people who would rather write an imperative for loop over an external accumulator than (+ nums...) or (fold + nums).

I keep trying to pinpoint the psychology of it. Because even as a kid, I was more attracted toward HP calcs even though I had never heard of lisp at that point, but RPL felt like infinite magic. And of course when I ran into emacs, I felt the same strange feeling.. there's something.


FORTH go PROSPER and


My problem with prefix, is simply that my mind does not think in prefix.

If I have to write 1 + 2 * 5 - 3, I'm not going to "start" with *.

I will type:

   (+ 1 2<-<-(* ^E 5) <-<-<-<-<-<-<-(- ^E 3))
It's indicative of how I'm thinking about it (regardless of how the navigation is done).

I just have a lot of those starts and stops when converting infix to prefix in my tiny Piglet brain.

That said, as a general rule, yea, I like (+ 1 2 3 4) and its ilk. It takes a bit of exploration to read. For example, I need to transit the entire list of 1 2 3 4 to understand all that is being added, which can be more difficult with longer expressions. But that can be mitigated with formatting:

  (+ (this that)
     (func zzz)
     (if (< v q) 1 3))
(Function conditions are also fun to intermix into things!)

I think just as a rule, I find formatting more important in general with s-expr systems than algolesque systems.


Where this happy infix world goes wrong is precedence rules, especially where they differ between languages.

The numerical operators, with the Boolean operators, negation, comparisons, maybe implicit type conversions as well. Maybe && as separate from &. Maybe user defined operators with different precedence to the builtin ones.

Maybe the large precedence table you have to keep in working memory to use the infix conventions is still simpler. Maybe even when there's a style guide saying to put lots of parentheses in as well. It starts to be difficult to fit a definition of simple to the observations though.


> If I have to write 1 + 2 * 5 - 3

That's a great example of an expression that can be ambiguous, but not with prefix/suffix notation.

Most people that write lisp-like code use an editor that helps with s-exp editing (like paredit), so that isn't really a significant issue. In fact, I think it is faster to write s-exp code than algol-likes once you've become accustomed.

I agree that formatting is very important w/ parens langs, but there are many languages that consider formatting a high concern.

I would also argue that reading code is harder than writing it. Optimizations that speed up writing code are less interesting to me than ones that make reading, understanding, and making assertions about code easier.

Finally, infix notation is available in lisp/scheme, it's just a macro away (but seriously, don't).
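
For what it's worth, here's roughly what that macro might look like: a minimal precedence-climbing sketch in Common Lisp (the infix name and *precedence* table are just illustrative, not anything standard), assuming a well-formed alternating operand/operator sequence:

    ;; Tiny precedence table: a higher number binds tighter.
    (defparameter *precedence* '((+ . 1) (- . 1) (* . 2) (/ . 2)))

    (defun parse-infix (tokens min-prec)
      "Precedence climbing: returns the prefix form and the unconsumed tokens."
      (let ((lhs (pop tokens)))
        (loop while (and tokens
                         (>= (cdr (assoc (first tokens) *precedence*)) min-prec))
              do (let ((op (pop tokens)))
                   (multiple-value-bind (rhs rest)
                       (parse-infix tokens (1+ (cdr (assoc op *precedence*))))
                     (setf lhs (list op lhs rhs)
                           tokens rest))))
        (values lhs tokens)))

    (defmacro infix (&rest tokens)
      (nth-value 0 (parse-infix tokens 1)))

    ;; (infix 1 + 2 * 5 - 3) expands to (- (+ 1 (* 2 5)) 3) and evaluates to 8.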


I find value in ease of "local" modifications to code. My editor being able to indent as soon as I type or paste code is a huge time saver. What's an even bigger time saver is my editor being able to perform tree operations on tree-like data.

When using "curly brace languages", what I really miss is structural editing. I often deal with tree-like structures in any language, and being unable to simply cut a node, move my cursor to the start/end of a node, nest something, etc. is really inconvenient. These are operations that take me at most a second when using Emacs with smartparens. In JSX, for example, these one-second operations need me to imitate smartparens' behavior by hand then run Prettier since indenting by hand is an even bigger waste of time. Transforming e.g.

  <Foo>
    <Bar>
      <Baz />
    </Bar>
  </Foo>
to

  <Foo>
    <Bar />
    <Baz />
  </Foo>
takes *seconds* even though the equivalent operation with S-expressions is a single keybind in Emacs.


> My problem with prefix, is simply that my mind does not think in prefix.

Mine, neither, for maths.

But for everything else, we’re both already used to prefix:

    fprintf(STDOUT, "foo %d", 3);
is prefix, just as much as this is:

    (format *standard-output* "foo ~d" 3)
And it turns out that the advantages of a single homoiconic notation are so compelling that it’s worth a little bit of mild ugliness when dealing with maths.


For me, it's concatenative languages. Grammar so simple that BNF is of dubious usage.

4 3 2 1 + + +


I prefer [1,2,3,4].sum that you can get in ruby out of the box, but `1 2 3 4 sum` would look great too.

Now if we want to handle compound expressions, you either need a fixed, finite number of arguments, or a reserved word to mark grouping, much like `)`. A bit like stack-based programming languages.


When dealing with pure functions, there is also no ambiguity for f(g(1), h(i(2), 3), j(4)). In both cases, prefix notation removes ambiguity.


Method calls are infix:

    target.method(argument, another)
Here, the operation is `method` and the operands are `target`, `argument`, and `another`. Note that the operation appears in the middle of the operands.


Fair point. I was considering `target.method` as the operator.


It depends on the language's semantics. Sometimes (as in Python), `target.method(a, b)` really is two separate operations, `target.method` (which returns a bound function) and then `<thing>(a, b)` which applies arguments to the result.

Even then, it's still not a prefix operation. It's postfix.


Whew. (Wipes sweat off brow).

Things like this are mentioned, in some detail, in the O'Reilly book Python in a Nutshell, IIRC, in the first edition, which is what I have.

Couldn't quite wrap my head around all the material in that chapter, TBH.

But I still like to read about such stuff, and I do understand bits and pieces of it.


In Python, the "." in "target . method" is an infix operator.

Note that the "(" in (a, b) is also an operator. (There is some special parsing because otherwise you could do things like args = (1, 2, b=3); target.method args.)

As to whether <thing from target.method><thing from (a, b)> is postfix, I'm not sure how you get there. Yes, there's an implicit funcall at the end, but there's something similar with (+ a b) or ((. target method) (arglist a b)) and we wouldn't call them postfix.


You're right. Function application is essentially an infix `(` operator. Not sure how I got that wrong.


Not really because the ordering is unambiguous given the parens. No need for operator precedence rules like PEMDAS.


Oh, you still have to worry about precedence. Consider:

    var x = 123
    print(-x.abs())
Or even:

    print(-123.abs())
What do those print? Do they print the same thing?


Unary operators are still operators. The integer parsing rules are probably different. In Lisp, -x would be a symbol; the proper way to negate x would be (- x).


I think we're in agreement. `-` and `.` are both operators and the language and user have to understand the relative precedence of them.


> The only thing that makes Lisp unique in this context is that it also makes arithmetic operations prefix

I think that most problematic for prefix notation are not arithmetic operations, but field accessors.

In C you have a->b->c->d, in Lisp it would be (d (c (b a))), which makes you jump to the center and read it from the inside out.


The direct translation of the C expression to Lisp would actually be (-> a (-> b (-> c d))), which is fine to read from left to right as well. As lispm said, this can be reduced to (-> a b c d) due to the flexibility of prefix operations.

The direct translation of the Lisp expression you shared to C would be: d(c(b(a))), which just like the Lisp expression, evaluates from the middle out. Both are fine to read from left to right though.


> The direct translation of the C expression to Lisp would actually be (-> a (-> b (-> c d)))

Not really. -> looks like a binary infix operator, but it is really a unary postfix operator (parametrized by the field), because the field itself is not a first-class entity in C.


Either -> is a binary operator or it is not an operator in the expression at all, and the unary postfix operators are ->b, ->c, and ->d. In the latter case, that gives us (->d (->c (->b a))) in Lisp, which I can see how you could get, but I don't agree.

    #include <stdio.h>
    #include <stdlib.h>

    struct foo {
      int a;
    };

    int main() {
      struct foo *f = malloc(sizeof(struct foo));
      f->a = 5;
      printf("%d\n", f->b);
      free(f);
      return 0;
    }


    test.c:11:21: error: no member named 'b' in 'struct foo'
       11 |   printf("%d\n", f->b);
          |                  ~  ^
    1 error generated.
What do you mean when you say fields are not first class in C?


In Lisp one could have (-> a b c d), just like (+ (+ 2 3) 4) is (+ 2 3 4).
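
That variadic -> is itself only a few lines; a minimal sketch of a Clojure-style thread-first macro written in Common Lisp (illustrative, not a built-in operator):

    ;; Threads the first argument through the remaining forms, left to right.
    (defmacro -> (x &rest forms)
      (reduce (lambda (acc form)
                (if (listp form)
                    (list* (first form) acc (rest form))  ; insert as first argument
                    (list form acc)))                     ; bare accessor: (form acc)
              forms
              :initial-value x))

    ;; (macroexpand-1 '(-> a b c d))  => (D (C (B A)))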


> Most operations in most languages use prefix notation

Yeah, and IMHO most operations in most languages suffer from being mostly prefix. Yes, LISP is not that much worse, but doubling down on the bad part doesn't exactly recommend it. ¯\_(ツ)_/¯

One of the cool things about Smalltalk is that it consistently makes everything infix.


> as soon as you add guardrails like hygiene (like Scheme does), the inherent conceptual simplicity of Common Lisp style macros is greatly reduced

Clojure offers a nice middle-ground to this. In Clojure, symbols are namespaced, and syntax quote "fully qualifies" symbols. A "fully qualified" symbol is a symbol with its namespace explicitly written out. For instance, foo is unqualified whereas clj.user/foo is fully qualified. The language disallows binding fully qualified symbols. So the most common bugs arising in unhygienic macros are eliminated while maintaining the same level of "simplicity" as macros in Common Lisp.

There are also other features to help ensure hygiene, such as syntax that automatically produces gensyms for you. e.g. instead of

  (let [foo-sym (gensym)]
    `(let [~foo-sym some-value]
       (do something with ~foo-sym)))
one can simply write

  `(let [foo# some-value]
     (do something with foo#))


Actually it was my experience with Tcl back at our 2000's startup, continuously porting modules from Tcl down into C, that formed my opinion that languages without either JIT or AOT on their reference implementation, as mostly suitable for scripting and teaching purposes.


Haha—sorry, I didn’t mean to shit on TCL. It’s got no pretensions of being something it’s not.


Er. You don't specify what the opinion you formed is.

Should that have been:

"... languages without either JIT or AOT on their reference implementation, are mostly [only?] suitable for scripting and teaching purposes."

... ?


Not necessarily in the reference implementation, but quite early:

Adam Sah, UCB: An Efficient Implementation of the Tcl Language (1994, UCB Masters Degree Thesis for Professor John Ousterhout):

https://www2.eecs.berkeley.edu/Pubs/TechRpts/1994/CSD-94-812...

After Ousterhout and his team went to Sun, but before the Java Juggernaut made its debut, Sun was positioning TCL to be the "Official Scripting Language of the World Wide Web".

Brian T. Lewis, Sun Microsystems Labs: An On-the-fly Bytecode Compiler for Tcl (1996, Usenix TCL/Tk Workshop):

https://www.usenix.org/legacy/publications/library/proceedin...

I wonder what would have happened if John Ousterhout's TCL team had applied Dave Ungar's Self team's JIT tech to TCL, before the Self team left Sun and made HotSpot (whom Sun then hired back to apply it to Java). Anyone know if / how those two teams overlapped / interacted at Sun Labs?

Then "The TCL War" happened, which didn't help TCL's world domination plans either:

https://vanderburg.org/old_pages/Tcl/war/

https://news.ycombinator.com/item?id=12025218

Slightly Skeptical View on John K. Ousterhout and Tcl:

https://softpanorama.org/People/Ousterhout/index.shtml

>There was some concerns about TK future. See for example the following message from [Python-Dev]

>FYI ajuba solutions (formerly scriptics) acquired by interwoven

    On Tue, Oct 24, 2000 at 07:07:12PM +0200, Fredrik Lundh wrote:
    >I'm waiting for the Tcl/Tk developers to grow up -- they still
    >fear that any attempt to make it easier to use Tk from other
    >languages would be to "give up the crown jewels" :-(
>In the long run this has probably harmed Tk seriously. If Tk was just a widget set divorced from Tcl, then it might have been chosen as the underlying widget set for GNOME or KDE, and then have benefited from the development work done for those projects, such as the GNOME canvas enhancements, which now can't be absorbed back into Tk without a lot of effort to merge the two sets of code.

The genius of TCL/Tk:

https://news.ycombinator.com/item?id=22709478

DonHopkins on March 28, 2020 | on: Is there any code in Firefox (as of 2020) that com...

The genius of TCL/Tk, and the reason I believe Tk was so incredibly successful despite the flaws and shortcomings of TCL, is that toolkits like Motif, based on the X Toolkit Intrinsics, that aren't written AROUND an existing extension language, end up getting fucked by Greenspun's Tenth Rule:

https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

>"Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp." -Philip Greenspun's Tenth Rule

The X Toolkit ends up needing to do all of these dynamic-scripting-language-like things, like resolving names and paths, binding events to handlers, calculating expressions, and instantiating objects based on resource data. So if it doesn't already start out with a standardized scripting language to use for that, it has to duplicate all that dynamic almost-but-not-quite-entirely-unlike-half-of-Common-Lisp stuff itself.

Case in point: Motif's infamous UIL (because X resource files weren't half-assed enough).

https://www.donhopkins.com/home/catalog/unix-haters/hpox/uil...

And then when you do get around to plugging that toolkit into some other scripting language (the way WINTERP plugged Motif/Xt into XLisp, or GTK/GObject plugs into Python for that matter), now you have two or more fat bloated complex buggy poorly documented incompatible impedance-mismatched half-assed competing layers of middleware and object models tripping over each other's feet and spraying each other with seltzer bottles like the Three Stooges.

https://www.youtube.com/watch?v=VO9RP4QEZKU

https://news.ycombinator.com/item?id=22610342

>Speaking of the plague, does it support UIL? ;)

>Niels Mayer described WINTERP (XLisp + Motif) as: You might think of such functionality as "client-side NeWS without the postscript imaging model".

http://nielsmayer.com/winterp/

>Don Hopkins wrote to a comp.human-factors discussion about "Info on UIL (User Interface Language)" on January 22, 1993:

https://groups.google.com/d/msg/comp.human-factors/R3wfh90HM...

>Here are some classic messages about UIL. Avoid it like the plague.

[...]


It's not the prefix system per se; what's so nice is that code and data are sufficiently similar that you don't have to context-switch when writing macros.

Your argument seems to be that Macros ought to be harder to write in exchange for being easier to use? What non-lisp macros are easier to use than lisp macros, and how?


I think his argument is actually that the extreme simplicity of defining a macro in CL hides the fact that writing correct macro code isn't very simple. Using gensyms to create bindings in macros is an example - it's not obvious, but often critical for getting macros to behave nicely. Things like accidentally capturing variables, multiple evaluation, etc. make "real world" macros difficult to write. Not insurmountable, of course, but more difficult than just basic list processing.
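
As a concrete illustration, here's the classic multiple-evaluation pitfall and its gensym fix, as a minimal Common Lisp sketch (the macro names are just for illustration):

    ;; Buggy: the expansion repeats FORM, so (square-buggy (incf x))
    ;; increments X twice.
    (defmacro square-buggy (form)
      `(* ,form ,form))

    ;; Fixed: bind FORM once to a fresh gensym; the generated name also
    ;; cannot collide with (capture) any variable at the call site.
    (defmacro square (form)
      (let ((tmp (gensym)))
        `(let ((,tmp ,form))
           (* ,tmp ,tmp))))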

Ideally that has no impact on how easy it is to use a macro, except to the extent a bug would make the macro hard to use.


> We now write everything in one of the worst and least optimizable languages this side of TCL, and that doesn’t hold back deployment.

Javascript?

If so, why do you say it's one of the least optimizable languages?

Do you mean for ahead-of-time, single-point-in-time static compilation, rather than code evolving within a JIT that's afforded real runtime statistics?

I don't have numbers but as an old optimizing compiler person I'm asking because it's plausible you know datasets i have not seen.


I assumed they were referring to Python which is after javascript arguably the most used language and also known to be one of the slowest.

https://niklas-heer.github.io/speed-comparison/

Honestly though, you could probably take your pick. Javascript is surprisingly fast for such a dynamic interpreted language, but PHP, Python and Ruby are all simultaneously some of the most used languages and the slowest.


Implementations interpret, languages don't; PyPy compiles and is about half the speed of V8. But one should put micro trust in a micro benchmark in general.


You just reminded me that I've not seen Jython or IronPython for years… now curious how they fare!


Julia is the spiritual successor to Dylan in many ways: multiple dispatch, Lisp heritage, and a full-power macro system with gensyms and local capture.

It's a bit type-cast as a language for numerics and scientific programming, a niche where it's enjoying robust success.

But as a language, it's fully suited to general purpose programming, providing an excellent experience in fact. The ecosystem for most applications which aren't in the existing niche is somewhat thin, but that's a chicken-and-egg problem. It has solid package management, a good concurrency story, well-designed FFI, and performance-sensitive parts of a program can be honed to native speed by making them type-stable.

Slept on imho.


From 1999. How many times are you going to do the same Scheiße over again?

I am telling you, Mulisp (1978) solved all problems, because everything could be pretty-printed either in Algol-style or Lisp-style.


+1, I guess. Computers are capable enough that the representation in which code is viewed/edited need not be the same as it is canonically stored in, so long as one can unambiguously transform back & forth. We could potentially get over the bikeshedding by letting everyone configure their IDEs per their own taste for syntax.
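
A toy illustration of that "transform back & forth" idea in Common Lisp (the function name is illustrative): store the code as S-expressions and render an infix view on demand.

    ;; Render a prefix arithmetic form as infix text, fully parenthesized so
    ;; it stays unambiguous and can be read back without precedence rules.
    (defun unparse-infix (form)
      (if (atom form)
          (princ-to-string form)
          (format nil "(~{~a~^ ~})"
                  (list (unparse-infix (second form))
                        (first form)
                        (unparse-infix (third form))))))

    ;; (unparse-infix '(- (+ 1 (* 2 5)) 3))  => "((1 + (2 * 5)) - 3)"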


Nim (https://nim-lang.org/) originally had "syntax skins" with thoughts like these in mind.. e.g. braces vs. indent for block structure as per user-preference. The particular feature was unused enough to be dropped as not worth maintenance, though.

Also, Mathematica since forever (maybe version 1.0 in 1988) has had things like `CForm` and `FortranForm`.


Kaleida Labs (a joint venture of Apple and IBM) developed ScriptX, which was a cousin of Dylan: a lisp-like language with a "normal" syntax without all the parens, with a CLOS-like (without all the MOOP stuff) object system with generic dispatch, multiple inheritance, proxies, and a "Bento" persistence system (from OpenDoc), and container and multimedia libraries that leaned heavily into multiple inheritance. (You'd typically mix arrays or dicts into your collections of other kinds of objects. So you could directly loop over, filter, and collect your custom classes.)

Its parser was a separate layer from its compiler, so Dan Bornstein (one of the ScriptX designers, who later made Dalvik for Android) wrote a Scheme parser front end for it.

ScriptX influenced MaxScript, the scripting language in 3D Studio Max, which was written by one of the ScriptX designers, John Wainwright. Other Kaleidan Lisp hackers include Shel Kaphan (Employee #1 at Amazon) and Eric Benson (who worked on Lucid Emacs); both went to Amazon and did a lot of Lisp and Lisp-inspired stuff there.

https://en.wikipedia.org/wiki/ScriptX

Shel and others wrote about Lisp at Amazon and their Lisp-inspired templating notation here:

https://news.ycombinator.com/item?id=12437483

Kaleida's ScriptX training classes were lots of fun: taught by Randy Nelson, who is a professional juggler and former member of The Flying Karamazov Brothers, who Steve Jobs hired to teach developers at NeXT and Apple:

https://web.archive.org/web/20190310081302/https://www.cake....

https://news.ycombinator.com/item?id=18772263

I used John Wainwright's MaxScript plugin API to integrate the C++ character animation system code I wrote for The Sims into 3D Studio Max, to make an animation content management system and exporter in MaxScript, which is like Lisp without parens for 3D:

https://web.archive.org/web/20080224054735/http://www.donhop...

Dan Ingalls's work on Fabrik inspired a lot of the stuff I did with ScriptX at Kaleida:

https://news.ycombinator.com/item?id=29094633

Apple also developed Sk8, which was a lot like Dylan and ScriptX, i.e. Lisp without all the parens, plus objects.

https://news.ycombinator.com/item?id=38768635

Mikel Evins explained Coral Common Lisp and Dylan and Newton and Sk8 and HyperCard in the broader context and palace intrigue of Apple:

https://news.ycombinator.com/item?id=21846706

mikelevins on Dec 20, 2019 | on: Interface Builder's Alternative Lisp Timeline (201...

Dylan (originally called Ralph) was basically Scheme plus a subset of CLOS. It also had some features meant to make it easier to generate small, fast artifacts--for example, it had a module system, and separately-compiled libraries, and a concept of "sealing" by which you could promise the compiler that certain things in the library would not change at runtime, so that certain kinds of optimizations could safely be performed.

Lisp and Smalltalk were indeed used by a bunch of people at Apple at that time, mostly in the Advanced Technology Group. In fact, the reason Dylan existed was that ATG was looking for a Lisp-like or Smalltalk-like language they could use for prototyping. There was a perception that anything produced by ATG would probably have to be rewritten from scratch in C, and that created a barrier to adoption. ATG wanted to be able to produce artifacts that the rest of the company would be comfortable shipping in products, without giving up the advantages of Lisp and Smalltalk. Dylan was designed to those requirements.

It was designed by Apple Cambridge, which was populated by programmers from Coral Software. Coral had created Coral Common Lisp, which later became Macintosh Common Lisp, and, still later, evolved into Clozure Common Lisp. Coral Lisp was very small for a Common Lisp implementation and fast. It had great support for the Mac Toolbox, all of which undoubtedly influenced Apple's decision to buy Coral.

The Newton group used the new language to write the initial OS for its novel mobile computer platform, but John Sculley told them to knock it off and rewrite it in C++. There's all sorts of gossipy stuff about that sequence of events, but I don't know enough facts to tell those stories. The switch to C++ wasn't because Dylan software couldn't run in 640K, though; it ran fine. I had it running on Newton hardware every day for a couple of years.

Alan Kay was around Apple then, and seemed to be interested in pretty much everything.

Larry Tesler was in charge of the Newton group when I joined. After Sculley told Larry to make the Newton team rewrite their OS in C++, Larry asked me and a couple of other Lisp hackers to "see what we could do" with Dylan on the Newton. We wrote an OS. It worked pretty well, but Apple was always going to ship the C++ OS that Sculley ordered.

Larry joined our team as a programmer for the first six weeks. I found him great to work with. He had a six-week sabbatical coming when Scully ordered the rewrite, so Larry took his sabbatical with us, writing code for our experimental Lisp OS.

Apple built a bunch of other interesting stuff in Lisp, including SK8. SK8 was a radical application builder that has been described as "HyperCard on Steroids". It was much more flexible and powerful than either HyperCard or Interface Builder, but Apple never figured out what to do with it. Heck, Apple couldn't figure out what to do with HyperCard, either.


> We could potentially get over the bikeshedding by letting everyone configure their IDEs per their own taste for syntax.

We Smalltalkers were discussing doing this at Camp Smalltalks in the 2000's.

I'm currently working in golang, and I've noticed that the Goland IDE expends quite a bit of compute indexing and parsing source files. Not only that, but a significant portion of the bug fixes have to do with this, and the primary motivation for restarting Goland has to do with stale indexing.

Wouldn't tools like git simply work better if they were working off of some kind of direct representation of the semantic programming-language structures? Merging could become 100% accurate, for one thing. (Text-based merging isn't, in some edge cases, though many might mistakenly think otherwise.)


> Merging could become 100% accurate, for one thing.

How so? Merge conflicts don't arise from the inability to locate the proper change, but from the inability to decide which of the changes, if any, would be proper.


Before that, McCarthy's own Lisp 2 added Algol syntax (1965-ish?), and then Vaughan Pratt (of "Pratt parser" fame) came up with CGOL in 1973.


There's a long tradition of people asserting Lisp's problem was the lack of infix syntax, then discovering this was not in fact a problem of Lisp.


How does Mulisp know how to print an arbitrary Lisp-style macro in Algol style?


Obviously such an Algol program produces Lisp code and looks incomprehensible.

But when Mulisp finally conquers the world, we can make a backquote-style macro for Algol, where special characters indicate that the following stuff needs to be evaluated and inserted into the code.


> But when Mulisp finally conquers the world...

That will be never. If Mulisp hasn't done so in the last 46 years, it never will.


it was finnish deadpan humor


I asked an LLM and it suggested humor is forbidden here on English days ending in "y" because of Poe's law. This is why we can't have nice things.


I don’t know, but I found the muLISP/muSTAR-80 Artificial Intelligence Development System Reference Manual on the Wayback Machine.

https://web.archive.org/web/20140530042250/http://maben.home...


Seems like Nim is the living embodiment of this paper. Similar expression-based syntactic concepts, similar AST dumps.


I have a special fondness for this paper. This was my first real paper I digested successfully. It is easy to read, posits a cool idea, and shows some of what makes it tick.

My one qualm is that it does not elaborate on implementation as much as I'd like.


i'll add to the general discussion about syntaxes and infix notation, i'm not sure where to attach this point to existing threads. genera had an infix mode that was enabled by a reader macro. you could write something like

    (defun bresenham (a x0 y0 x1 y1 &aux dx dy d y)
      #◇dx:x1-x0,
        dy:y1-y0,
        d:2*dy-dx,
        y=y0◇
      (loop for x from x0 to x1
            do #◇a[x,y]:1,
                 if d>0 then
                   (y:y+1,
                    d:d-2*dx),
                 d:d+2*dy◇))
you can make the code above fully operational in common lisp using a dispatch macro on a unicode character, so i've been experimenting with such an infix mode in my private code. i'll leave the judgement over whether or not this increases readability to the reader.


This desperately needs a date `[1999]`



