It is a pedagogical experiment to see just how well the interpreter (actually EVALQUOTE/APPLY) defined on page 13 of that book really works. The answer is: perfectly, of course.
Well timed! I've just been trying this out myself after reading Maxwell's Equations of Software [1]. It's fun realising that the "base" functions you're doing in Go can be reduced to even more fundamental forms using the Lisp you're implementing.
I spoke to Rob shortly after he wrote this code (a few weeks ago). He said he had been wanting to do some coding over the weekend and pulled his copy of the LISP 1.5 Programmer's Manual - which he bought long ago, in grad school I think he said - off his shelf, and this code was the result. He also talked about how reading that book back when he bought it was incredibly eye-opening at the time and had such a significant influence on almost everything he's done in his career.
When I look through what Rob has done, I see a few patterns clearly traced to LISP, including a strong emphasis on recursive approaches and simplification by relying on functional (immutable) semantics. Look at the bitmap layers work, Newsqueak, Sawzall. There is of course also a heavy dose of pragmatism, which seems to be what you are criticizing.
Your question is therefore answerable: since he was in fact exposed to all that material forty years ago, he would be exactly where he ended up.
Not that there's much type theory in that book, but I'll note that Rob has also worked with type theorists in the past as well, notably Luca Cardelli on Squeak. And he was the one who suggested we approach Phil Wadler to help us work on the semantics of generics in Go. So again the ignorance about programming languages that your comment implies simply isn't accurate.
This pattern of assuming what other people know about is endemic on HN (and the internet more broadly, but especially so here), and it is harmful to productive discussion.
I wish I could upvote this twice. The classic "blub" post by Paul Graham is usually read in the "upwards" direction, where people using a language without some abstractions (say, monads or borrow checkers) can't understand why they would need them when looking at a language that has them. There is probably also a "downwards" interpretation, where people using such a language can no longer understand that it's possible to be productive in languages that lack the advanced abstractions. Choosing just the right abstraction can be a distraction that gets in the way of "just building stuff", but if there is only one abstraction available then that's what will be used.
Interestingly, preaching the benefits of "simplicity" is also a very popular pastime on HN but that apparently does not extend to the languages used.
> The classic "blub" post by Paul Graham is usually read in "upwards" direction where people using a language without some abstractions (say, monads or borrow checkers) can't understand why they would need it when looking at a language that has them. There is probably also a "downwards" interpretation where people using such a language can no longer understand that it's possible to be productive in languages that lack the advanced abstractions.
Thank you, this is a very helpful framing of something I've recently been struggling to articulate about languages with sophisticated type systems after having surprisingly positive and productive experiences with Clojure and Go.
This is exactly what we see. People adhering to different computing paradigms use some of the same words (e.g. "type"), but they have different meanings and basically they talk past each other as a result.
It's not worth responding to people like that, but it's funny now to point out that Wadler says in the Featherweight Go video that Go's type system has something that Haskell's type system may like to borrow, i.e. that interface types are open:
I'm not sure what Philip meant there, but I don't see this as a particularly notable weakness of Haskell's type class system. We can already encode an open world through the standard "HasX"-style classes. There's a good reason that we don't use them much, though: type classes are typically not just interfaces but come with laws expressing how they should behave. It's not clear to me how those laws interact with an open world.
> This pattern of assuming what other people know about is endemic on HN (and the internet more broadly, but especially so here), and it is harmful to productive discussion.
That's not really what my comment is about; and I know very well who Pike is, and I'm assuming everyone here does. (Come on, legendary!)
Note that the phrase I used, "doing the experimentation", specifically avoids speaking to what Pike knows or doesn't know.
// Expr represents an arbitrary expression.
type Expr struct {
// An Expr is either an atom, with atom set, or a list, with car and cdr set.
// Car and cdr can be nil (empty) even if atom is nil.
car *Expr
atom *token
cdr *Expr
}
Instead of using interface types for expressions, there is a simple expression structure type with fields that may or may not be set depending on what type of value it is.
I probably never would have done it that way; I would probably use an interface type instead. This way actually saves space in cons objects… in the above version, a cons is 24 bytes, and below it would be 32:
type Expr interface{}
type Cons struct{ car, cdr Expr }
Not efficient compared to modern Lisp implementations, but an interesting choice.
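If you want to sanity-check those numbers, here's a quick sketch you can run on a 64-bit machine (the struct version is renamed to avoid a name clash; otherwise the layouts mirror the two snippets above):

package main

import (
    "fmt"
    "unsafe"
)

type token struct{ text string }

// Struct-based layout: three pointer fields, 3 * 8 bytes on 64-bit.
type ExprStruct struct {
    car  *ExprStruct
    atom *token
    cdr  *ExprStruct
}

// Interface-based layout: each interface value is two words,
// so a cons cell is 2 * 16 bytes on 64-bit.
type Expr interface{}
type Cons struct{ car, cdr Expr }

func main() {
    fmt.Println(unsafe.Sizeof(ExprStruct{})) // 24
    fmt.Println(unsafe.Sizeof(Cons{}))       // 32
}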
I wrote a toy lisp 1.5 interpreter in Go a few years ago, and part of the joy was making the core of the interpreter mimic McCarthy's typography. This is my apply function:
func apply(fn, x, a Addr) Addr {
    if atom(fn) == T {
        // fn is an atomic symbol: either one of the five elementary
        // functions or a name to be looked up in the environment a.
        switch fn {
        case CAR:
            return caar(x)
        case CDR:
            return cdar(x)
        case CONS:
            return cons(car(x), cadr(x))
        case ATOM:
            return atom(car(x))
        case EQ:
            return eq(car(x), cadr(x))
        default:
            return apply(eval(fn, a), x, a)
        }
    }
    switch car(fn) {
    case LAMBDA:
        // Pair parameters with arguments, then evaluate the body.
        return eval(caddr(fn), pairlis(cadr(fn), x, a))
    case LABEL:
        // Bind the function's own name to its definition so it can recurse.
        return apply(caddr(fn), x, cons(cons(cadr(fn), caddr(fn)), a))
    }
    panic(errint("bad node: " + car(fn).String()))
}
> I probably never would have done it that way; I would probably use an interface type instead
I think when you're all in the same package and accessing unexported fields you know about, this is much clearer than an interface. I often add unexported mutually exclusive fields that local code has advanced knowledge about to prevent unnecessary over-abstraction.
Yeah, and then a new team member doesn't know certain fields are mutually exclusive, and neither does the compiler, because of the lack of sum types.
I find this a fascinating topic, so forgive me for this reply, which has turned into a mini blog post of sorts...
Go embraces the common insight that you can encode a sum as a dependent product. That is, here, you have a discriminated union of atoms and cons cells, but the discriminant is not a specific tag value, as it would be in Haskell or ML or similar, but rather the nil/non-nil state of these fields. Of course, there is no static enforcement of this property, as there would be in contemporary languages with sum types, including Rust's enums.
The quintessential example of this in Go is result/error multi-value returns. The result is valid if the error is nil. If your goal is to save bits (which is rarely the case for Go programs), as you note, avoiding an explicit discriminant (such as an interface pointer) means that you're encoding the discriminant's information into less space, utilizing the specific domain knowledge of mutual exclusion.
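A minimal sketch of that convention (parsePort is made up, standing in for any fallible operation): the nil-ness of the error is the tag, and nothing but discipline keeps callers from reading the result on the wrong branch.

package main

import (
    "fmt"
    "strconv"
)

// parsePort behaves like an informal sum: exactly one of the two return
// values is meaningful, and the nil/non-nil state of the error is the
// discriminant. Nothing enforces that statically.
func parsePort(s string) (int, error) {
    n, err := strconv.Atoi(s)
    if err != nil {
        return 0, fmt.Errorf("parse port %q: %w", s, err)
    }
    if n < 1 || n > 65535 {
        return 0, fmt.Errorf("port %d out of range", n)
    }
    return n, nil
}

func main() {
    if port, err := parsePort("8080"); err != nil {
        fmt.Println("invalid:", err)
    } else {
        fmt.Println("listening on port", port) // port is only meaningful on this branch
    }
}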
Stepping further from Go, this same principle is at play in Clojure, which favors open maps for information. In Clojure, multimethods can dispatch on arbitrary functions of values. This means that any field can easily be used as a discriminant. Now, saving bytes is certainly not the purpose of this in Clojure, but it's interesting to think about. Forcing dispatch into a privileged discriminant tag means that data needs to be transformed/parsed in order to rearrange information into the tag for dispatch.
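Here's a rough Go analogue of that dispatch style, just to make the idea concrete (Event, kind, and the handlers are all invented): the discriminant is whatever function of the value you choose, not a privileged tag baked into the type.

package main

import "fmt"

// Event is an open map of fields, loosely in the spirit of Clojure's maps.
type Event map[string]interface{}

// kind is the dispatch function: any field, or any computed combination
// of fields, can serve as the discriminant.
func kind(e Event) string {
    k, _ := e["kind"].(string)
    return k
}

var handlers = map[string]func(Event){
    "click": func(e Event) { fmt.Println("clicked at", e["x"], e["y"]) },
    "key":   func(e Event) { fmt.Println("key pressed:", e["key"]) },
}

func dispatch(e Event) {
    if h, ok := handlers[kind(e)]; ok {
        h(e)
        return
    }
    fmt.Println("no handler for kind:", kind(e))
}

func main() {
    dispatch(Event{"kind": "click", "x": 3, "y": 4})
    dispatch(Event{"kind": "key", "key": "q"})
}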
As excellent as that article is, I think the truth lies in some balance. Both parsing and validation are useful techniques. Contemporary typed languages tend to push you down a path towards parsing instead of validation, which is probably the way the pendulum should swing for many use cases. However, there is one language that stands in stark contrast: TypeScript. Because it needs to support existing JavaScript idioms, it has grown a type system powerful enough to enable reasonable type safety with a validation based approach. Tools like control-flow sensitive type checking, literal types, and type-guards make that possible. You can use an arbitrary key in some JS object with a literal type value as a discriminant and the type system will do the right thing with union types.
There is one other area where Go is interesting here: zero values. Go zero-initializes all freshly allocated memory and encourages a style that embraces that. These freshly allocated objects are called "zero values" and often they are useful right out of the gate. You're encouraged to make code no-op safely with zero values, or apply some sane defaults. This makes an important distinction between `EnableFoo bool` and `DisableFoo bool`. Looping zero times is a perfectly valid thing to do. Silently skipping nil values is a perfectly valid thing to do in some cases. Etc. Clojure is similar with nil punning. While not without its downsides, this is an interesting point in the design space that seems totally ignored by theoreticians and totally under-appreciated by working programmers, even those who work with Go and Clojure regularly. I'd really like to see that change, as I've found my programs have generally improved as I make judicious use of these techniques.
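To make the EnableFoo/DisableFoo point concrete, here's a small sketch with an invented Server type: the boolean option is phrased so that false is the sane default, and the nil slice and nil map in the zero value behave usefully without any initialization.

package main

import "fmt"

// Server is designed so that its zero value is immediately usable.
type Server struct {
    DisableRetries bool           // false by default, so retries are on out of the gate
    Tags           []string       // nil slice: ranging over it loops zero times
    Limits         map[string]int // nil map: reads return the zero value
}

func (s *Server) Describe() {
    for _, t := range s.Tags { // perfectly valid on a nil slice
        fmt.Println("tag:", t)
    }
    fmt.Println("request limit:", s.Limits["requests"]) // 0 from the nil map
    fmt.Println("retries enabled:", !s.DisableRetries)
}

func main() {
    var s Server // the zero value, useful as-is
    s.Describe()
}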
Objective-C zero-initializes ivars, and it's common to rely on this. The big problem is that you still need to test that code is doing the right thing when it encounters a nil state.
If you define away the zero states completely with sum types, code flow is completely accounted for at compile time, and entire categories of "oops, that shouldn't be nil right now" bugs simply don't exist. On Apple platforms, this is a major reason to use Swift instead of Objective-C.
I haven't used Go, only read some code occasionally; perhaps there's some other difference here that makes this a nonissue. But it sure looks like it has the same set of problems.
It depends on whether or not you view "oops, that shouldn't be nil right now" as a categorically different problem than "oops, this integer shouldn't be greater than 10 right now" problems. Or "oops, this array shouldn't have an odd number of elements right now" problems.
Yes, it's nice to have the type system catch problems. And yes, in the context of memory-unsafe languages null or wild pointers are a big problem. But I've found that you'd need a combinatorial explosion of data constructors and abstract interfaces to enforce the interesting invariants of my programs. A nil pointer (which panics at runtime with a good stack trace!) tends to be among the easiest problems I have to solve when my programs violate invariants.
I'm not arguing that sum types are a bad idea, only that dependent products (with or without enforcement, static or dynamic) are an underappreciated technique. Sum types are really just one special case of dependent products.
I called out Clojure specifically because of the culture around this specific issue and because of the dispatch design of the multimethods that are present in the standard library and widely favored. Nothing stops you from making dispatch tables or simply switching on a key in JavaScript.
Personally, I find that one much more interesting. Before I was familiar with array languages, I hoped that functional programming and Lisp would become the new zeitgeist of software development. Now, I think that array languages have the most promise in revolutionizing the discipline. While it probably won't happen, I wouldn't complain if array languages became the de facto norm for most new applications.
I always get the feeling that a lot of languages would be well served by having an APL or Prolog implementation added as a library or macro; this would let teams leverage these tools down in the core of their application without having to compromise on how they create APIs.
Closest I’ve seen to this was core.logic in the Clojure community, but it appears to be dead.
It’s being used in rust-analyzer (the leading language server) today, and is likely to be used in the compiler in the future. Not really intended to be general-purpose, but an interesting datapoint. And could evolve to be more general, knowing the Rust community.
Dyalog is an APL vendor; they also offer a non-commercial license and high-quality documentation. They have a case study of a school management and record-keeping system [0].
[0] https://www.dyalog.com/case-studies/education.htm
There is a GNU implementation of APL (https://www.gnu.org/software/apl/). Not sure if it is used in commercial applications or not. I would imagine that if you are doing commercial work with APL you are going to be paying for Dyalog.
If you want something in between, I'd suggest Julia, which has taken a lot of array programming concepts and ported them into a functional language, so you can get the ideas without going so far down the rabbit hole that everything is an array.
The problem, though, might be that those ideas (like Julia's broadcast) are so seamlessly integrated into its functional paradigm that you won't "really" know where you've been tricked into doing array programming.
"Array language" describes a pretty generic syntax format wherein an operator acts on multiple inputs. Functional programming is a style that nests nicely with arrange languages. If you know "R," just think of lapply. It's like lapply everywhere, but without the lapply syntax. Every function is vectorized.
People who aren't formally trained on loops or object-oriented programming tend to pick this up more easily. That said, you can do OOP and loops in J if you want to.
No, not really. None of these are what I'm talking about when referring to "array languages". I'm talking about APL and its descendants: J, kdb+/q, k, etc.
I think a lot of these ideas have gone into Python's NumPy, for example. But I believe these are useful for the domain of signal processing and numerical computation, and not that well suited for general problems. (You might be interested in Racket, which is a Scheme relative that has very good support for both domains.)
I would not call awk an array language. Arrays are essential in awk, but it lacks array operations. For example, in an array language, you add two arrays a and b, element by element, with a+b, while in awk you need a loop. Moreover, you do not have direct map and reduce operations, and again you need loops.
As a fan of both awk and array languages, I think that an nsl-awk would be a really interesting project.
A few weeks ago, I went through "Make a Lisp" [1] using C#. It only took a few days to get to a pretty useful implementation. I "cheated" quite a bit using the existing C# implementation from the repo as a reference, but my end result looks pretty different. Learned a TON. Highly recommended.
My only memory is of the closing ceremony: a fleeting image of a bear mascot being carried across the stadium by balloons. I watched that on TV as a nine-year-old boy in Slovakia.
I was impressed at first, but then I saw the David Letterman bit. An Olympic medal is far more believable than a talk show host interested in a computer scientist.
[1] http://www.righto.com/2008/07/maxwells-equations-of-software...