Looks like they also made public their quantum virtual machine a month ago, as I was wondering how to play with a compiler without the HW to run a program... ^__^;
Maybe you could shoehorn it into running on IBM's Q platform. They let you run circuits in real hardware and give you back the statistics on the measurements.
It’s interesting that they chose to use Common Lisp for their entire quantum stack. Maybe it hit the sweet spot between expressiveness and performance? Choosing a Lisp makes sense, but I’m not sure I would choose CL. The macro system is primitive and the whole language stinks of a compromised design. Also, at this point, the language feels old and somewhat archaic.
If performance isn’t a dealbreaker, Racket would be a sensible choice. Though personally, I think OCaml would be great for this sort of project. Native code, syntax extensions with ppx, a bulletproof type system, and a highly optimized compiler that produces blazing fast code.
Common Lisp isn’t a very pretty language and isn’t the pinnacle of academic language design, but it’s rock solid, is ANSI standardized, performs very well, has great IDE support, has multiple companies offering commercial support, and has many open source implementations. The ugliness, in my opinion, comes from the odd character of the language, and not its fundamental design. It’s not perfect, but it’s surprisingly orthogonal. There is the odd pitfall, and some semantics that aren't standard (but are de facto standard), and in the end you get a combination of a lot of great features, paradigms, and functionalities that are difficult to find elsewhere.
(The macro system is plain, but surprisingly effective!)
Without even getting into the benefits of Lisp for language & compiler development, these are wonderful aspects for a company.
I think a lot of the choices that went into Common Lisp were made for practicality's sake, although there are some archaic elements due to design by committee and trying to put something together to unite the many different Lisp implementations at the time.
There's a lot to like about Racket, but Common Lisp is really fast, practical, and is portable across many different implementations like SBCL, AllegroCL, LispWorks, etc., which are similar but have different characteristics (one compiles faster, one has better performance, the commercial ones have GUI libraries, etc.).
Given a choice, I would personally choose Common Lisp over Racket pretty much any day (not that Racket isn't nice btw). Performance is one reason, but the Chez Scheme migration should help out Racket a bit.
OCaml is neat too, but not really directly comparable to Common Lisp. Both are solid languages, but I wouldn't put them in the same category. If I was writing a compiler I would choose OCaml...crazy macros and code generation everywhere...lisp :)
I'd like to see someone who really uses Common Lisp regularly like LispM comment though.
I wonder if, with the Chez Scheme port, we’ll see Racket ported to more places. A lot of the work of moving to Chez has been writing more of Racket in Racket rather than C.
While Chez and Racket are probably more similar, I have to wonder if we’ll see Racket ported to SBCL or elsewhere.
I like Racket for the cleanliness and good design, but I do covet SBCL’s performance and CL’s ubiquity.
The 'archaic' elements are due to backwards compatibility with Lisp. You get that raw core of Lisp from the 60s: s-expressions, cons cells, LAMBDA, LIST, CONS, CAR, CDR, APPEND, COND, EVAL, PRINT, PROG, READ, macros, dynamic binding, ... all these (and more) are still in Common Lisp.
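To make that concrete, here's what that 1960s core looks like in any Common Lisp today (all standard operators, nothing implementation-specific):

```lisp
;; The raw 60s core, still valid Common Lisp:
(car '(a b c))           ; => A
(cdr '(a b c))           ; => (B C)
(cons 1 '(2 3))          ; => (1 2 3)
(append '(1 2) '(3 4))   ; => (1 2 3 4)
((lambda (x) (* x x)) 5) ; => 25
```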
Common Lisp is an improved Lisp dialect, not a fully new language. A goal was to relatively easily port existing software written in the years 1958-1984 to Common Lisp. CL appeared in documented form in 1984 with CLtL. Thus some code that ran in Common Lisp was a port of code written starting in the 60s, like Macsyma. People did not want to throw away hundreds of thousands of lines of code, and they wanted to use the newer language, because it was available, slightly better than the old one, and supported on more machine types.
This language was mostly developed by five people: Scott Fahlman, Guy Steele, David Moon, Daniel Weinreb, and Richard P. Gabriel.
> and trying to put something together to unite the many different lisp implementations at the time
Common Lisp is specifically a successor to Maclisp, mostly based on Lisp Machine Lisp (another Maclisp successor), by simplifying/generalizing LML. It's not even directly compatible with LML. It's much less compatible with other Lisp dialects of the time: Interlisp, IBM's Lisp, Standard Lisp, or even Scheme or Logo.
After the initial design of Common Lisp was done, an effort was started to standardize the language: improve it and add a bunch of important features like error handling and object-oriented programming.
Error handling was then based on the Symbolics 'New Error System' and the object system CLOS was a new design based on Flavors and CommonLoops, two earlier Lisp extensions.
The most committee-like design was actually CLOS. But even that was driven by only six people, lots of community support, and a portable reference implementation (PCL).
Arguably, CLOS is one of the greater designs in the Lisp language space.
The third part then is implementation extensions, which became especially important when the standards effort ran out of steam/funding/interest: concurrency, Unicode support, foreign-function interfaces, standard libs, ...
The design advantage Common Lisp has over Scheme is that it started as a larger integrated language. This means features know about each other, and the language is designed to use and support them together.
Scheme often has this feeling of a very small core language, even a too-small core language, with a lot of stuff bolted on top. The result is that 'practical' implementations (practical here meaning implementations used to write software, not so much as a teaching language) often reach the size of Common Lisp and one of its implementations, AND all of them diverge in some basic ways (Chez, Chicken, Racket, Guile, ...).
Take for example the object system. Since CLOS was added to Common Lisp no one bothered to propose a replacement for it in Common Lisp. Thus every implementation, small and large, is using it.
> but the Chez Scheme migration should help out Racket a bit
If I wanted to use Scheme, I'd use Chez Scheme directly. It's a great implementation of Scheme. It provides what many Common Lisp systems (incl. SBCL) also offer: an optimizing native-code AOT compiler. Chez already has what Racket is now getting: an improved implementation mostly written in itself.
As always, thanks for the well written and thoughtful reply.
>If I would want to use Scheme, I'd use Chez Scheme directly
I came to the same conclusion based on its fast interpreter, AOT compiler, ability to make executables, etc. I know I've looked at the Cisco GitHub page for it before.
Just use Embedded Common Lisp. Comes with a full Common Lisp incl. CLOS implementation, compiles to C, uses the usual free IDE SLIME/GNU Emacs. https://common-lisp.net/project/ecl/main.html
Common Lisp seems like a fine choice. Just to add to your Racket comments...
Some of the ways one might do this in Racket are also doable the same in most any Scheme variant/descendant of the last couple decades (and more ways, for Scheme implementations that provide `syntax-case`). So you could write it "in Racket" in such a way that you can also try out performance of the same code with various other Schemes (Gambit, Chicken, Bigloo, etc.).
Some ways in Racket, however, have no equivalent in any other platform right now. For example, having a `#lang quilc` that uses the Racket module&phase system and syntax objects&transformers to turn the language into Racket code behind the scenes (preserving things like source location, and source IDE support), and let you mix&match with modules implemented in other `#lang`s.
An optional addition to this idiomatic syntax transformation-based approach in Racket is that you could also write the code to simultaneously spit out some native/assembler or IR for LLVM, CUDA/OpenCL, C, etc., like a traditional compiler. Perhaps the expand-to-Racket "backend" is the initial rapid prototype as you figure out your language's syntax and semantics, and the IR-writing is what you do later. And you can keep both backends, and test the tricky one against the original simple one that's easy to analyze. And your compiler is bootstrappable from a nice language.
(The Racket universe also has some hands-on PL people, who you can still talk with directly on the email lists, thanks to Racket still being a relatively small and strong community. Without the temptations of money confusing things. :) And it has some nice related PL tools, like Redex.)
SBCL (which Rigetti has chosen for qvm and quilc) is better than OCaml in pretty much every aspect of importance for these projects. Performance (compiler macros, intrinsics, runtime assembler, ease of code generation, parallelism), interactivity (no contest, OCaml isn't even close), speed of development and exploring the solution space via DSLs.
Compared to SBCL, OCaml feels slow, clunky and academic.
OCaml’s syntax feels a little (well actually a lot) clunky and ill thought out. Reason ML fixes a lot of these problems and is a great language all around. I’m a hardcore static typing guy (though I started with Common Lisp, which is why I really appreciate static typing), so maybe I’m biased though.
I’ve programmed professionally in both CL and Racket (though it was a research institution so maybe “professionally” isn’t the right word), and in both languages always felt this sense of unease when hitting the “run” button. Half the time the program would error out with something that even the most basic type checker would catch (you called car on a non-list type, oh no!). When programming in OCaml, or Haskell, or Scala, I feel like I have a guardian angel watching my back.
Being able to move many runtime errors into compile-time errors is such a great thing, I’m not sure why some people prefer dynamic typing. Certainly, the set of correct programs expressible with static typing is a subset of those expressible with dynamic typing (which can be keenly felt with Typed Racket), but even with this restriction, I still think static typing is a massive win.
Admittedly though, with a big enough working memory, it shouldn’t make a big difference. Personally though, I love myself some auto-completes and underlined red squiggles!
I don't know how you programmed CL, since there is no "Run" button like in DrRacket (which simply evaluates everything and discards previous state) but if you did the equivalent of that, you were doing it wrong.
The way one should program Common Lisp, is through an environment like SLIME which lets you write code interactively and iteratively. DrRacket is very far away from that. In CL/SLIME, you usually evaluate everything you type in, immediately after you do so. Since you program inside a live environment, you can see this as continuously adding to a live process and molding it to your mental model. You build your program up from zero and all state is kept as you do so. There is no reload/wipe-previous-state/run cycle.
This way of engineering, when done over a short feedback loop, is supremely powerful. I can't stress this enough. Since you can evaluate expressions inside your editor, without any context switches [unlike Python, Ocaml, Haskell where there is a separate interpreter and one does not build his program up from scratch from inside the interpreter], evaluating things becomes second nature. So you do it all the time. Not to mention that the entire language is geared around this sort of development. The object system (CLOS) allows you to redefine pretty much everything at runtime. The debugger allows you to do the same. Errors (conditions) can have associated restarts that you can select interactively. You have inspectors that let you dig into the state of your process as it runs.
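To illustrate the restart mechanism mentioned above, here is a minimal sketch using only standard CL operators (`restart-case`, `handler-bind`); the condition and function names are made up for the example:

```lisp
;; A condition with an associated restart: the error offers a named
;; recovery point that a handler (or a human in the debugger) can
;; invoke to continue, without unwinding the whole program.
(define-condition bad-input (error)
  ((value :initarg :value :reader bad-input-value)))

(defun parse-number (s)
  (restart-case
      (or (parse-integer s :junk-allowed t)
          (error 'bad-input :value s))
    (use-value (v)
      :report "Supply a value to use instead."
      v)))

;; Programmatic use of the restart from a handler:
(handler-bind ((bad-input (lambda (c)
                            (declare (ignore c))
                            (invoke-restart 'use-value 0))))
  (parse-number "oops"))   ; => 0
```

Interactively, the same error would drop you into the debugger with "Supply a value to use instead." listed as a selectable restart, with all state intact.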
The sort of errors you are describing are easy to catch and, after becoming proficient working in this way, simply do not happen. The main issue is that a lot of dabblers in Lisp are never exposed to the Lisp Machine development paradigm and thus do not know how CL is different and how to take advantage of all that it offers.
If DrRacket -> Run was how I had to write Lisp code, I'd be quickly moving on to something else myself. Thankfully I can use SBCL / SLIME and recapture some (most?) of the magic of the Lisp Machines.
The “run” button is however you run your CL code. Are you seriously disputing my experience because I said “run” in quotes? Would it please you more if I had instead said I used SLIME with SWANK with emacs and I pressed C-c C-e or whatever?
As I said, I have programmed CL professionally. And yes, Racket doesn’t have a real REPL. And yes, CL with emacs and SLIME and SWANK is nice or whatever, but personally, I’ve found that a REPL provides much more value with a dynamic language vs a static language.
Why? Because with CL you need to be running each function because you have no faith in your code. The most trivial errors that could have been caught by a simple type checker will crash your program. I’ve almost never had the experience of writing say 300 lines of CL and having it work on the first run. Because of this, you have to use the REPL as a crutch.
In contrast, with haskell or scala even though those languages provide a REPL, I rarely find myself using it. In fact, I very rarely run my code at all. With a functional and monadic style and heavy use of type parameters, chances are that if it type-checks, it is correct.
One area in which CL is pretty cool is modifying a running image on the fly. I once fixed a bug mid-meeting with CL and everyone thought that was pretty cool.
> REPL provides much more value with a dynamic language vs a static language
One of the reasons why CL is how it is, is to support incremental & interactive programming. That was a big driver for its language design.
Being able to work on programs of arbitrary size while they are running. For that CL is a dynamic language. Not just a dynamically typed language. 'Dynamic language' means that one can change the program AND the programming language while the code is running. For that Common Lisp does a lot of runtime checks and includes an error system which enables one to repair code while in an error -> in Common Lisp the debugger stays in the context of the error and one may have explicit ways to restart the program. One can use that also programmatically.
Something like a REPL is not just there because you need to test your code to make it run, it is there because it enables us to program AND test a running program without crashing it.
We have to agree to disagree. It's also how you word certain things that make me wonder about the level of exposure you have had to proper Common Lisp development methodologies. For instance, you write: "The most trivial errors that could have been caught by a simple type checker will crash your program".
In Common Lisp you simply pop to the debugger, and don't lose global state. Your process still runs, you simply fix the error and continue. Not to mention that you can use type declarations and SBCL will do compilation-time type checks and spit out warnings for most of the trivial errors you describe. It will even highlight the relevant code _inside your editor_. And since you are compiling at runtime all the time via SLIME, this of course means that you never use the REPL like a crutch.
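For anyone who hasn't seen it, a minimal sketch of the declaration-driven checking described above (the function names are invented; the behavior is standard SBCL):

```lisp
;; DECLAIM an ftype and SBCL's compiler will type-infer through
;; call sites and warn at compile time, before anything runs.
(declaim (ftype (function (fixnum fixnum) fixnum) add2))
(defun add2 (a b)
  (the fixnum (+ a b)))

;; Compiling this definition makes SBCL emit a warning immediately:
;; the constant argument "one" can't be a FIXNUM.
(defun oops ()
  (add2 "one" 2))
```

With SLIME, the warning shows up as an annotation on the offending form right in the editor buffer, which is the "highlight the relevant code" behavior mentioned above.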
I find that the main reason I don't use REPLs in languages like OCaml and Haskell is not the static type system but that they introduce a huge context switch and break me out of flow. They also make it that much harder to inspect the state of a running process, since you have to jump through hoops to do so, assuming it's possible to begin with. Everything meshes together in CL (and Smalltalk and Erlang) to make it seamless. This does not happen in the statically typed languages you mention, and the experience is that much poorer.
At the end of the day, when programming and especially when I'm doing exploratory programming where the problem is hard and the solution not 100% fleshed out, I am trying to quickly model and make concrete an abstract model that only exists inside my mind. Languages like Common Lisp that have short feedback loops are ideal for me entering a state of cybernetic "entanglement" where subject [mind]-object [program] mesh together via the medium [programming language] and I can proceed to do rapid transformation of my mind space into the computer. None of the statically typed languages you mentioned allow one to do that. It's like programming with punch cards versus programming hooked up to a brain-computer interface.
Everything you say is true. CL via a REPL really does make you feel like there is no barrier between your code and your brain.
With regard to programming in CL, yes, if you cause a stack trace in your code, you can easily debug and fix it on the fly. But most of the problems (at least for me) could have been caught with a static type checker in the first place.
I just feel like my brain is stupid, imperfect, and error prone. I want and need something that can offload a lot of thinking to the computer.
It’s not like static typing and the immediate feedback CL has are mutually exclusive. With the right language you could definitely have both.
Like imagine a CL with static typing (a Hindley-Milner system), would it be worse or better?
I would say it must be objectively better.
What do you think? (You’ve made some good and reasonable points, so I would genuinely be interested in your opinion)
Alan Kay has talked about what he calls "extreme late binding of all things" [1], where he feels that Lisp and Smalltalk do it particularly well. I'm in full agreement with his point of view but to answer your question I had to do some thinking to figure out exactly why that is the case.
First let me say that these days most of the code that I write, I'm doing it in a dynamic language (CL, Emacs Lisp, Erlang, Lua). But I've also spent decades working with statically typed languages (mostly C and Java) usually for profit. And I do enjoy Haskell and OCaml, but I get a different sort of enjoyment from them that is not linked to the short feedback loop and cybernetic modeling process I've described for CL. It's kind of like the fun one has when laying down a mathematical proof, or working out a mathematical problem. Now you made it clear in your last post that you do understand pretty much everything I talked about. So it's interesting for me to contemplate why you prefer statically typed languages and the most probable explanation is that you've spent a lot more time with them. Programming languages shape the thoughts that you think after all. Second order cybernetics tells us that the emergent metasystem composed of subject-object [mind-program] interaction can begin to model itself in a new round of cybernetics.
After decades of doing exploratory programming in Lisp-like languages, "extreme late binding of all things" is exactly how my mind thinks about the process of modeling but also about itself.
Maybe this is why most of the simple programming errors you've talked about seldom arise with me (or pretty much every hardcore Lisper I would say). Maybe this is also why when you switch to a Lisp-like language with your mind being wired to a statically typed system, these sort of errors are very frequent. It's your mind going through the modeling process in the way that it's used to. Fascinating if you think about it.
So to end with, I can see the benefits of a statically typed language. I would never pick Common Lisp when working with groups of people of different skill levels on a shared vision. I would probably pick OCaml when working out problems in compiler theory. But I will pick a Common Lisp-like language _every single time_ when I'm working on manifesting a singular vision - my own - and I want this process to take place via cybernetic emergence of something greater than myself. In a similar way to Steve Jobs instantly knowing what the future would look like during his visit to PARC, it's obvious to me that "programming" has nothing to do with type systems or computers. These constitute only the medium, and the medium will eventually disappear.
> chances are that if it type-checks, it is correct.
a - b type checks where b - a or a + b should have been written.
shuffle(list) type checks where sort(list) should have been written.
sin(x) type checks where cos(x) is required.
The type of the average application code contains very little information. The crucial difference between sin and cos gets bulldozed; both are just number -> number.
The macro system is primitive in the sense that a hammer is primitive: old but still a good choice for putting a nail in something.
I don't want to get into a scheme/lisp flamewar (there are enough of those already), but I think the expressiveness of the two is sufficiently close to make other choices dominate[1].
As far as OCaml, despite what appears to be mutual love between the ML and lisp communities, the world can be divided into people who like typed languages and people who like untyped languages, so I imagine that has an effect.
Ultimately there's a lot that goes into making decisions into what language to use; I think the Mill guys are insane for implementing their assembler in the C++ templating language (as in a valid assembly file is sugared into a C++ program that outputs the binary), but it works for them, so shrug.
1: OTOH Scheme is small enough that various implementations are doing interesting things with expressiveness (Racket has several, the language modules coming to mind first; Gerbil with process migration, &c.). Common Lisp is probably just too big to make a new implementation just to experiment with a single feature (though see CLASP for a counterexample).
I find scheme’s hygienic macro system or racket’s insanely powerful macro system to be more advanced and easier to use correctly. None of the gensym and hygiene problems you have with CL.
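The capture problem being referred to, in a minimal sketch (the classic `swap` example; both definitions are standard CL):

```lisp
;; Unhygienic capture: if the caller's variable happens to be
;; named TMP, the macro's LET shadows it and the swap is wrong.
(defmacro swap-bad (a b)
  `(let ((tmp ,a))
     (setf ,a ,b)
     (setf ,b tmp)))
;; (swap-bad tmp x) expands to a LET binding TMP, breaking the caller.

;; The conventional CL fix: manually generate an uninterned symbol.
(defmacro swap (a b)
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))
```

In Scheme's `syntax-rules`/`syntax-case`, the first version would already be safe, since hygiene renames the macro-introduced `tmp` automatically.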
PS: I know you probably know more about CL and Lisp Machines than anyone else on HN.
But do you think CL macros are better than Scheme macros?
If so, I would be genuinely interested in why (I know you have more experience with Lisp than I could ever have, so I think your opinion would be valuable to this discussion).
What does 'better' even mean? Better for what? And which of the dozen Scheme macro systems?
Macros are a tool and over the past decades literally zillions of procedural macros have been written in Common Lisp. Somehow people managed to do that. Without 'hygienic' macros. Without rule-based macros.
Scheme has seen a lot of research in that area and that's great. Common Lisp is more concerned with core language stability.
Having a single macro system, which might have an older but careful design, still has some advantage: basically all code in Common Lisp uses the same macro system and all tools need to support only one macro system. Most of the time it gets the job done.
It's like asking whether a carbon bicycle frame is better than a steel one. Depends. If one is travelling, the steel one is possibly more robust and easier to repair. It might not include the latest research in lightweight materials, though.