This history is also discussed in an essay by Garrett[0], and there have been a few previous HN posts about it[1]. (I'm not suggesting the parent post is a dupe, but people might be interested in the essay and related content.)
Chip design tools at Intel. I didn’t go there by choice. I went to work for a startup that got acquired. And I have a couple of side projects I can tell you about if you’re interested.
Yes we're using Lisp (specifically Allegro Common Lisp) and yes I can talk about it (within reason of course). Anything in particular you want to know?
How is your work affected by the end of the switching (Tofino) division, are there other users of Barefoot's tools inside the company? I hope I'm not asking anything confidential here.
I'm actually not sure how much I can say about that in public. But if I'm reading your about page correctly you are also at Intel, yes? So if you want to ping me on Teams I can give you some more information.
I've started work on a book/blog (still haven't decided on the target medium) about how to apply the scientific method to everyday life. It's still very drafty, but if you want a sneak preview of the current state of things you can find it here:
This is part of a more overarching effort to try to move the needle on climate change. I think one of the reasons it's such an intractable problem is that there is too much scientific ignorance out there, which supports an unjustified level of techno-optimism with regards to CO2 emissions.
I'm also thinking about turning TIKN into a YouTube channel because that seems to be what the cool kids are doing nowadays. But I'd want to do this with a collaborator because maintaining a YT channel is a fucking boatload of work. If you know anyone who might be interested please send them my way.
Not entirely unrelated (in fact, the reason I know how hard it is to make a good video)... something I did about 15 years ago that I'm still proud of, despite the fact that it never got any traction, was to make a documentary film about homeless people:
If JuliaCon last week is any indication, there might be more Lisp in space in the future! As long as the runtime size can be shrunk down a bit, but it sounds like that's in the works too.
If I'm remembering correctly there are some CL compilers that aim for runtime-free small native executables, even with realtime constraints, but I forget which.
They're starting to be posted, but it will be a few more weeks before they're all uploaded. There was a period for a week or two after JuliaCon when the raw 6-hour livestreams were up (it was in person this year), but now they have been taken down because MIT requires all non-livestreamed videos to be captioned (for accessibility reasons).
Slim binaries and better static compilation were mentioned as big future features in the closing remarks, so that'd be a good one to watch - a lot of the people there working on embedded, robotics, or plane/satellite projects were excited about those.
There weren't any talks I saw where anyone was actually running Julia in space, but Ronan Chagas' "Attitude control subsystem development using Julia" and Corbin Klett's "Realtime embedded systems testing with Julia" both get pretty close and point out reasons why you wouldn't quite want to run Julia on a satellite or plane just yet. Hopefully the videos capture the questions from the audience because there were some good followups!
How does that fit with your initial "If JuliaCon last week is any indication, there might be more Lisp in space in the future"?
Is this about some Lisp written in Julia? Some Lisp with interoperability with Julia? Or it's based on considering Julia itself a Lisp for some reason?
I'm assuming OP was considering Julia as a pseudo-Lisp. Julia doesn't look at all like a Lisp from the outside (no S-exprs, except with https://github.com/swadey/LispREPL.jl), but it takes many of Lisp's deeper lessons. It's (somewhat) homoiconic (see https://stackoverflow.com/questions/31733766/in-what-sense-a...), has an AST-based macro system, is an expression-based language (no statements, everything returns a value), and has first-class functions and types. Also, multiple dispatch comes from taking CLOS/Dylan and getting rid of some of the parts that make a compiler writer hate you.
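For reference, a minimal CLOS sketch of that last point, multiple dispatch (the classes and the collide generic function are made-up examples, not from any particular library):

    ;; The applicable method is chosen by the classes of *all* arguments,
    ;; not just the first one.
    (defclass asteroid () ())
    (defclass ship () ())

    (defgeneric collide (a b))

    (defmethod collide ((a asteroid) (b asteroid)) :rocks-bounce)
    (defmethod collide ((a ship) (b asteroid)) :ship-takes-damage)
    (defmethod collide ((a asteroid) (b ship)) (collide b a))

    ;; (collide (make-instance 'ship) (make-instance 'asteroid))
    ;; => :SHIP-TAKES-DAMAGE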
It's also helpful in preventing code from crawling off the right side of the screen.
However, like many of the sibling comments point out, if you think getting rid of the parens entirely is desirable then you have missed the point, which is that Lisp code is not text, it's a data structure, a linked list, and the textual representation of the code is just a serialization of that data structure. And the most straightforward way to serialize a linked list is with delimiters at the start and end, like so:
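(A minimal sketch, using generic Lisp forms:)

    ;; the textual serialization of the three-element list built by (list '+ 1 2):
    (+ 1 2)

    ;; and because that serialization is also code, the same data structure
    ;; can be constructed and evaluated like any other list:
    (eval (list '+ 1 2))   ; => 3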
I'm told that Elixir still closely follows the tradition of "code as data" while the syntax actually looks more like Ruby. Things like "do...end" blocks are actually just syntax sugar for what are effectively parentheses[1].
I've also read that Julia is an acceptable Lisp [2].
The closest thing to what you want is Dylan, which was originally created by Apple: https://opendylan.org/
But parentheses aren't really that bad, after a while you get used to them and start seeing the actual structure behind them instead, kinda like that quote from The Matrix. Without them macros become much more complicated, especially if you want the syntax to be infix.
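To make that concrete, here is a minimal sketch of why the parentheses keep macros simple (unless* and the names in the expansion example are placeholders):

    ;; Because the code is already a list, a macro is just list construction:
    (defmacro unless* (test &body body)
      `(if ,test nil (progn ,@body)))

    ;; (macroexpand-1 '(unless* done-p (run-cleanup)))
    ;; => (IF DONE-P NIL (PROGN (RUN-CLEANUP)))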
Clojure and Clojure-like Lisps (Janet, Hylang) use fewer parentheses by replacing them with other pairs of characters ([], {}) for certain block structures, which is exactly what other non-Lisps do. The only difference is that Clojure still puts the opening character at the beginning of the structure instead of in the middle, the way non-Lisp languages usually do.
I actually find Clojure's syntax more confusing with all the punctuation they threw at it. Compare that, for example, with Scheme. I like the idea of keeping Lisps "pure" if that makes sense. In Lisp, the first element of a list is the operator. Okay, then how is Clojure a Lisp when it has vectors like this [1 2 3]? Unless [ is a macro that expands [1 2 3] to some real Lisp-y expression under the hood.
It's a literal vector in the sense that '(1 2 3) is a literal list. Though unlike the quoted list it evaluates to itself, like a string or an int does.
I'm not the same person you asked, but I find Clojure to be as easy to read as Scheme, but harder to write, since I need to remember what punctuation to use where.
I don't know, it makes sense to use different data structures for different purposes. (), [], {} are all different data structures (lists, vectors and maps) that are useful in different situations, so it makes sense to have shorthands for them.
Maybe. What bothers me about this idea is that it doesn't scale: As soon as you need a fourth bracket type you're hosed, because there are no more on the keyboard.
With Common Lisp, you can define [ and { if you want to, but it's more common to use a reader macro character in front of an opening paren or a function name after it if you want different data structures. Those approaches scale.
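For example, a minimal sketch of the first option (a [ ... ] vector syntax) using the standard reader-macro machinery; read-bracket-vector is just a placeholder name:

    (defun read-bracket-vector (stream char)
      (declare (ignore char))
      ;; read elements up to the matching ] and build a vector
      (coerce (read-delimited-list #\] stream t) 'vector))

    ;; install [ as a reader macro and make ] a terminating delimiter
    (set-macro-character #\[ #'read-bracket-vector)
    (set-macro-character #\] (get-macro-character #\) nil))

    ;; now [1 2 3] reads as #(1 2 3)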
To be fair, there are more differences; where Clojure requires parentheses, non-Lisps often don't need them at all.
E.g. in Python indentation carries meaning. Ruby has "do/end"; but also it's possible to skip parentheses in some function calls; there are pipes too. Haskell - whatever Haskell does, sometimes also pipes. There are more examples.
That being said, for me clojure parentheses were always more of a help than an inconvenience.
There are, or rather were, Lisps with fewer parentheses: Lisp 2, CGOL, Dylan, but I wouldn't say they were popular. It isn't too difficult to write one yourself. After you've done so, and used it for a while, you'll understand better why those earlier projects never gained traction, and learn to love (or, where appropriate, not notice) Lisp's parentheses.
But you said popular, so Javascript then. Seriously, JS is not what I'd consider a very good Lisp, but I think it is one because it allows you to build real lexical closures.
Edit: Haskell is a sort of Lisp too, and it's the only one IMHO that got rid of the parens in an intelligent way. Haskell lets you use either prefix or infix notation, and it works mostly without parentheses. Plus you can use the '$' operator as a clever hack to remove parens where they would otherwise be needed. But... the price you pay for this is laziness and automatic currying, which most Lisps don't do.
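As a reference point for the "real lexical closures" claim above, a minimal Common Lisp sketch (make-counter is a made-up name):

    (defun make-counter ()
      (let ((n 0))
        ;; the returned function closes over n itself, not a copy of it
        (lambda () (incf n))))

    ;; (defvar *c* (make-counter))
    ;; (funcall *c*) => 1
    ;; (funcall *c*) => 2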
I just put out a new release of TXR, which has a change that can reduce the nesting.
In pipes, you can now bind variables that are in scope of later pipeline elements.
    1> (flow "/usr/share/dict/words" ; start with this path
         (file-get-lines)            ; get content as list of lines
         (let lines)                 ; bind lines variable to this list
         len                         ; take length of list
         rand                        ; get pseudo-random in [0, length)
         lines)                      ; index into lines
    "subdues"
If you look at the original LISP paper, it uses meta-expressions (M-expressions). Then towards the end it shows the alternate syntax with parentheses (S-expressions). The Wolfram (Mathematica) language and K were both inspired by meta-expressions.
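Roughly, the two notations side by side (a small sketch; the exact M-expression spelling varies a bit between sources):

    ;; M-expression (meta-expression) style from the 1960 paper, roughly:
    ;;   cons[car[x]; cdr[y]]
    ;; S-expression style, the parenthesized alternative that stuck:
    (cons (car x) (cdr y))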
But that has had a lot of pushback, and no Scheme I've researched supports t-expressions (i.e. implements the SRFI) out of the box. Usually it's some third-party library or runtime you have to use for that.
Have played around a bit with it in GNU Guile, but in some circumstances I myself got confused :)
For me, (...) are not the problem. But the thing that throws me off and confuses me when I read Lisp code is the excessive use of long words. The language is too verbose for its own good. When I read Python or other languages, the space between the expressions/statements makes it much easier to follow the flow and understand what's going on. In Lisp, I often see lines stacked together with minimal room for my eyes to "rest".
As a long term (40+ year) Lisp developer I carry that forward into my other languages. Vertical whitespace is free in lisps,* as with most languages, so what I try to do is clump together an "operation", ideally so that it fits in my fovea when I'm glancing at the code. So a quick glance tells me "this isn't what I'm looking for" or alternatively "all this goes together so read the whole thing before changing anything, but there's a good chance your change won't affect other parts of the function."
Thus:
    (if (< foo 0) (do-this)
        (do-that))
or if it's long
    (when (< foo this)
      (some-stuff)
      (more-stuff)
      ...)
Obviously one can do this in C++ by introducing a block, but it isn't as syntactically natural; a block implies that there is important RAII involved, so usually I just do it by "vertical clumping".
I find the standard C style (used by lots of languages) actually spreads the code out too much, emphasizing the curly braces rather than the code itself. (This is a local version of the OO+IDE plague that smears functionality across lots of tiny files.)
This sense of locality is presumably why the GNU programming style is compact in some ways and expansive in others, different from K&R style. For example,
    return-type
    function-name (type1 arg1)
    {
allows the eye to see the identifiers without the syntactical markers () and {}, letting indentation and vertical whitespace tell a story.
* except, de facto, Interlisp because it normally used a structure editor rather than a text editor.
Classic old Lisp doesn't have the problem of long words: it has mostly short words like atom, cons, car, cdr, array, ...
Then, eventually, along comes Common Lisp with symbols like multiple-value-bind and update-instance-for-redefined-class.
I made a dialect called TXR Lisp in which most of the important symbols in the standard library are deliberately quite short, to avoid this readability/writability issue.
Lisp code often avoids vertical breaks within sections in a function, because many constructs nest, and so the nesting and indentation separates them already, like:
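(A small sketch, with made-up helper names, of how the nesting itself does the grouping:)

    (defun report-user (db id)
      (let ((user (find-user db id)))      ; find-user etc. are placeholders
        (when user
          (format t "~a <~a>~%"
                  (user-name user)
                  (user-email user)))))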
And yet, none of these efforts caught on. It's as if people who like Lisp like it __with__ parentheses, and people who don't like Lisp wouldn't like it even __without__ parentheses.
well JavaScript if you're willing to buy into the idea that it's scheme in c syntax, which isn't actually that far off.
But no, genuine lisps come with parentheses (or some equivalent) because, as the name suggests, the entire point is list processing. Lisp programs are just data without (much) syntax. The more syntax you add, the less of a lisp it is.
> well JavaScript if you're willing to buy into the idea that it's scheme in c syntax, which isn't actually that far off.
It is quite far off. Just look at the "SICP in JavaScript" version (of which Julie Sussman was even one of the authors). The important points of the book are obscured by all the boilerplate and workarounds for the poor expressivity of the language.
Could it be that JS has poor expressivity for the concepts discussed in SICP, but good expressivity as a general scripting language[0]? Same thing for Python.
[0] And conversely, Scheme doesn't have good expressivity as a scripting language.
Scheme has better expressivity (or more expressive power, if you want to put it that way), but there's a bigger mismatch between its syntax and the things people usually script these days.
Scripting languages are tightly coupled to the language they are scripting.
[0] https://flownet.com/gat/jpl-lisp.html
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...