Learning Common Lisp to beat Java and Rust on a phone encoding problem (athaydes.com)
344 points by medo-bear on Oct 1, 2021 | 230 comments



Before Python became ML's darling, there was Lush: Lisp Universal SHell, by none other than Yann LeCun and Léon Bottou [1], and I thought my Lisp knowledge was finally going to pay off ;) I've since moved on to J and APL. SBCL is amazing, and aside from being able to call C directly, it is still very fast on its own. Python is slow without its specialized libraries from the C world, but SBCL produces short, readable code that is pretty efficient. There's ECL [2] for an embedded Lisp, or uLisp [3] for really small processors like the ESP chips.

[1] http://lush.sourceforge.net/ [2] https://common-lisp.net/project/ecl/ [3] http://www.ulisp.com/


Why did it not catch on, though?

Python was already well known as a "scripting" and server-side web development language in the early 2000s, but it's commonly said that it really exploded in the 2010s, when it became the implementation language for several scientific packages, most notably the machine learning ecosystem.

It seems that the language really found a local optimum that appeals to many different people across disciplines.


My vote is that it's due to Python being included in most base distributions. Combined with a ridiculously loose concept of dependency management, it is trivial to tell people how to get started with many Python projects.

This doesn't really scale well, but momentum has a way of making things scale. Especially when most of Python's popular libs are pseudo-ports of other languages' libraries.


Hard disagree. Python is a nightmare to get started with from a dependency management perspective. Its use in exploratory programming and for writing quick hacks to solve immediate problems is so popular because you rarely care about backwards compatibility or future proofing the deployment.

I think it caught on because the syntax is clear, the execution model has relatively few pieces of footgun trivia to memorize, there's a really nice REPL, and most importantly, almost no one doing exploratory work needs anything faster.


As a software developer, I fully agree with you. Having watched many science teams embrace python due to not having to install the base system, though?

That is, your complaint is what I meant about it not scaling. It is terrible. But the momentum behind it is keeping it going, despite being laughably bad in that area.


> Python was already well known as a "scripting" and server-side web development language in the early 2000s, but it's commonly said that it really exploded in the 2010s, when it became the implementation language for several scientific packages, most notably the machine learning ecosystem.

That's... somewhat misleading. Python was well known for its scientific stack as well as server-side web development and scripting from the early 2000s (or earlier; NumPy, under its original name of Numeric, was released in 1996, BioPython in 2000, matplotlib in 2003, etc.).

In the 2010s, it became known for its machine learning stack, which was built on top of the existing, already solidly established, scientific stack.


Python makes it pretty trivial to load compiled modules as third-party packages, and given the core language itself is already implemented in a similar way, at least for CPython, creating numerical packages as thin wrappers around pre-existing BLAS implementations was probably easier in Python than in Lisp.

It might seem stupid, but operator overloading and metaprogramming features make it fairly simple to emulate the syntax of other languages that scientific users would have already been familiar with. Specifically, NumPy, SciPy, and matplotlib quite obviously tried to look almost exactly like MATLAB, and later pandas very closely emulated R. It's a lot easier to target users coming out of university programs in statistics and applied math who have been using R and MATLAB and teach them equivalent Python libraries. Trying to teach people who aren't primarily programmers to use Lisp is going to involve a much steeper learning curve.

It really didn't explode in the 2010s, either. You're thinking of Facebook with pytorch and Google with TensorFlow making it dominant in deep learning, but the core scientific computing stack goes back way further than that. As for why Google and Facebook chose Python rather than Lisp, I think it was just already one of their officially supported languages they allowed internal product teams to use. Lisp was not. Maybe that's a mistake, maybe it isn't, but it's a decision both companies made before they even got into deep learning.


Lush's heyday was the 01990s, and it's pretty much only useful for "scientific computing", which has a big overlap with "machine learning". Also, it's a Lisp. Python has more readable syntax and was used for all sorts of things even before Numpy existed.


> 01990s

Preparing for Y10K? That's exceptionally long-termist.


Can't even make an octal joke with that 9 in there...


Hey, 9 is an octal digit in K&R C!


I think we have to distinguish between languages that use s-expressions and Lisps. Almost all languages have some sort of preprocessor that makes them look like Lisp, but that doesn't turn them into Lisp. From what I vaguely remember, Lush lacked some of the defining qualities of a Lisp.


"Lush" stands for "Lisp Universal Shell". It has not just S-expression syntax but recursion, setq, dynamic typing, quoting of S-expressions and thus lists and homoiconicity, cons, car, cdr, let*, cond, progn, runtime code evaluation, serialization (though bread/bwrite rather than read/print), and readmacros. Its object system is based on CLOS.

Lush is uncontroversially a Lisp.

However, the part of this that's relevant to the point I was actually making is that Lush uses S-expression syntax, which is less readable than, for example, Python syntax.


if lush were uncontroversially a lisp, I would have used it for at least one side project. IIRC things weren't that clear (back then when I stumbled over it) when looking at it in detail, which is why I decided not to use it (but don't ask me now why exactly).


But I have to ask, because I am curious: what Lisp feature, if missing, has you dismiss a language built to do a specific task you might be interested in? What were you trying to do then? Thanks!


I'm curious too!


In my lessons about Scheme (given by the French translator of TAOCP), I was taught the following essential characteristics:

- static binding

- closures (true)

- tail recursion

- garbage collector

s-expression or typing is a matter of choice, but, IMHO, if you lack one of the four previous items, it is not really a lisp.


Emacs Lisp doesn't have tail call optimization, just like the Lisp it was inspired by.

I'm definitely not an expert in that area, but this list seems kind of arbitrary to me. Especially with s-expressions being optional, which are probably the widest-known feature of the language. According to that definition, Haskell is a "true" Lisp but at least 2 Lisps are not. That makes no sense to me.


The last sentence was from me. The four items are from SICP (not TAOCP, my mistake). They are all mandated by the IEEE Scheme standard.

I have encountered many functional languages when I was a student (caml-light (the ancestor of OCaml), lelisp, gofer (a cousin of Haskell), miranda, graal, FP systems, yafool).

The typing may be dynamic or static. The evaluation may be strict or lazy. They may have homoiconicity or a more sugared syntax. All these choices are valid. What these languages have in common is the list of fundamental properties. IMHO, this list of 4 items encompasses many aspects of SICP. When I evaluate a language, this list helps me understand its qualities and limitations. For example, Perl 5 does not have a true garbage collector. JavaScript does not have tail recursion. Knowing these limitations, I will not code the same way. In Perl 5, I will take care of breaking unused circular data. In JavaScript, I will reorganise highly recursive algorithms.


Almost every language can do tail recursion (in most languages it will blow the stack though) but not every language does tail call optimization. This is a requirement for scheme but it is not for common lisp and many other lisps. So, I don't think this should be on the list.
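
For what it's worth, a minimal sketch of how this plays out in practice (SBCL specifically; this is an implementation nicety, not something ANSI CL requires):

  ;; A tail-recursive loop. The standard does not require the recursive
  ;; call to be eliminated, but SBCL normally compiles it to a jump
  ;; under its default optimization settings.
  (defun count-down (n)
    (if (zerop n)
        :done
        (count-down (1- n))))

  ;; (count-down 10000000) returns :DONE without blowing the stack on
  ;; SBCL; a strictly conforming implementation is free to overflow.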


JavaScript does have tail recursion in the spec, and JSC (Safari) implements it. Other runtimes have thus far refused to because it can impact debugging, I believe.


I feel that static/lexical binding is more of a Scheme thing, though. The Lisp community used to be kind of on the fence on that for a while, even though it ultimately chose to go the same way.

Edit: And to be frank, while dynamic binding may be a horrible mistake in bigger projects, it sometimes gives you exactly the easy way out that you may appreciate under time pressure. It's a classical case of "it seemed like a good idea at the time".


Other than global variables, Lisp-2s use lexical scoping…


Whether a Lisp is a Lisp-1 or a Lisp-2 is orthogonal to whether its scoping is lexical or dynamic. There are lexically-scoped Lisp-1s (like Scheme), lexically-scoped Lisp-2s (like Common Lisp), and dynamically-scoped Lisp-2s (like Emacs Lisp). Someone's probably written a dynamically-scoped Lisp-1 at some point, but I can't think of one.
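
For illustration, a tiny Common Lisp sketch of what the Lisp-2 split means (names invented):

  ;; In a Lisp-2 a symbol's function cell and value cell are separate,
  ;; so a variable may freely shadow a function's name.
  (defun double (x) (* 2 x))

  (let ((double 10))
    (double double))  ; => 20: the head position uses the function cell,
                      ;    the argument position uses the value cell

  ;; In a Lisp-1 such as Scheme, the LET binding would shadow the
  ;; procedure and the call would fail.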


Does Russell's original Lisp count as a Lisp-1? I guess it probably had a common namespace for functions and values.


It's an interesting question; maybe he remembers. If it was a Lisp-1 when originally implemented, it became a Lisp-2 within the first year or two, using different properties of atoms (symbols, as we say now) to evaluate them in "function position" than when they weren't. I suspect he might have made that change before he got it working at all because of properties like FSUBR and FEXPR.

It was definitely dynamically scoped, though.


So it did. But not without controversy. And consequently, there's still defvar. Emacs Lisp is binding dynamically by default to this day, iirc.


It's extremely valuable in Emacs for the same reason it's questionable in other contexts. It lets one person reach out across any scope distance and tweak behavior. Which is exactly what you want when you're configuring your text editor. And exactly what you don't want when you're writing a program for other end users that multiple people are working on.

Emacs Lisp will, I expect, never ever drop 'defvar' dynamic binding. It would break the entire world -- it's relied on far too widely to revoke. Having lexical binding alongside as we do now is probably sufficient.


You mean dynamic scoping and lexical scoping. Dynamic binding is something different; a dynamically-bound subroutine call may call different functions depending on runtime state, for example because it's indirected through a function pointer, for example in the class vtable of a method call receiver.

I agree that it's useful to be able to locally override variables like deactivate-mark and case-fold-search. (This kind of thing makes tail-call elimination more difficult: any dynamically scoped variables must be restored when the "tail-called" function returns.) But there are some other such things in Emacs that can be similarly locally overridden and then restored, but aren't variables: (current-buffer), (point), and (mark), for example, which can be restored with (save-excursion ...). And it's common to have such locally-override-and-restore facilities without using linguistic dynamic scoping for it; PostScript has gsave/grestore, for example, which were copied by Win32 GDI SaveDC and RestoreDC, but that doesn't give C dynamic scoping.

I don't think SaveDC and RestoreDC are known to give rise to problems when "writing a program for other end users that multiple people are working on".

I agree that elisp will never remove dynamically-scoped variables; it would break compatibility with all existing code. Even Common Lisp has "special variables" that behave this way.
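
For readers who haven't seen it, a minimal sketch of a Common Lisp special variable behaving this way (names made up for illustration):

  (defvar *log-level* :info)    ; DEFVAR proclaims the variable special

  (defun log-message (text)
    ;; sees whichever binding of *LOG-LEVEL* is in effect at call time
    (format t "~a ~a~%" *log-level* text))

  (let ((*log-level* :debug))   ; dynamic rebinding for the LET's extent
    (log-message "inside"))     ; prints "DEBUG inside"

  (log-message "outside")       ; prints "INFO outside" -- binding restored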


I mean, yes, you're not wrong, but the terms within Emacs itself for these features are dynamic/local "binding":

https://www.gnu.org/software/emacs/manual/html_node/elisp/Dy...

https://www.gnu.org/software/emacs/manual/html_node/elisp/Le...


Thank you for the correction; I did not know that. I can complain about the elisp maintainers using an established term in this confusing way†, but I shouldn't have claimed that your use of it was wrong; if you're discussing a particular program, it's better to use the canonical terminology used when talking about that program, even if it conflicts with usages in other contexts.

______

† Older versions of the Emacs Lisp reference manual mostly did not do this, except for one occurrence of "dynamic binding" in the "implementation of dynamic scoping" section.


Dynamic scoping relies on a form of dynamic binding: namely the dynamic binding of a symbol to its current value.


That's outside the standard meaning of Alan Kay's term "dynamic binding," which originates in a different meaning of "binding", namely, "linking object files (into an executable or other form of memory image)". The Mesa linker, for example, was called the "binder". These conflicting meanings of "binding" make for a good pun or joke, but I think there are people trying to read this discussion in all seriousness to understand the linguistic issues, and the pun could confuse them.

It happens that in ordinary Lisps†, the function called by invoking a symbol does depend on the run-time value of a symbol (its function binding in a Lisp-2), and that's the sense in which an ordinary Scheme or (non-generic) Common Lisp function call can be said to be "dynamically bound", but that isn't the case in general. So even in that sense it doesn't correspond to the static/dynamic scoping distinction that they seem to be trying to discuss.

______

† I think this may be one of the points where Lush is atypical; I think its interpreter supports runtime rebinding of the function bindings of symbols, but its compiler doesn't. I'm not sure, though. I may not have used Lush this millennium.


The Lisp term "dynamic" means "stack-like LIFO discipline". "Dynamic scope" is a shorthand that refers to "indefinite scope with dynamic extent", where "dynamic extent" refers to the bindings being tied to stack-like frames, such that they are torn down when constructs terminate.

When we "declare dynamic-extent" an object, the compiler may stack-allocate it.

The C language redefined "dynamic" from "stack" to "heap". If you look into the BCPL manual (one predecessor language that inspired Ken Thompson's B), it uses "dynamic extent" to refer to the stack, which C renamed to "automatic storage":

"[T]he extent of a dynamic data item starts when its declaration is executed and continues until execution leaves the scope of the declaration." (1967 BCPL Manual, 7.2)


I agree. Thank you.


MACLISP, Interlisp, Emacs Lisp, AutoLISP, Franz Lisp, and Lisp 1.5 lack all of these except the garbage collector*. Common Lisp and International Standard Lisp additionally have static scoping and consequently closures. Those are essential characteristics of Scheme, not of Lisp.

______

* I'm assuming by "static binding" you mean static scoping; if you actually mean that the association between callsites and functions is statically computable, then it's not even true of Scheme.


I believe Python won because of its popularity in academia.


I am using J on and off, but I am not aware of an ML package for it. Are you writing all algorithms yourself, or could you recommend a package?


I have a bunch of links to ML material for either APL or J. I don't know of any particular library for J. J is interpreted, so it is not as fast as other implementations. I am mainly using it to experiment on concepts and teach myself more ML in J because of the iterative nature of the REPL, and the succinct code. I can keep what's going on in my head, and glance at less than 100 lines, usually 15 lines, of code to refresh it.

There is a series of videos of learning neural networks in APL cited by others here on this thread.

Pandas author, Wes McKinney, cited J as an influence in his work on Pandas.

Extreme Learning Machine in J (code and PDF are here too):

https://github.com/peportier/jelm

Convolutional neural networks in APL (PDF and video on page):

https://dl.acm.org/doi/10.1145/3315454.3329960

A DSL to implement MENACE (Matchbox Educable Noughts And Crosses Engine) in APL (Noughts and Crosses or Tic-tac-toe):

https://romilly.github.io/o-x-o/an-introduction.html


> J is interpreted, so it is not as fast as other implementations

Which implementations are you comparing J to? To my knowledge, all of APLs, K, J are interpreted. BQN is compiled, but still very new. I also know that Dyalog was experimenting on a byte code compiler. I don't think there exists convincing benchmarks comparing those languages.


Yes, not clear on my part. I meant to other ML languages, not other array languages. It's amazing the speed you get from J as an interpreted language; it's been honed by a bunch of clever people over the last few decades.


There's a handful of resources around that look fun though I haven't dug into them yet.

There's this on the J wiki: https://code.jsoftware.com/wiki/User:Brian_Schott/code/feedf...

And there's this YouTube series for APL: https://youtube.com/playlist?list=PLgTqamKi1MS3p-O0QAgjv5vt4...


I'd also like to know. I'm considering writing some for fun in GNU APL right now.


I write Clojure for food, and Common Lisp for fun. One reason for the latter is CL's speed -- a while back I compared a bit of (non-optimized) Clojure code I wrote for a blog post with a rough equivalent in CL, and was stunned that the Common Lisp code ran about 10x faster. This made me curious as to how fast it could be made if I really tried, and I was able to get nearly 30x more [1] by optimizing it.

Clojure is definitely fast enough for everything I've done professionally for six years. But Common Lisp, while having plenty of rough edges, intrigues on the basis of performance alone. (This is on SBCL -- I have yet to play with a commercial implementation.)

[1] http://johnj.com/from-elegance-to-speed.html
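
The usual levers in that kind of exercise are type and optimize declarations plus DISASSEMBLE to inspect the generated code; a minimal sketch (not the code from the post, just an assumed illustration):

  (defun sum-floats (xs)
    (declare (type (simple-array double-float (*)) xs)
             (optimize (speed 3) (safety 0) (debug 0)))
    (let ((acc 0d0))
      (declare (type double-float acc))
      (loop for x across xs do (incf acc x))
      acc))

  ;; (disassemble #'sum-floats) shows whether the float math is unboxed.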


I have implemented a real algotrading framework in Common Lisp (SBCL) that connected directly to the Warsaw Stock Exchange.

The application received binary messages from the exchange over multicast, rebuilt the state of the market, ran various (simple) algorithms, and responded with orders within 5 microseconds of the original message, at up to 10k messages per second.

With SBCL you can write a DSL and have the ability to fully control the resulting instructions (through VOPs). It is just a question of how dedicated you are to writing macros upon macros upon macros.

I used this to the fullest extent and the result was as good as any hand optimized C/assembly.

For example, the binary message parser took a stack of complicated specifications for the messages in the form of XML files (see here if you are curious: https://www.gpw.pl/pub/GPW/files/WSE_CDE_XDP_Message_Specifi...), converted the XML to a DSL, and then, through the magic of macros, the DSL into accessors that allowed optimal access to the fields. Optimal here means the assembly could not be improved upon any further.
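
A toy sketch of that macro technique (the message layout and field names here are invented; the real system generated the DSL from the exchange's XML specs):

  ;; A spec like (define-message add-order (order-id 0 8) (price 8 4))
  ;; expands into one accessor per field, each reading a fixed-width
  ;; big-endian integer straight out of a byte buffer.
  (defmacro define-message (name &rest fields)
    `(progn
       ,@(loop for (field offset size) in fields
               collect
               `(defun ,(intern (concatenate 'string (symbol-name name) "-"
                                             (symbol-name field)))
                    (buffer)
                  (declare (type (simple-array (unsigned-byte 8) (*)) buffer)
                           (optimize (speed 3) (safety 0)))
                  (loop with value = 0
                        for i from ,offset below ,(+ offset size)
                        do (setf value (+ (* value 256) (aref buffer i)))
                        finally (return value))))))

  (define-message add-order (order-id 0 8) (price 8 4))
  ;; => ADD-ORDER-ORDER-ID and ADD-ORDER-PRICE, each a tight field read
  ;;    with no runtime dispatch or interpretation left.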

Large parts of the application (especially any communication and device control, as it was done with full kernel bypass) were written in ANSI C, but the Foreign Function Interface makes integrating them into the rest of the application a breeze.

I write all of this, because I frequently meet complete disbelief from people that Lisp can be used in production.

I personally think it is exactly the opposite. Lisps offer a fantastic development environment. The problem is developers, who are mostly unable to use Lisp effectively for work, I guess due to too much power, too much freedom, and a complete lack of direction on how to structure your application.


It is similar to how many still don't believe in GC-enabled systems languages.

If some big shot corporation with weight in the industry doesn't shove it down the throat of devs, then it isn't possible.

And regarding Lisp, most tutorials keep ignoring that arrays, structures, stack allocation, deterministic allocation,... are also part of Common Lisp.


> It is similar to how many still don't believe in GC-enabled systems languages.

Well... don't give my application as an example.

I have worked around the GC problem by using CL as a kind of compiler for the application -- I built a high-level DSL that was then compiled to a binary, supplemented with small blocks written in ANSI C.

If you think about it, SBCL already does this and so does JIT in JVM.

I will give you an example:

Before the order goes to the exchange, it needs to be validated against a set of rules. Like "do we have enough money to run this order?" or "How much of its allotted limit does this algorithm still have?" or "Is volatility on this market within bounds for this algorithm to be used?"

The rules were written in a Common Lisp DSL; then Common Lisp code converted them to a decision tree, optimized the decision tree, and compiled the optimized version of the decision tree to a binary function. That function itself no longer had anything to do with Common Lisp; it could just as well have been written in ANSI C.

Then Common Lisp code wired these functions to form the application.

While most of the application code was actually Common Lisp (and some 20% ANSI C), if you were a market order and observed what instructions were handling you, none of them would actually be Common Lisp.

After the initial setup, the Common Lisp application was running on only one core, and the constructed binary took over all other cores and was running without garbage collection on memory regions allocated outside of the Common Lisp system.

I hope this description makes sense... I have never before or after met an application that was built this way. As far as I know it is one of a kind.
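
A toy sketch of that "compile the rules down to a function" idea (rule names invented, nothing like the real DSL):

  ;; Rules are plain data; a generator turns them into a LAMBDA form and
  ;; COMPILE produces native code with no interpretation left at runtime.
  (defparameter *rules*
    '((:max-order-value 1000000)
      (:max-open-orders 50)))

  (defun build-validator (rules)
    (compile nil
             `(lambda (order-value open-orders)
                (declare (type fixnum order-value open-orders)
                         (optimize (speed 3) (safety 0)))
                (and ,@(loop for (rule limit) in rules
                             collect (ecase rule
                                       (:max-order-value `(<= order-value ,limit))
                                       (:max-open-orders `(<= open-orders ,limit))))))))

  (defparameter *validate* (build-validator *rules*))
  ;; (funcall *validate* 250000 10) => T, running as compiled machine code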


You know, even beyond Common Lisp, Lisp literature is often surprisingly low level. The Lambda Papers deal with assembly too. People may be confused about Lisp being up high in its ivory towers, but the people in this subculture were extremely diligent/smart at both ends of the ladder.


Part of this is that when Lisps started, ALL other languages were very low level and the hardware that Lisps ran on could not afford to waste any memory or cycles.

So efficiency was an important consideration right from the start.

Other languages that most people use today grew up at later times (borrowing from Lisp), when resources weren't such a big deal.

I consider languages like PHP, Ruby and Python peak examples of mainstream languages that were created with barely any consideration for efficiency. This coincides with the 1990s and early 2000s, when we saw a dramatic increase in system resources, especially memory, without much need to use it efficiently.

Nowadays it has improved a little bit as we learn to run bigger loads, so newer mainstream languages (like Rust) and some older ones (like Java, C# or JavaScript) are putting more effort into efficiency.


Indeed, when someone points out "you cannot do that because...", I feel the urge to point them to what was happening on 1960s hardware and how much their watch outperforms a Connection Machine.


the memory capacity of society is astonishingly low


Christian Queinnec (Lisp in Small Pieces author) vaguely mentions that all dynamic languages are lisp under various clothings.


You sure of the 5us figure?

Working in the field, that seems overconfidently good, considering it's faster than most wire-to-wire SLAs of world-class FPGAs doing market data translation.


I am sure, 5us was measured on the switch, not on the server.

5us is actually quite slow.

This is from 2014, around the time I worked on this project:

https://stackoverflow.com/questions/17256040/how-fast-is-sta...

They cite "750 to 800 nanoseconds" wire to wire latency and that their next platform is going to be even faster.

They were using FPGA and yes, FPGA is faster, but not as much faster as many people would think.

First, market events come in isolation. There are no concurrent messages. WSE guarantees 1/10000th of a second between each message. So you have your entire machine dedicated to executing the order.

FPGAs are usually used to run multiple copies of the same net to speed up a simple problem, but that is not the case here.

Second, FPGAs are usually used as a shortcut to optimize the execution of the problem. With FPGA you say "rather than trying to solve this problem with generic instructions that add a lot of delays I will just design dedicated net that will not be bothered by the generic baggage".

So in generic assembly you may want to write a branch and the branch predictor may go the wrong way and that costs. On FPGA you design your net and so you just go straight to the point.

But it doesn't mean you can't design normal code to be fast. You just need to be aware of actual cost of every single instruction of the critical path.

And third, FPGAs are clocked slower. What this means is that you have to do a lot per clock cycle on an FPGA just to be on par with an x86 core.

The application I worked on was not intended for HFT. 5 us was an arbitrary goal we wanted to reach, knowing full well that it is way behind the HFT-ers.


This is fascinating. Did you come across any good resources to help learn how to optimise aggressively on SBCL, as you're describing?


This was many years ago and I don't remember the exact sources. It was more of a research project than a normal development project. I spent most of my time researching techniques needed to write that kind of application.

The largest influence were definitely LMAX articles and Disruptor pattern.

SBCL was comparatively easy. Basically, if anything caused problems I just moved it to C and called using FFI. Think in terms of writing a C program but using pieces of assembly when C is not enough for some reason.

Even though a lot of small pieces were moved to C, the application still felt like Lisp. It just orchestrated a large library of utilities to do various things. I still had a fully functional REPL, for example.
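
For flavour, a minimal sketch of the kind of FFI wrapper meant here, using CFFI (the library and function names are hypothetical):

  (cffi:define-foreign-library market-io
    (t (:default "libmarket-io")))        ; hypothetical shared library
  (cffi:use-foreign-library market-io)

  ;; int send_order(int fd, const void *buf, unsigned int len);
  (cffi:defcfun ("send_order" %send-order) :int
    (fd :int)
    (buffer :pointer)
    (length :unsigned-int))

  ;; (%send-order fd ptr len) now compiles down to a plain foreign call,
  ;; callable interactively from the REPL like any other Lisp function.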

The hard part of the project was full kernel bypass. After the initial setup, the application stopped talking to Linux kernel except for one CPU core that was devoted to running OS threads, some necessary applications and some non-performance-critical threads of the algotrading framework (like REPL, persistence, etc).

All cores except one were each completely owned by a single thread of the application and never did any context switch after the initial setup.


Thank you, that's very helpful. I've found the cepl videos instructive in a similar area: https://www.youtube.com/user/CBaggers/videos

I hadn't appreciated that level of thread isolation was possible in SBCL, but of course it makes sense. Presumably you had some kind of instrumentation to let you know if a stray syscall had slipped in?


> Presumably you had some kind of instrumentation to let you know if a stray syscall had slipped in?

Don't know if you can call it instrumentation... I wrote an extremely hacky patch for the kernel to detect when any piece of the kernel was running on anything other than core 0 after a certain flag was set.

Remember, it is not just syscalls. Even something as simple as accessing memory can cause a switch to the kernel to resolve a TLB entry if you don't set up your memory correctly to prevent this from happening.

Yeah... I know. Now a bunch of people will come and explain how this could be done the right way. I just didn't care at the time to invest more time than necessary to get this right.


Got it. That's rather lovely.


I assume that you used something like the isolcpus kernel parameter, then sched_setaffinity to put the process on the isolated CPU.

I'm very interested in the kernel bypass technique to talk to the hardware; I'm assuming it was the NIC's ring buffer... how was this achieved in userspace?


Hmmm, CL for composition of software components and most software, and anything needing low-level access in C. I could drink that kool-aid.


That is how Python got its spotlight with "Python" libraries, which are actually C.

And that is how plenty of polyglot devs can enjoy their favourite language.


Funnily enough, a lot of low-level coding is easier in CL than C (especially bit wrangling, and CL:DISASSEMBLE makes for easy introspection of performance-critical code too); it's usually the interfacing with a C-centric OS that gets easier using FFI wrappers.
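
A quick sketch of the sort of thing meant here:

  ;; Pull a field out of a status word with LDB, then check what the
  ;; compiler actually generates for it:
  (defun flags-byte (word)
    (declare (type (unsigned-byte 32) word)
             (optimize (speed 3) (safety 0)))
    (ldb (byte 8 16) word))   ; bits 16..23

  ;; (disassemble #'flags-byte) -- on SBCL, a couple of shift/mask
  ;; instructions, with no function call overhead to speak of.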


Others have made similar comments on comparing apples to oranges when comparing optimized CL to idiomatic Clojure, but what they didn't address was "idiomatic Clojure". Threading a sequence through a bunch of lazy operations is good for development time, but at this point I wouldn't consider it idiomatic for "production" code.

Two things in the implementation are performance killers: Laziness and boxed maths.

- baseline: Taking your original implementation and running it on a sequence of 1e6 elements I generated, I start off at 1.2 seconds.

- Naive transducer: Needs a transducer of a sliding window which doesn't exist in the core yet[0], 470ms

- Throw away function composition, use peek and nth to access the vector: 54ms

- Map & filter -> keep: 49ms

- Vectors -> arrays: 29ms

I'd argue only the last step makes the code slightly less idiomatic. One might even say that aggressively using juxt, partial and apply is less idiomatic than simple lambdas.

You can see the implementation here [1].

[0] https://gist.github.com/nornagon/03b85fbc22b3613087f6

[1] https://gist.github.com/bsless/0d9863424a00abbd325327cff1ea0...

Edit: formatting


Your post is a lot of fun! I have a fondness for these kinds of deep dives. That being said, I feel like comparing the initial unoptimized Clojure code to highly optimized Common Lisp is kind of unfair. I wonder how fast the Clojure code could run if given the same care. Maybe I'll give that a try tonight!


That would be great! I agree it's not a fair comparison -- the post was meant more along the lines of, "how far can I push Common Lisp and learn some things?" rather than a strict comparison of performance of optimized code in each language. As I said, Clojure is fast enough for my purposes nearly all the time.


I wrote it up here: http://noahtheduke.github.io/posts/2021-10-02-from-elegance-... I'm not a great writer, but hopefully you enjoy it!

I also found a small bug, that you'll want to use `(elt s (+ x 7))`, not `(+ x 8)`. `elt` is 0-indexed, so first and last of an 8 element list will be 0 and 7.


You're going to be severely constrained by the fact that idiomatic Clojure code uses immutable data structures, which theoretically cannot be as fast as mutable ones in SBCL. Even with transients, the Clojure hash map trie is an order of magnitude slower than SBCL's hash table.


Of course. I don't mean to imply that optimized Clojure could match optimized Common Lisp, just that the disparity wouldn't be quite as stark. For truly speed-focused code on the JVM, you have to write speed-focused Java.


I also write Clojure for food, but also for fun. I found that a lot of Clojure is fast enough but also speeding it up is relatively easy with some type hinting and perhaps dipping into mutable data structures or other tricks.

Relevant Stackoverflow which contains a lot of simple suggestions to hopefully shore up the deficit compared to SBCL: https://stackoverflow.com/questions/14115980/clojure-perform...


I'm sorry to say, but I'm not a fan of articles like this: you're not using the same data structures and algorithms, so what's the point?

To compare language compiler and runtime performance you should at least use similar data structures and algorithms.

> but, you can generate very fast (i.e., more energy efficient)

I'm actually not sure this is true; I was surprised the other day to find a study on this, and to find that the JVM was one of the most energy-efficient runtimes.

This was the link: https://thenewstack.io/which-programming-languages-use-the-l...

And what you can see is that execution time doesn't always mean more energy efficient. For example you can look at Go and see that it beats a lot of things in execution time, but loses to those in energy efficiency. Like how Go was faster than Lisp yet less energy efficient than Lisp.


Quite a few Lisp implementations allow you to drop into inline assembly. That puts them theoretically pretty close to maximum efficiency.


I'm not sure in what way you're using the word "efficiency" here?

The research showed that the Go versions of the programs were on average faster in terms of execution time and consumed less memory than the alternate Lisp versions, yet the Lisp versions consumed less electricity and were thus more energy efficient.

So the interesting bit here is that better performance and lower memory consumption doesn't always mean more energy efficient.

Now, there is definitely a link between performance and energy use; the research did show that, for the most part, faster execution was reflected in lower energy spent, but there was surprising variation within that range, where it wasn't always clear cut.


It would be cool if you updated the link to the Cookbook in your article from the old one (on Sourceforge) to the newer one (on Github): https://lispcookbook.github.io/cl-cookbook/ Best,


I will, thank you!


Your website inspires me, I've been on a similar career trajectory to you in some ways although at the opposite end of the earth. Starting in Physics and moving away in time, although I'm only 33 now. I love that you're putting your art on display. It's very rare to find scientists pursuing art in such a direct way.


I would be interested in at least the performance of type hinted clojure code.


The closest translation of the code, having already dropped the laziness of the Clojure version, was 4x faster. The rest of the speedups came from rewriting the code!


I miss my common lisp days, and I think rust being able to export C ABIs makes it a really great companion language for CL. I also think common lisp (or really, any fast lisp) is a really great tool for game development. The fact that you can redefine a program as it's running really helps iterate without having to set up your state over and over. Pair that with a fast, low-level language that can handle a lot of the lower level stuff (been trying to find time to look at Bevy on rust), and you have a killer combo.

The main reason I stopped using lisp was because of the community. There were some amazing people that helped out with a lot of the stuff I was building, but not a critical mass of them and things just kind of stagnated. Then it seemed like for every project I poured my soul into, someone would write a one-off copy of it "just because" instead of contributing. It's definitely a culture of lone-wolf programmers.

CL is a decent language with some really good implementations, and everywhere I go I miss the macros. I definitely want to pick it up again sometime, but probably will just let the async stuff I used to work on rest in peace.


I can understand the failure to build a critical mass of contributors, but can you share some examples of where your work was duplicated instead of built upon?

I’ve read through some of your async work in the past and from an initial glance, it seemed like you had the right idea by wrapping existing event libs and exposing familiar event loop idioms. At the very least, it seemed uncontroversial so I’m interested to see why others would choose not to build upon it.


> can you share some examples of where your work was duplicated instead of built upon?

Wookie (http://wookie.lyonbros.com/) was the main one, or at least the most obnoxious to me. I was trying to create a general-purpose HTTP application server on top of cl-async. Without naming any specific projects, it was duplicated because it (and/or cl-async) wasn't fast enough.

> At the very least, it seemed uncontroversial so I’m interested to see why others would choose not to build upon it.

A superficial need for raw performance seemed to be the biggest reason. The thing is, I wasn't opposed at all to discussions and contributions regarding performance. I was all about it.

Oh well.



I was not! Thanks for the tip.


I regularly work with Clojure.

This is probably an unpopular opinion in this thread, but despite having worked with it for years, I still don't much like it, mostly because it's far too terse and the syntax is so far removed from that of C-based languages. The other day I wrote a Java-based DTO and it was refreshing how clear and verbose everything was, despite the almost comical volume of code it took in comparison to what similar functionality would look like in Clojure. Plus, the state of tooling in Clojure is not the best.

I would also add that while you might initially do well with something like Clojure, it may be difficult to maintain, especially if you plan to make it a legacy product with O&M support in the future.


I'm having trouble thinking how the dto would look bad in a lisp. Should just be close to (defstruct field_name next_field...). What makes that so much worse than what it would be in Java?

The difficult to maintain line really rubs me the wrong way. I've seen messes in all languages I've worked with and see no reason to think a lisp would be worse. Just conjecture from all around.


Funny, I regularly work in Java. And I'm tired of writing stupid bags of data.


I agree with both of you :)


Well this article is about Common Lisp :).

You could easily use CLOS or Structs in Common Lisp to do DTO. If you were using CLOS for DTOs you'd also have generic functions to dispatch on said DTOs.
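
A minimal sketch of what that looks like (names invented for illustration):

  ;; The "DTO" itself is one form; a generic function gives you dispatch.
  (defstruct customer-dto
    (id    0  :type integer)
    (name  "" :type string)
    (email "" :type string))

  (defgeneric serialize (dto stream))

  (defmethod serialize ((dto customer-dto) stream)
    (format stream "{\"id\":~a,\"name\":~s,\"email\":~s}"
            (customer-dto-id dto)
            (customer-dto-name dto)
            (customer-dto-email dto)))

  ;; (make-customer-dto :id 1 :name "Ada" :email "ada@example.com")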


The hard to maintain issue seems to follow most dynamic languages around. What kind of those issues have you run in to in your time working with Clojure professionally?


To be honest: "hard to maintain", "technical debt", "total rewrite required", "monolith to microservice", probably "microservice to monolith" at some point; all of these appear no matter what the language, and I really don't think static analysis is the problem. Though I will give you that pretty much every popular dynamic language has a terrible runtime environment, which doesn't help at all. If I don't have comprehensive static analysis I want good introspection and reflection; I'd ideally want static manipulation and continuation.


In maintaining verbose codebases I often find that certain refactors or paths to improvement are closed off, because they just wouldn't be worth that much typing. It may make it easier to understand why something is going wrong, and that's usually the bulk of the intellectual work, but it also means the constructs around which the program was initially designed will be its constructs forever.


As with all of these forays, I applaud the author for learning new things, but these benchmarks are primarily testing reading from stdin and format printing to stdout, followed by testing bignum impls as mentioned elsewhere. For this reason, the title and benchmarks are a bit misleading.


...also testing half-compiled code. JIT-compiled tests that run for only a few seconds just discredit their authors. It's yet another example of how not to perform microbenchmarks.


Worth pointing out for those who don't know that the Java runtime does JIT optimizations of bytecode based on observed runtime patterns. Blew my mind when engineers were talking about "warming up the code paths" during startup of a large Java service and I found out it wasn't a joke or superstition.


Warming is a tiny part of microbenchmarking, actually. It's about knowing how and what the compiler optimizes, like lattice checks to ensure no extra array bounds checks in the code. Call-site optimizations (class hierarchy analysis): single site (direct calls, inlines), dual site (guarded checks), vs. 3-5 (inline caches), vs. full virtual calls. Card-marking of stores for concurrent GC. L2 and L3 CPU cache size awareness (a certain dataset may fit and the results are misleading)... There are a lot of other techniques to consider, including looking directly at the assembly code.

Microbenchmarking requires rather deep knowledge of how the JIT works, how the hardware (CPU/cache/memory) operates, the costs of certain system calls, and whatnot.


I just tentatively found a 10% speedup in the D compiler frontend by enabling profile guided optimization.


Java effectively always has profile-guided compilation, with perf counters and all.


It doesn't discredit anything. Sometimes what you care about is performance from a cold start. Serverless and CLIs, for example. JIT ain't gonna help you there, and this problem falls into that category.

EDIT: did you read the article? It is not a microbenchmark.


If you want a fast CLI and Java, you'd have the Java process already running and feed it input/output by some means. Overall Java is not a good fit for a classic CLI, at least not with the JIT -- it works in a pinch, of course, but it's well known that it has a slow bootstrap.

Of course I read the article; the benchmarks are at the end. The rest is about LoC (which is yet another benchmark, albeit an even more useless one). I even read all the code on GitHub; there is room for improvement there as well.


That’s such an extreme niche that unless specifically pointed out, it should not be given such attention.

But I do agree that the post is not a benchmark.


Is it weird that I don't like Common Lisp at all but I like Scheme a lot? I just never liked Lisp-2s and separate namespaces for functions and variables. But really that is the biggest issue for me. I'm sure if only Common Lisp existed it wouldn't bother me at all.

That being said, I think CL is a fantastic language, and there are a lot more libraries out there to do useful things than in Scheme. My C programming is really weak, so I find it challenging whenever I come across a library in C that isn't already wrapped.


I'm a bit the same. I've been writing Racket for a number of years now and looking back at Lisp I see a lot of ugliness that I don't really think I enjoy.

Racket has a nice package manager and module system that kind of works for me, and the documentation is honestly some of the best I've ever used, if not my favorite. Comparatively, I've tried using GNU Guile and found the online documentation to be horrendous, and trying to find online documentation for what's considered to be the "standard library" in Common Lisp still confuses me.

I love seeing people use CL and other Lisp-likes in the wild, and Norvig was a big inspiration for me.


What IDE do you use for racket? In emacs, I've found SLIME and its associated debugger to be more powerful than GEISER. I never could come to like Dr. Racket due to its lack of autocomplete and things like parinfer / paredit.


Racket-mode is available for emacs (and it's good)!



I liked the way coalton built a statically typed lisp-1 in CL.

Undoubtedly there are some issues I haven't thought through, and I'm too lazy to actually try to implement it, but I've always thought one should be able to make "CL1" (or some such) that's basically common lisp but with a single namespace for functions and variables.


Not really. I'm a diehard Schemer as well, for the same reasons: the Lisp-1 nature helps you treat procedures as true first-class values and fits in with Scheme's more functional style. Something you can do in CL also, just with a bit more ceremony. And CL programmers are more "haha, setf go brrrr" anyway.

That said, I'd rather use CL by far than any other language except Scheme, and there are cases where CL is the right thing and Scheme is not. The most brilliant, beautiful, joy to maintain enterprise system I've ever seen was written in CL.


I used to think it was irritating, until the very moment I tried naming an input parameter in Scheme "list" (while also constructing lists using "list" in the same function).

That was the moment I started my path to liking a separate variable and function namespace.

I also occasionally get bitten by this in Python, isn't "file" a perfect variable name for holding a handle to an opened file?


Not at all weird. I'm the opposite; I never liked Lisp 1s, and whenever I use one I always end up shadowing function or macro bindings by accident.


I’m exactly the same way: also, Lisp-1 code is frequently full of weird abbreviated variable names like “lst” when in CL I can just write “list” without worrying about clobbering an important function.


Not the same phone encoding challenge, but an interesting feature of the brace expansion built into bash. The map is the typical map of letters on a touch-tone phone, where "2" can be a, b, or c.

All letter combos for the number "6397":

  #!/usr/bin/bash
  # Keypad map: each digit maps to a brace list of its letters.
  m[0]=0;m[1]=1;m[2]="{a,b,c}";m[3]="{d,e,f}";m[4]="{g,h,i}";m[5]="{j,k,l}";
  m[6]="{m,n,o}";m[7]="{p,q,r,s}";m[8]="{t,u,v}";m[9]="{w,x,y,z}"
  var=$(echo ${m[6]}${m[3]}${m[9]}${m[7]})
  # eval re-parses the command so brace expansion runs after the
  # variables have been substituted, printing every combination.
  eval echo $var


I found Common Lisp to be surprisingly ahead of its time in many regards (debugging, repl, compilation and execution speed, metaprogramming), but unfortunately it doesn't have a large community, and it's showing its age (no standard package management, threading not built into the language). It's also dynamically typed which disqualifies it for large collaborative projects.


It has great package management with https://www.quicklisp.org/beta/ and some truly great and high-quality libraries, especially Fukamachi's suite of web libraries and so many others. Woo, for example, is the fastest web server: https://github.com/fukamachi/woo (faster than the runner-up, Go, by quite a bit).

For parallel computing, we use https://lparallel.org/. It's been great at handling massive loads across all processors elegantly. And for locking against overwrites on highly parallel database transactions, we use mutex locks that are built into the http://sbcl.org/ compiler, with very handy macros.
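
A minimal sketch of those two pieces (assuming Quicklisp is installed; the event-logging names are made up):

  (ql:quickload :lparallel)

  (setf lparallel:*kernel* (lparallel:make-kernel 8))  ; one worker per core

  ;; parallel map across the worker pool
  (lparallel:pmapcar (lambda (x) (* x x)) '(1 2 3 4 5))  ; => (1 4 9 16 25)

  ;; ...and an SBCL mutex guarding shared state:
  (defparameter *log-lock* (sb-thread:make-mutex :name "log-lock"))
  (defparameter *log* '())

  (defun record-event (event)
    (sb-thread:with-mutex (*log-lock*)
      (push event *log*)))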


Slightly off-topic but I'm in awe of Fukamachi's repos. That one person has built so much incredible stuff is amazing to me. Not sure it's enough to get me using CL all the time, but it's very impressive.


The math library we use is incredibly fast with quaternion matrix transformations: https://github.com/cbaggers/rtg-math/

The only gaps we've had between our production code and Lisp are PDF (we use the Java pdfbox), translating between RDF formats (also a Java lib), and encrypting JWT tokens for PKCE DPoP authentication (also Java).

The complete conceptual AI system and the space/time causal systems digital twin technology are all in Common Lisp (SBCL).

Also fantastic is the sb-profile library in SBCL, which lets you profile any number of functions and see the number of invocations and time used, as well as consing, all ordered by slowest cumulative time. That feature has been key in finding slow functions and optimizing them, leading to orders-of-magnitude speed improvements.
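
For anyone curious, the workflow is roughly this (the package name here is hypothetical):

  (sb-profile:profile "MY-APP")  ; instrument every function in a package
  ;; ... exercise the system ...
  (sb-profile:report)            ; calls, seconds and consing per function
  (sb-profile:unprofile)         ; remove the instrumentation again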


Are you able to go into detail as to what sort of AI technology you are using? When you mention causal systems, do you mean causal inference?


We basically build a space/time model of the world, where systems perform functions in events that take input states and change them into output states, such that those events either causally trigger each other, or the causal link from the input states to an event means that the event outputting those states is the cause of the current event.

The conceptual AI models operational concepts based on an understanding of how human concepts work, performs inference and automatic classification using those concepts, and then learns new concepts. The operational side of the digital twin uses functional specifications held elsewhere, which is also true of the operational concepts, which use specifications in the form of conceptual definitions.

And the technology takes in RDF graph data as input, builds the digital twin model from that data with extensive inference, then expresses itself back out as RDF graph data. (This makes https://solidproject.org/ the ideal protocol for us, where each pod is a digital twin of something.)


Do you have links to more information?


We have a beta running on that framework: https://graphmetrix.com The live app and Pod Server are at https://trinpod.us

We are working toward a commercial launch in the coming weeks. (We are adding Project Pods, Business Pods, and Site Pods, with harvesting of the semantic parse we do of PDFs into the pod, so we handle very big data.)


I don't really consider quicklisp to be "great package management" since you have to download it, install it, then load it. And don't forget to modify your sbcl init script to load it automatically for you. It felt quite cumbersome to get started using it, even though it was simple enough after that. Rust has truly great package management in my opinion. I run one command to install Rust and I immediately have access to all crates in crates.io.

EDIT: It's kind of ironic for me to make this claim since I use Emacs as my editor...


> It's also dynamically typed which disqualifies it for large collaborative projects.

I've been around the block for long enough to see how far the pendulum swings on this one. I'm guessing that it starts going the other way soon.


In my opinion after years in the industry, the benefits of type safety are too compelling and well known, to the point that I don't even feel like debating it. It's not a fad that will change periodically.


It's a fad that has changed periodically, though I think the convergence of static and dynamically typed languages (dynamic holes in the former, gradual typing or optional static typecheckers for the latter) will continue to reduce the significance of the current state of the fad on language selection in practice.

It probably won't reduce the intensity of the way a small minority of the community treats the language-level difference in holy wars, though.


I'm not following, could you elaborate?


Dynamic/static typing came in/out of fashion several times already. Any trend is temporary; neither kind of type system is a help or impediment in collaboration.


I think it originally tanked as a backlash against how verbose and painful it was in Java and friends (as well as the limited benefits because of things like nullability)

Modern static type systems are a totally different beast: inference and duck-typing cut down on noise/boilerplate, and the number of bugs that can be caught is dramatically higher thanks to maybe-types, tagged unions/exhaustiveness checking, etc. I think we've circled around to a happy best-of-both-worlds and the pendulum is settling there.


Common Lisp was on the right track with gradual typing and decent inference, which makes a nice compromise compared to jumping static hoops to wrap the problem around a more rigid language.

The type declaration syntax definitely could use some love; I think it's a shame a more convenient syntax was never standardized. And sum types etc. would be nice, of course. It's all perfectly possible.
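
A minimal sketch of what that gradual typing looks like in practice (SBCL's compile-time warnings here are an implementation nicety, not something the standard requires):

  ;; A global ftype proclamation plus a runtime CHECK-TYPE at a boundary;
  ;; SBCL will typically warn at compile time when a call site disagrees.
  (declaim (ftype (function (double-float double-float) double-float)
                  midpoint))

  (defun midpoint (a b)
    (/ (+ a b) 2d0))

  (defun safe-midpoint (a b)
    (check-type a double-float)
    (check-type b double-float)
    (midpoint a b))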


I agree with everything you said here.

However, you have to consider that Common Lisp itself is quite different from other dynamically typed languages.

I find that, after the initial adjustment period with the language (which is significant, I admit), it's surprisingly hard to write messy code in CL, certainly harder than in Python or Ruby. At the very least, the temptation to do so is lower, because there are fewer obstacles to expressing sophisticated ideas succinctly.

And no, I am not talking about the ability to define your own macros and create DSLs. I think it has to do with the extensive selection of tools for creating short-lived local bindings, the huge selection of tools for flow control, and the strict distinction between dynamic and lexical variables.

There's just something about it that sets it apart from other dynamically-typed languages, even without the gradual typing aspect and even without the speed difference. Navigating a source codebase in Python without strict type annotations is like navigating in the dark in a swamp. I don't have the same issues in Common Lisp for the most part.

Maybe this has more to do with undisciplined programmers self-selecting out of CL than it has to do with any aspect of CL itself.

And on top of the excellent and unique language design, you have:

* A powerful CFFI

* An official specification

* The "REPL-driven" development style (if you want it)

* Several well-maintained implementations that generate high-performance machine code

* The unique condition system

* Literally decades of backward compatibility

* A core of stable, well-designed packages, including bindings to a lot of "foundational" libraries

* Macros if you really do want to invent your own syntax or DSL

Probably the only big downside is that the developer ecosystem is still focused around Emacs. That too is changing gradually but steadily, with Slyblime (SLY/SLYNK ported to Sublime Text), Slimv and Vlime (Vim ports of SLIME/SWANK), the free version of LispWorks for light-duty stuff, and at least one Jupyter kernel.

Also, Roswell (like Rbenv or Pyenv) and Qlot or CLPM (like Bundler or Pipenv) help create a "project-local" dev experience that's similar to how things are done in other language ecosystems.

And of course there is Quicklisp itself, which is a rock solid piece of software, and fast too!

Python and Ruby have their own merits, for sure, and there are plenty of things I have in Python that I wish I had in CL. But it really doesn't seem right to compare them, CL seems like a totally different category of language.


I think the tradition of using long, descriptive names in Common Lisp for all identifiers cements much of that experience. Using 1-3 character variable names feels natural in C, but (outside of the most trivial circumstances) it is a faux pas in Common Lisp. A better vocabulary allows for clearer formulation of the nature and intent of the constructs, improving readability greatly.


Line noise is a red herring in the static/dynamic comparison. You will still run into serious problems trying to shove ugly human-generated data into your nice clean type system.

For mechanical things where you the programmer are building the abstractions (compilers, operating systems, drivers) this is a non-issue, but for dealing with the ugly real world dynamic is still the way to go.


I'm not sure I understand what makes dynamic typing better for handling real-world data. Yes, the data is messy, but your program still has to be prepared to handle data in a specific shape or shapes. If you need to handle multiple shapes of data, e.g. varying JSON structures, you can still do that with static types, using sum types and pattern matching.


Most modern static languages have some way to hold onto a dynamically typed object, stuff them into containers of that dynamic type and do some basic type reflection to dispatch messy dynamic stuff off into the rest of the statically typed program. Sometimes it does feel fairly bolted on, but the problem of JSON parsing tends to force them all to have some reasonably productive way of handling that kind of data.


Yes, but this same argument works the other way: dynamically typed languages can do a half-assed impression of static languages as well. So it's a tradeoff depending on your domain.


Having programmed for 10 years in a fully dynamic language, though, I think I prefer the other way around. You tend to wind up documenting all your types anyway, one way or another, either with doc comments or policies around naming parameters, and you wind up building runtime type validation systems. Statically typed languages with cheats seem to get you to the right sort of balance much sooner.


The right balance really depends on your domain. The reason I'm so big on dynamic typing is because the most important part of the product I work on is a ton of reporting from an SQL database. I shift as much work as possible to the database, so the results that come back don't need to be unpacked into some domain model but are ready to go for outputting to the user. If I tried to do this in a static language I'd have a new type for every single query, then have to convince my various utility functions to work with my type zoo.


People seem to interpret blacktriangle's post in a parser setting. I don't know why, but if you're writing parsers, you're explicitly falling into the category he's mentioning where static types make a lot of sense.

GP's claim was that Java was too verbose. But verbosity isn't really the problem. There are tools for dealing with it. The problem is a proliferation of concepts.

A lot of business applications go like this: take a myriad of input through a complicated UI, transform it a bit and send it somewhere else. With very accurate typing and a very messy setting (say, there's a basic model with a few concepts in it, and then 27 exceptions), you may end up modeling snowflakes with your types instead of thinking about how to actually solve the problem.


If you're referring to the "parse, don't validate" article, it's using the word in a different sense. The idea is that you make a data model that can only represent valid values, and then the code that handles foreign data transforms it into that type for downstream code to handle instead of just returning a boolean that says "yep, this data's fine"


Right, but where this gets obnoxious is when you're writing code at the "edge", where customers can send you data, and the formats which you accept and process can change wholly and frequently. I've dealt with this problem before in a Scala setting where we created sealed traits with Value classes for each of our input types, but it was obnoxious enough that adding a new form of input was pretty costly from an implementation-time perspective, enough that handling a new input format was something we planned explicitly for as a team. Sure, you could circumvent this by using something like Rust serde_json's Json Value type, but then you're basically rolling an unergonomic form of what you could do in a couple of lines of Python.

I've mostly come to the conclusion that dynamic languages work well wherever business requirements change frequently and codepaths are wide but shallow (e.g. many different codepaths but none of them particularly involved). Static languages work better for codepaths that are narrow but deep, where careful parsing at API edges and effective type-level modelling of data can create high-confidence software; in these situations the logic is often complicated enough that requirements just can't change that frequently. I wish we had a "best of both worlds" style to help where you have wide and deep codepaths, but alas that'll have to wait for more PLT (and probably a time when we aren't forming silly wars over dynamic vs static typing as if one were wholly superior to the other).


I've found this to be a non-issue in Clojure with specification validation. Some call this gradual typing


This has not been my experience.


Alexis King's "Parse, don't validate" is pretty much the final word on using type systems to deal with "messy real world data": https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...

tl;dr when used properly, static type systems are an enormous advantage when dealing with data from the real world because you can write a total function that accepts unstructured data like a string or byte stream and returns either a successful result with a parsed data structure, a partial result, or a value that indicates failure, without having to do a separate validation step at all -- and the type system will check that all your intermediate results are correct, type-wise.


I’ve been using the techniques in that article for years in JavaScript, CL and Clojure. While static types are a notable part of it, the more important point is just learning to design your systems to turn incoming data into domain objects as soon as possible.
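For instance, a rough Common Lisp sketch of that boundary-parsing idea (the USER struct, PARSE-USER and the hash-table input are all invented for illustration; the table could come from any JSON decoder):

  (defstruct user
    (name "" :type string)
    (age  0  :type (integer 0)))

  (defun parse-user (table)
    ;; Convert untrusted input into a USER right at the boundary, so
    ;; downstream code only ever sees well-formed USER objects.
    (let ((name (gethash "name" table))
          (age  (gethash "age" table)))
      (check-type name string)
      (check-type age (integer 0))
      (make-user :name name :age age)))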


There are runtime analogs for most of the modeling techniques people use in statically typed languages.
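For example, a minimal Common Lisp sketch of such a runtime analog (the type and function names are made up):

  ;; A named "domain type", roughly playing the role a refined static type would.
  (deftype percentage () '(integer 0 100))

  (typep 150 'percentage)  ; => NIL
  (typep 42 'percentage)   ; => T

  ;; Declarations against such types can also feed SBCL's compile-time checks.
  (defun scale (p)
    (declare (type percentage p))
    (/ p 100.0))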


You're not the first, nor the fourth for that matter, person to respond to dynamic typing advocacy with that blog post, and it's an interesting post, but it misses the whole point. The problem is not enforcing rules on data coming in and out of the system. The problem is that I have perfectly valid collections of data that I want to shove through an information processing pipeline while preserving much of the original data, and static typing systems make this very powerful and legitimate task a nightmare.


Not a nightmare at all. For example, if you're doing JSON processing in Rust, serde_json gives you great tools for this. The serde_json::Value type can represent any JSON value. You parse JSON to specific typed structures when you want to parse-and-validate and detect errors (using #[derive(Serialize, Deserialize)] to autogenerate the code), but those structures can include islands of arbitrary serde_json::Values. You can also parse JSON objects into structs that contain a set of typed fields with all other "unknown" fields collected into a dynamic structure, so those extra fields are reserialized when necessary --- see https://serde.rs/attr-flatten.html for an example.


> The problem is that I have perfectly valid collections of data (...) and static typing systems make this very powerful and legitimate task a nightmare.

What leads you to believe that static typing turns a task that essentially boils down to input validation into "a nightmare"?

From my perspective, with static typing that task is a treat and all headaches that come with dynamic typing simply vanish.

Take for example Typescript. Between type assertion functions, type guards, optional types and union types, inferring types from any object is a trivial task with clean code enforced by the compiler itself.


Presumably the GP's data is external and therefore not checkable or inferrable by typescript. This makes the task less ideal, but still perfectly doable via validation code or highly agnostic typing


> Presumably the GP's data is external and therefore not checkable or inferrable by typescript.

There is no such thing as external data that is not checkable or inferable by typescript. That's what type assertion functions and type guards are for.

With typescript, you can take in an instance of type any, pass it to a type assertion function or a type guard, and depending on the outcome either narrow it to a specific type or throw an error.


You said:

> inferring types from any object is a trivial task

This is true for values defined in code, but TypeScript cannot directly see data that comes in from e.g. an API, and so can't infer types from it. You can give the data types yourself, and you can even give it types based on validation logic that happens at runtime, and I think this is usually worth doing and not a huge burden if you use a library. But it's disingenuous to suggest that it's free.

The closest thing to "free" would be blindly asserting the data's type, which is very dangerous and IMO usually worse than not having static types at all, because it gives you a false sense of security:

  const someApiData: any = { foo: 'bar' }

  function doSomethingWith(x: ApiData) {
    return x.bar + 12
  }

  type ApiData = {
    foo: string,
    bar: number
  }

  // no typescript errors!
  doSomethingWith(someApiData as ApiData)
The better approach is to use something like io-ts to safely "parse" the data into a type at runtime. But, again, this is not without overhead.


> This is true for values defined in code, but TypeScript cannot directly see data that comes in from eg. an API, and so can't infer types from it.

No, that's not right at all. TypeScript allows you to determine the exact type of an object in any code path through type assertions and type guards.

With TypeScript you can get an any instance from wherever, apply your checks, and from thereon either throw an error or narrow your any object into whatever type you're interested in.

I really do not know what leads you to believe that TypeScript cannot handle static typing or input validation.


They don't, though.

For example: a technique I've used to work with arbitrary, unknown JSON values, is to type them as a union of primitives + arrays of json values + objects of json values. And then I can pick these values apart in a way that's totally safe while making no dangerous assumptions about their contents.

Of course this opens the door for lots of potential mistakes (though runtime errors at least are impossible), but it's 100% compatible with any statically-typed language that has unions.


At the edges sure, but why allow that messiness to pervade the system instead of isolating it to the data consuming/producing interfaces?


You still have to deal with the ugliness in a dynamic language too, but you might be tempted to just let duck typing do its thing, which could lead to disastrous results. Otherwise, you'll have to check types and invariants, at which point you might as well parse the input into type-safe containers.


Can you elaborate on that? As I see it dynamic was popular in the 90's (Python, JS, Ruby), but outside of that it's always been pretty much dynamic for scripting and static for everything else.


Consider that the first Fortran (statically typed) and Lisp (dynamic) implementations date back to the late 1950s. Since then there has been a constant tug of war between these camps, including BASIC, COBOL, Smalltalk, Pascal, and trends falling in and out of favour.

All this, however, is rather orthogonal to the strength of type systems. The CL type system, for instance, is stronger than that of Java or C.


None of those languages were popular in the 90s.


> None of those languages were popular in the 90s.

JS was, because browsers. Python was starting to be toward the end of the 90s. Ruby (as I understand) was in Japan though it wasn't until Rails took off that it became popular elsewhere. Perl (not on the list but similar to those on the list) definitely was.


Perl was!


Common Lisp supports type annotations. There is even an ML-style language implemented in it (see Coalton). Quicklisp [1] is used for package management, bordeaux-threads [2] for threading.

1. https://www.quicklisp.org/index.html

2. https://github.com/sionescu/bordeaux-threads
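To give a flavour of those last two, a minimal sketch (bt: is the conventional bordeaux-threads package nickname; the thread name is arbitrary):

  (ql:quickload :bordeaux-threads)

  (let ((worker (bt:make-thread (lambda () (reduce #'+ '(1 2 3)))
                                :name "worker")))
    (bt:join-thread worker))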


Common Lisp has a smaller community than the currently most popular languages, but I'm consistently impressed by the range and quality of the libraries the community has created (despite all the "beta" disclaimers) [1]

Regarding type-checking, common lisp is expressive enough to support an ML dialect (see coalton), and is easily extended across paradigms [2]

1. https://project-awesome.org/CodyReichert/awesome-cl

2. https://coalton-lang.github.io/


> It's also dynamically typed which disqualifies it for large collaborative projects.

Like Github or WordPress?


Are those considered "good"? GitHub is meh at best considering it's 13 years old and the billions of dollars poured into it, and as for WordPress, I don't think anyone can reasonably say that it's sane software. They are both good arguments against dynamic typing IMHO (especially the latter).


If what you're saying is true and that software is mediocre at best, this implies that software quality and success have no correlation. Or perhaps that software quality is not at all what the users of this site tend to think it is.


> this implies that software quality and success have no correlation.

I mean, like in most other fields, no? The most successful movies, books, foods, artworks, furniture, ... are fairly different from the best ones.


But the least successful art is bad by every measure. I mean, for example, the movies that score 1.0-2.0 on IMDb. But yeah, perhaps the comic book movies are not unlike WordPress.


Quality of code is not necessary for success, nor does it guarantee it.

It does improve your quality of life as an engineer, I can promise you that.


> It's also dynamically typed which disqualifies it for large collaborative projects.

That’s quite an absolute statement. At least Erlang/Elixir users would tend to disagree. “Dynamically typed” can still represent a huge variety of approaches, and doesn’t have to always look like writing vanilla JavaScript, for example.


I hear your argument as "there exist dynamically typed languages therefore the benefits of typing don't matter". To be positive I'll say that it's better than the pendulum argument.

I'm aware that there exist dynamically typed languages in which large projects are written, I'm saying that they would be better off with type safety.


> It's also dynamically typed which disqualifies it for large collaborative projects.

You can add type declarations and a good compiler will check against them at compile time: https://medium.com/@MartinCracauer/static-type-checking-in-t...


It is non-optionally strongly typed.

It is optionally as statically typed as you want, depending on what compiler you use. I am mostly familiar with SBCL, which does a fair bit of type inference and will tell you at length where your lack of static typing means it will produce slower code.
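For instance, a small sketch of what those optional declarations look like under SBCL (the function name is invented):

  ;; Proclaim a signature; SBCL checks call sites against it at compile
  ;; time and can use it to generate tighter fixnum arithmetic.
  (declaim (ftype (function (fixnum fixnum) fixnum) add-counts))

  (defun add-counts (a b)
    (declare (optimize speed))
    (+ a b))

  ;; A call such as (add-counts "one" 2) will now typically be flagged
  ;; with a warning at compile time rather than only failing at runtime.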


I’ve been increasingly coming around to the opinion that computers have gotten kind of boring. I miss how weird and different computers used to be, both in terms of hardware and software.

For lisps, I think Racket and Clojure feel the most modern.

But I’m starting to come around to the idea that there are enough modern, hyper modern, post-modern, languages and programmers out there.

Some of the value in learning Common Lisp might be in its value for living software archaeology. You can find actual code written at the time people were figuring out Big Ideas that we take for granted today. And usually the process of synthesis means that Big Ideas lose a lot of associated commentary and thinking that fed in.

There’s a place for modern languages that conform to the same general set of sensibilities. But there’s also joy in gettin’ weird with some of the old stuff. It’s smugly satisfying to beat the new kid on the block with old tricks on occasion.


If you're coming from Java, you might want to look at Clojure. It has immutable datastructures and Java interop.


From the author:

> Even though Clojure might be a more obvious choice for someone, like me, who is used to working with the JVM, I actually wanted to try using something else with a lighter runtime and great performance.


I just used a Clojure library from a Kotlin project via Java interop. And it wasn't even weird.


Or Armed Bear Common Lisp.


Unless you go out of your way to write non-idiomatic Clojure (i.e. Java-with-parentheses Clojure), Clojure will be the slowest of the three by far.


I've been playing around with designing programming languages in Common Lisp lately.

I'm curious how far it's possible to push performance by generating Lisp code in SBCL compared to the classical C interpreter goto loop.

https://github.com/codr7/snabl


This is a pretty introductory CL article, mostly a commentary on Norvig's solution to the problem. Still, I learned about the #. readmacro from it. The conclusion: "[The Lisp implementation] was the fastest implementation for all input sizes except the largest one, where it performed just slightly worse than my best Java implementation." GH repo at https://github.com/renatoathaydes/prechelt-phone-number-enco.... Sounds like he was mostly measuring the performance of the SBCL bignum implementation.


The tests are just a few seconds, so it's measuring bootstrap and compilation time for the most part.


It's true that they're just a few seconds, but if he were measuring compilation time then none of the Java tests would come in under 10 000 ms, and (although the graph isn't usefully labeled) it looks like they're around 100 ms.


One of the articles shows that SBCL performance does not improve by first compiling to a binary executable. Compilation overhead in SBCL is therefore negligible.


I tried experimenting with Common LISP but never understood how one was supposed to deploy a finished binary to the end user. It seemed like you were supposed to load up an environment? It made me leery of continuing.


For most common-lisp implementations you can just save the current state as an executable, along with specifying an entry-point.

Obviously doing this from your dev environment is a bad idea, so most people write a script that loads the system and dumps the image. Or you can use one that someone else has already written.

Not all lisps work this way (most notably ECL does not), and you can use ASDF's program-op to create an executable directly from a system definition, which should work on any implementation supported by ASDF.
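A minimal sketch of such a build script for SBCL (the system and entry-point names are invented):

  ;; build.lisp -- load the system, then dump an executable image.
  (require :asdf)
  (asdf:load-system "my-app")
  (sb-ext:save-lisp-and-die "my-app"
                            :executable t
                            :toplevel #'my-app:main)

Run it with something like: sbcl --non-interactive --load build.lisp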


Thanks, the saving the dev environment part was the part that scared me. It certainly helps to have someone tell you you're not crazy. Maybe I'll give it another try sometime!


One of the things I've learned with powerful tools is that, by their nature, they empower you to do bad things almost as much as they empower you to do good things.

It's even alluded to in the ruby bare-words example in this famous talk[1]

1: https://www.youtube.com/watch?v=3se2-thqf-A


Nothing to worry about. Yes, the minimal executable you get with SBCL is a bit fat, but for any real-world application which does some actual work, the size is a non-issue. Especially as the executables start up very fast: no dynamic linking is necessary, since most of the code is referenced statically.

For any sizable Lisp project you would have an ASDF file to load the system; with that you can create an easy build script, or even a makefile, which loads your program via ASDF and then builds the executable. It is described well here: https://stackoverflow.com/questions/14171849/compiling-commo...

For consistency, I would recommend building the executable from freshly loaded source, not from a Lisp image that has been used for development.


You can just run make when you are ready to make a clean build and executable. Same as any other language.


We have a makefile, and "sudo make" creates an executable.

It's slightly tricky in that the whole make process can only happen on a single thread, so you have to turn off all parallel threads during make; we have a :make key on many functions that turns off parallelism when building. Ultimately the makefile uses (asdf:make :package-name) to compile to an exe. The other tricky part is getting a web server in the exe to work, and that's handled like this:

  (handler-case (bt:join-thread
                 (find-if (lambda (th)
                            #+os-unix (search "clack-handler-woo" (bt:thread-name th))
                            #+os-windows (search "hunchentoot" (bt:thread-name th)))
                          (bt:all-threads)))
    ;; Catch a user's C-c
    (#+sbcl sb-sys:interactive-interrupt
     #+ccl ccl:interrupt-signal-condition
     #+clisp system::simple-interrupt-condition
     #+ecl ext:interactive-interrupt
     #+allegro excl:interrupt-signal
     () (progn
          (format *error-output* "Aborting.~&")
          (stop)
          (uiop:quit)))
    (error (c) (progn
                 (format t "Woops, an unknown error occured:~&~a~& - restarting service..." c)
                 #+os-unix (trivial-shell:shell-command "service trinity restart"))))


See this recipe: https://lispcookbook.github.io/cl-cookbook/scripting.html The core of it is to call "sb-ext:save-lisp-and-die" after you have loaded the system, and the portable way across implementations is to use asdf:make, with a few extra lines in your system declaration (.asd).
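For reference, a sketch of what those extra lines look like in a hypothetical my-app.asd (all names invented):

  (asdf:defsystem "my-app"
    :components ((:file "main"))
    :build-operation "program-op"
    :build-pathname "my-app"
    :entry-point "my-app:main")

After that, (asdf:make "my-app") from a fresh image should produce the executable.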


It's not quicker to write code in Lisp when you don't know Lisp, and you have to learn it to make a change to some Lisp code that someone randomly added to the codebase.

This is true of all niche languages that are supposedly quicker to use than regular mainstream languages.


Your first part is tautological but worthless. It's never faster to write in a language you don't know versus one you do know.

EDIT: To amend the preceding sentence, it's almost never faster. There are probably languages which people know that are sufficiently in conflict with the problem domain that they could be faster in some other new-to-them language versus the one they know for specific problems.


APL comes to mind as a language that could be faster to learn and use for a particular task, if your particular task involves a lot of pure matrix combination and manipulation.

though, the fact that it wants its own keyboard is a bit of a hurdle :)


Some weeks ago I tried to start learning Lisp too.

I found it odd, but I’ve seen weirder. Then I tried the simple example of writing a function to compute the nth fibonacci number (it’s almost the first example). After testing some numbers, I thought it seemed slow, so I quickly implemented the same function in Rust (with recursion too).

Rust took less time compiling and computing fib(n) for n in 1..50 than Lisp took to compute fib(50). Maybe I did something very wrong, but for now I’d rather learn Rust better.


I believe that unless you put the inline pragma on the Lisp function, the compiler isn't allowed to optimize the recursion.

In general, Common Lisp doesn't favor recursive functions (as opposed to Scheme), and as far as I know, most implementations don't put huge amounts of effort into optimizing it.

I'd be interested to see whether the same written using the loop (or iterate:iter) macros would still be as slow.


If you make it tail recursive, it's insanely fast in Common Lisp (SBCL).


How would you code a tail recursive Fibonacci in LISP?


If you were a non LISP programmer and thought "Fibonacci using recursion", you might think you'd have something like fib(a)=fib(a-1)+fib(a-2), and scoff at its possible performance, and also wonder how that would be tail call optimized. But as a LISP programmer, you'll think of this in the way a Java programmer would think of a loop, and just treat this as a count over two values n times, adding along the way. And this is why I don't like LISP: because I have programmed assembly and built up from there, and doing loops using tail call recursion seems to be a lie. It is a faulty abstraction. It is a loop. You'd be better off doing it in a loop, in any sane language, and if you do it right in LISP, the code will generate a loop in assembly language. Just the fact that this thread devolved into how to specify compiler options so that it actually generates that loop shows how absurd the whole thing is.


Do "for" and "while" constructs also seem silly? After all they "compile to a loop" as well.

The purpose of all these loop constructs is to place constraints on some essential aspect of the loop up front, thus narrowing the possible range of effects of that wicked goto, by encoding common patterns. "For" loops guarantee the number of iterations. "While" loops guarantee the exit condition. And tail recursion guarantees what state gets modified in the loop.

It is strange the way you say "better off doing it in a loop". As you say, it is a loop. What construct would you prefer? GOTO?


I think the appeal of expressing loops with recursion is that it lets you to avoid mutation. But I agree with the point that Rust folks frequently make, about how being able to mutate things safely is a lot nicer than avoiding all mutation.


And where exactly is the problem with the compiler rewriting tail calls into a loop on the assembly level? Calling this a "lie" is pretty childish.


There may be a faster or cleverer way to do it but here's a basic tail recursive fibonacci:

    (defun nth-fibonacci (n &optional (a 0) (b 1))
      (if (= n 0) 
          a 
          (nth-fibonacci (- n 1) b (+ a b))))


WEB> (time (nth-fibonacci 9999)) Evaluation took: 0.005 seconds of real time 0.004520 seconds of total run time (0.000168 user, 0.004352 system) 100.00% CPU 13,120,881 processor cycles 4,716,256 bytes consed

20793608237133498072112648988642836825087036094015903119682945866528501423455686648927456034305226515591757343297190158010624794267250973176133810179902738038231789748346235556483191431591924532394420028067810320408724414693462849062668387083308048250920654493340878733226377580847446324873797603734794648258113858631550404081017260381202919943892370942852601647398213554479081823593715429566945149312993664846779090437799284773675379284270660175134664833266377698642012106891355791141872776934080803504956794094648292880566056364718187662668970758537383352677420835574155945658542003634765324541006121012446785689171494803262408602693091211601973938229446636049901531963286159699077880427720289235539329671877182915643419079186525118678856821600897520171070499437657067342400871083908811800976259727431820539554256869460815355918458253398234382360435762759823179896116748424269545924633204614137992850814352018738480923581553988990897151469406131695614497783720743461373756218685106856826090696339815490921253714537241866911604250597353747823733268178182198509240226955826416016690084749816072843582488613184829905383150180047844353751554201573833105521980998123833253261228689824051777846588461079790807828367132384798451794011076569057522158680378961532160858387223882974380483931929541222100800313580688585002598879566463221427820448492565073106595808837401648996423563386109782045634122467872921845606409174360635618216883812562321664442822952537577492715365321134204530686742435454505103269768144370118494906390254934942358904031509877369722437053383165360388595116980245927935225901537634925654872380877183008301074569444002426436414756905094535072804764684492105680024739914490555904391369218696387092918189246157103450387050229300603241611410707453960080170928277951834763216705242485820801423866526633816082921442883095463259080471819329201710147828025221385656340207489796317663278872207607791034431700112753558813478888727503825389066823098683355695718137867882982111710796422706778536913192342733364556727928018953989153106047379741280794091639429908796650294603536651238230626 WEB>


You may want to put extra spaces in front of those lines, and also manually break up that massive number.


Yep, that's how you'd do it. So long as your CL implementation supports tail call optimization, that will be on par with the same algorithm using loop or another looping construct.


SBCL doesn't optimize tail calls by default.

If I recall you have to set the optimization level prior to compiling the function using `(declaim (optimize xxx))` - where xxx is something I've forgotten. Perhaps someone can come along and point out what xxx should be?


  ; disassembly for NTH-FIBONACCI
  ; Size: 94 bytes. Origin: #x2264E531                          ; NTH-FIBONACCI
  ; 31:       498B4510         MOV RAX, [R13+16]                ; thread.binding-stack-pointer
  ; 35:       488945F8         MOV [RBP-8], RAX
  ; 39:       488B55F0         MOV RDX, [RBP-16]
  ; 3D:       31FF             XOR EDI, EDI
  ; 3F:       E8AC343BFF       CALL #x21A019F0                  ; GENERIC-=
  ; 44:       750A             JNE L0
  ; 46:       488B55E8         MOV RDX, [RBP-24]
  ; 4A:       488BE5           MOV RSP, RBP
  ; 4D:       F8               CLC
  ; 4E:       5D               POP RBP
  ; 4F:       C3               RET
  ; 50: L0:   488B55F0         MOV RDX, [RBP-16]
  ; 54:       BF02000000       MOV EDI, 2
  ; 59:       E8C2323BFF       CALL #x21A01820                  ; GENERIC--
  ; 5E:       488BC2           MOV RAX, RDX
  ; 61:       488945D8         MOV [RBP-40], RAX
  ; 65:       488B55E8         MOV RDX, [RBP-24]
  ; 69:       488B7DE0         MOV RDI, [RBP-32]
  ; 6D:       E84E323BFF       CALL #x21A017C0                  ; GENERIC-+
  ; 72:       488BF2           MOV RSI, RDX
  ; 75:       488B45D8         MOV RAX, [RBP-40]
  ; 79:       488BD0           MOV RDX, RAX
  ; 7C:       488B7DE0         MOV RDI, [RBP-32]
  ; 80:       B906000000       MOV ECX, 6
  ; 85:       FF7508           PUSH QWORD PTR [RBP+8]
  ; 88:       E99507DBFD       JMP #x203FED22                   ; #<FDEFN NTH-FIBONACCI>
  ; 8D:       CC10             INT3 16                          ; Invalid argument count trap
Looks like it's doing tail call optimization to me, and this is without doing anything special with declaim. Note that the jump back to the top is a JMP, not a CALL.

http://www.sbcl.org/manual/index.html#Debug-Tail-Recursion


Ah, so as long as the debug optimization quality is 2 or less you should get tail calls. I found when I tried it (several years ago) that the default was set up for debugging.

I wonder if different installs (or perhaps it's SLIME) set this value to different levels.

Something to bear in mind anyway.


Wow, that worked, thanks!!!

  WEB> (declaim (optimize (debug 0) (safety 0) (speed 3)))
  NIL
  WEB> (defun nth-fibonacci (n &optional (a 0) (b 1))
         (if (= n 0)
             a
             (nth-fibonacci (- n 1) b (+ a b))))
  WARNING: redefining LOBE/SRC/WEB::NTH-FIBONACCI in DEFUN NTH-FIBONACCI
  WEB> (time (nth-fibonacci 9999))
  Evaluation took:
    0.002 seconds of real time
    0.001887 seconds of total run time (0.001887 user, 0.000000 system)
    100.00% CPU
    5,476,446 processor cycles
    4,715,760 bytes consed


Oh, that's interesting! We don't use that, but we use tail recursion extensively.

https://0branch.com/notes/tco-cl.html#sec-2-2


I haven't looked into trying it and don't really have time to, so I don't know, but with Lisp you can, as a general rule, make anything very, very fast (at least with compiled SBCL).


If I absolutely had to write a recursive solution, something like this (it should have all recursive calls in tail position, and may require trading debuggability for speed):

  (defun tail-fib (n &optional (v0 0) (v1 1))
    (cond ((zerop n) v0)
          (t (tail-fib (1- n) v1 (+ v0 v1)))))


I don’t know much Lisp. Is that why, apart from the syntax, I see no difference from emptybits’s nth-fibonacci?


The only way anyone but you can know if you did anything wrong is if you include your code, and your common lisp implementation. They don’t all have the same performance.


It’s not like I did something weird, I pretty much copy pasted from

https://lisp-lang.org/learn/functions

I know the reason for the time taken is because it’s not tail recursive, and that making it tail recursive would make it almost immediate, but for me that was not the point.

The thing is, I don’t really know how to write performant rust code, and yet my naive implementation in rust works better than the example in learn-lisp.

To me that means performant Lisp is non trivial (meaning you need deep understanding to achieve it). If you show a slow but easier to understand example, it’s because fast examples are way harder to understand.


> I know the reason for the time taken is because it’s not tail recursive, and that making it tail recursive would make it almost immediate, but for me that was not the point.

Assuming your rust version is not tail-recursive, this is not true. Of course a tail-recursive (or iterative, for that matter) version would be way faster, but that is independent of the language used[1]. The naive algorithm in both Rust and Lisp should have relatively similar runtimes, and for me (after I properly enabled optimizations) they were.

Also, tail recursion is a red-herring here for another reason. For both Rust and Lisp, I would use an iterative approach, not a tail-recursive approach, like the example in the Rust num_bigint docs[2].

1: A theoretical language that automatically memoized pure functions would make this false, but that doesn't apply to the current discussion.

2: https://docs.rs/num-bigint/0.4.2/num_bigint/

The equivalent Lisp would be:

  CL-USER> (defun fib (n)
             (let ((f0 0) (f1 1))
               (loop repeat n
                     do (shiftf f0 f1 (+ f0 f1)))
               f0))
  FIB
  CL-USER> (time (fib 1000))
  Evaluation took:
    0.000 seconds of real time
    0.000171 seconds of total run time (0.000136 user, 0.000035 system)
    100.00% CPU
    471,103 processor cycles
    65,456 bytes consed
    
  43466557686937456435688527675040625802564660517371780402481729089536555417949051890403879840079255169295922593080322634775209689623239873322471161642996440906533187938298969649928516003704476137795166849228875
  CL-USER> 
For small values the difference is lost in the noise. For larger values (e.g. 1000000) Rust is about 2x faster than SBCL, which is about what I would expect for a test like this.


Just because someone else asked:

Here's the Rust code:

  fn fib(n: usize) -> u64 {
      match n {
          0 => 1,
          1 => 1,
          _ => fib(n - 1) + fib(n - 2),
      }
  }

  fn main() {
    println!("{}", fib(50));
  }
Here's the Lisp code:

  (defun fib (n)
    "Return the nth Fibonacci number."
    (if (< n 2)
        n
        (+ (fib (- n 1))
           (fib (- n 2)))))
Again, I'm not trying to benchmark the languages; I'm not interested in this language drag-racing competition/flamewar, and I don't care about how performant a language is in the end. I care about how much a language gets done per unit of time I spend on it. This metric is obviously only useful to me, because if you know a lot of Lisp but little Rust, your time spent will be very different.

I also know the Rust snippet will crash at fib(~100), and that I could write the thing in a loop and get there under 1ms. The same is surely true about Lisp.


Once you exceed the size of a fixnum (this depends on the implementation), Common Lisp will silently spill to bignums (since you have not declared your variable to be a fixnum; you could declare it as an integer and still get the fixnum->bignum spill).
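A quick way to see that spill at the REPL (64-bit SBCL; BIGNUM is the standard type for integers outside the fixnum range):

  (typep most-positive-fixnum 'fixnum)        ; => T
  (typep (1+ most-positive-fixnum) 'fixnum)   ; => NIL
  (typep (1+ most-positive-fixnum) 'bignum)   ; => T, promoted silently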


Maybe not knowing how to use a tool efficiently is a good reason to refrain from strong criticism?


I don't know for certain at all, but I'd propose that you are being downvoted for not going far enough with lisp to grasp the point of lisp, which has more to do with recursive data structures--and data that is code--than just recursive procedure calling.

Factorial and Fibonacci as a pair are good introductions to the issue of recursion--and lisp had recursion back in the dark ages before other languages did--but they aren't what lisp is about.

so, were some other newbie to read your comment, it might really put them off learning lisp for entirely the wrong reasons.

cheers :)


That's pretty darn close to the single worst benchmark you could use for CL vs. rust.

It's going to make a lot of memory allocations and make a lot of uninlinable function calls, both of which are places that I would expect rust to have an advantage. That being said, I'd be curious to see your rust version, as just the number of bignum operations done for fib(50) is going to take a long time.


> worst benchmark you could use for CL vs. rust.

I wasn’t benchmarking CL vs Rust, but my CL vs my Rust. What I was trying to see is, can I do something useful with this? Or do I need to invest a good amount of time (something I don’t really have now) before being productive.

As for the rust version, I did the “trivial” (and terrible) translation. Match on input, recursive call on the _ arm.


Here's my code, which is about 30x faster on lisp than on rust for fib(40). I'm currently waiting on fib(50)...

Lisp (SBCL):

  (defun fib (n)
    (case n
     (0 0)
     (1 1)
     (otherwise (+ (fib (- n 1)) (fib (- n 2))))))
  
  (time (fib 40))
Rust:

  use num_bigint::BigUint;
  use num_traits::{Zero, One};

  // Calculate large fibonacci numbers.
  fn fib(n: usize) -> BigUint {
      match n {
          0 => Zero::zero(),
          1 => One::one(),
          _ => fib(n-1) + fib(n-2),
      }
  }

  fn main() {
      println!("fib(40) = {}", fib(40));
  }


FWIW, the number of recursive calls to compute fib(n) is fib(n+1). So, expect fib(50) to take approximately 20365011074/165580141 (roughly 123) times longer than fib(40) took (this makes the somewhat optimistic assumption that there's no actual slow-down from the larger bignums, which is NOT a safe assumption).


I get ~1.5s on the Lisp program and ~2.1s on the Rust program for fib(40), and ~186s vs. ~257s for fib(50). Did you forget to compile the Rust program with --release?


I didn't use --release, but I also didn't use an optimization declaration in lisp.


You should never benchmark Rust without the --release flag - it can regularly speed up the code by 1-2 orders of magnitude.


I think I just inadvertently proved the point that a lot of replies to the original commenter were making. If you just say "I benchmarked this and X was way slower than Y" without posting code or other details, then it's likely that you're doing something wrong, particularly when you aren't familiar with X or Y.

The original poster said:

> To me that means performant Lisp is non trivial (meaning you need deep understanding to achieve it). If you show a slow but easier to understand example, it’s because fast examples are way harder to understand.

And

> What I was trying to see is, can I do something useful with this? Or do I need to invest a good amount of time (something I don’t really have now) before being productive.

I could have equally said that my example demonstrates that "To me that means that performant Rust is non trivial" or that I would "need to invest a good amount of time ... before being productive"

Both of which are clearly not true. Posting my code let other people find the extremely trivial change to fix the huge performance difference.


Did you use num_bigint for bignums?


Maybe it came out wrong (English is not my first language).

I did not mean to say that I believe it inferior or anything, just that I felt writing performant code in Lisp is non trivial to learn, while rust feels pretty natural (to me).

I’ll try again at some point, but I don’t think it’s a wise investment of time at this point, I’d rather be “fluent” in rust first, then maybe try lisp.


I feel that playing saxophone is non trivial to learn, while guitar feels pretty natural (to me).



