Python vs Common Lisp, workflow and ecosystem (2019) (lisp-journey.gitlab.io)
135 points by meistro on May 2, 2021 | 82 comments



Speaking of Common Lisp - the European Lisp Symposium starts tomorrow (May 3 and May 4, https://european-lisp-symposium.org/2021/index.html). The entire conference will be broadcast on Twitch. Python programmers are invited, too :)


Thanks for sharing this. Call-site optimization for Common Lisp seems quite interesting after looking at the article.

http://metamodular.com/SICL/call-site-optimization.pdf


This is exciting. Thanks for the reference.


Nice article.

One thing, re “In Python we typically restart everything at each code change“: I sometimes run Python in Emacs with a REPL. I evaluate region to pick up edits. Not bad.

The big win for the Common Lisp REPL is being able to modify data, do restarts, etc. I usually use Common Lisp, but for right now I am heavily using Clojure to write examples for a new Clojure AI book that I am writing. I miss the Common Lisp REPL!


The first time I encountered "interactive development" was when I had to use Python with a Jupyter notebook. I really liked this style of development. The TDD approach I practiced before was somewhat similar, but by far not as visual.

Clojure was an eye opener for me and I think it offers a great developer experience (e.g. I'm addicted to C-c C-p)

It seems my journey hasn't ended and I definitely have to check out CL!

But atm it's hard for me to give up the things Clojure offers me: persistent data structures, access to a great ecosystem, and a very well designed standard library.


Not to start a flamewar, but every time I see a Python talk (PyCon or otherwise) with fancy tricks like metaclasses, all I can think is that, well, CLOS would have been a perfect fit for this too.

I know people are tired of the "lisp/smalltalk did it better" line, but what features of Python are not possible (or hard) in CL[OS]?

PS: how many CL shops are out there? I'd work nearly for free just to try a CL team once.


CLOS (as you presumably know) models method application as calls to generic functions, instead of the now-more-mainstream Smalltalk-like message-dispatch approach which Python uses. The latter allows for things like overriding __getattr__ to intercept ‘all’ method calls and property accesses, for which I don't think there's any equivalent in CLOS.

The way methods are ‘attached’ to classes gives you a natural form of type-directed name lookup. CL generics have the advantage that you can define your own methods on existing classes while naming the methods in your own package so they don't conflict, but also have a curse of inconvenience along the way where importing a class doesn't naturally pull in everything associated with it, and you wind up writing the class name again and again when dealing with fields of an object. with-slots et al. are poor substitutes. (In an experimental sublanguage at one point I actually had local variables with object type declarations implicitly look up the class using the MOP and symbol-macrolet every available var.slot combination within the scope as a brute hack around the most common desirable case.)

Python's short infix/prefix operators are naturally generic, since they're implemented as method calls. In CL there's the generic-cl extension, but I haven't seen it have that much uptake… in particular, any library code that isn't explicitly aware of it won't use it ‘naturally’ on foreign objects, which could be good or bad.
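
For example (a toy sketch of my own, not from the thread): Python's infix + is sugar for a method call, so any class can participate:

  class Vec(object):
      def __init__(self, x, y):
          self.x, self.y = x, y
      def __add__(self, other):
          # infix '+' dispatches to the left operand's __add__
          return Vec(self.x + other.x, self.y + other.y)

  v = Vec(1, 2) + Vec(3, 4)   # v.x, v.y == 4, 6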

That shades into the very-concrete type system that CL starts out with, where any attempt at ad-hoc polymorphic interop is a disaster unless everyone already agrees on what methods to use. I can't make a thing that acts like a hash table but uses a different implementation underneath, then pass it to something that expects to be able to gethash on it. I especially seem to get bitten by this in cases where alists are the expected way of representing key-value maps: there's no way to extricate yourself from the linear search without rewriting every piece of code that touches it, there's often an implicit contract that you don't want duplicate keys but it's easy to violate by accident and create bad behavior down the line, and so on. By comparison, Java collections in particular got this very right in terms of decoupling intention from implementation, and Python does basically the same thing but with a looser set of ‘expected’ methods.
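
To sketch the Python side of that decoupling (a toy illustration; LoggingDict is a made-up name): anything implementing the mapping protocol can stand in for a dict, which is exactly what gethash and alists don't let you do:

  from collections.abc import MutableMapping

  class LoggingDict(MutableMapping):
      """Acts like a dict, but with a different implementation underneath."""
      def __init__(self):
          self._data = {}
      def __getitem__(self, key):
          print("get", key)
          return self._data[key]
      def __setitem__(self, key, value):
          self._data[key] = value
      def __delitem__(self, key):
          del self._data[key]
      def __iter__(self):
          return iter(self._data)
      def __len__(self):
          return len(self._data)

  def count_words(counts, words):
      # written against the mapping protocol, not against dict
      for w in words:
          counts[w] = counts.get(w, 0) + 1

  count_words(LoggingDict(), ["a", "b", "a"])  # works the same as with {}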

By default, Python objects have a ‘purely’ dynamic set of properties, rather than the fixed slots CLOS imposes on an object via its class. Indeed, the class-level property one can set in Python to constrain this for possible performance gains is called __slots__.
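
A quick sketch of the contrast:

  class Free(object):
      pass

  class Fixed(object):
      __slots__ = ("x", "y")   # constrain instances to these attributes

  f = Free()
  f.anything = 1   # fine: attributes live in a per-instance dict
  g = Fixed()
  g.x = 1          # fine
  g.z = 1          # AttributeError: 'Fixed' object has no attribute 'z'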


> The latter allows for things like overriding __getattr__ to intercept ‘all’ method calls and property accesses, for which I don't think there's any equivalent in CLOS.

CLOS already offered AOP, which you can use to control such calls.

https://lispcookbook.github.io/cl-cookbook/clos.html#dispatc...


I'm not sure what you're pointing at there. I'm aware of method combinations and method qualifiers—are you referring to being able to add a general :before/:after/:around on a single generic? If so, that's not what I mean; what I mean is vaguely similar but on the first-arg dispatch axis. Here's a toy Python example. Given:

  class KnowItAll(object):
      def __getattr__(self, attr):
          return lambda: "yes, I know how to " + attr
We then have:

  >>> k = KnowItAll()
  >>> k.reveal()
  'yes, I know how to reveal'
  >>> k.transfigure()
  'yes, I know how to transfigure'
  >>> k.make_sandwiches()
  'yes, I know how to make_sandwiches'
Ruby does this with method_missing instead, which is where I'm most used to it happening (and I think it's used a lot more in Ruby than in Python owing to the language-culture's higher tolerance for magic). Smalltalk used doesNotUnderstand, IIRC. One of the key secondary results of this is that you can do things like https://paste.ee/p/tUaRP, which is a toy “Tracer” class which intercepts, prints, and forwards method calls and attribute accesses (ignoring some edge cases).
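
Since paste links rot, here is roughly the shape of such a Tracer (a hedged reconstruction of the idea, not necessarily the pasted code, and still ignoring edge cases like dunder lookups):

  class Tracer(object):
      def __init__(self, actual):
          self._actual = actual
      def __getattr__(self, name):
          # __getattr__ only fires when normal lookup fails, i.e. for
          # everything except _actual itself
          value = getattr(self._actual, name)
          if callable(value):
              def traced(*args, **kwargs):
                  print("call", name, args, kwargs)
                  return value(*args, **kwargs)
              return traced
          print("access", name, "->", repr(value))
          return value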

If we were in CLOS, and started with:

  ;;; widget.lisp
  (defclass widget () ((radius :initarg :radius)))
  (defmethod grow ((w widget)) (incf (slot-value w 'radius)))
What I would expect for the equivalent is that, given:

  ;;; tracer.lisp
  (declaim (ftype (function (t) t) make-tracer))
  (defun make-tracer (object) ...)
  ;; ... further code goes here ...
Somewhere else, we can do:

  ;;; fiddle-with-widgets.lisp
  (let* ((w (make-instance 'widget :radius 3))
         (w* (make-tracer w)))
    ;; ???
    (grow w*))
Can you add code to tracer.lisp, without specific reference to anything from widget.lisp, such that this has the effect of (grow w) but prints what it's doing? Note also that my use of slot-value above is very deliberately a ‘raw’ access. I'm here completely ignoring the “make up entirely new methods on the fly as needed” part that method_missing also gets used for, which is even more impossible in CLOS given that it would require intercepting, what, all symbol lookups…

In CLOS, the class doesn't ‘own’ the method, it provides a type for dispatching on, so there's no way to do “give me some control over every generic function so long as the first arg is of ‘my’ type”. Which is a reasonable model, but means you can't do the same thing. Indeed the flip side is that in the Smalltalk-like model, generics are not reified, and methods that are specializations of the ‘same’ thing have no ‘real’ identity to them, so you can't do a type-ignoring :around method for an ‘entire generic’. (Often there will be a superclass to attach to instead, but it's considered dangerous “monkey patching” to mess around with someone else's class hierarchy like that, and in the case of more abstract interfaces there's nothing.)

Does that make sense?


Aha! I was half-wrong, but it's also horrible…

  (defclass tracer () ((actual :initarg :actual)))
  (defun make-tracer (object)
    (make-instance 'tracer :actual object))

  ;; But please don't.
  (defmethod no-applicable-method :around (gf &rest args)
    (if (typep (car args) 'tracer)
        (let* ((tracer (car args))
               (args* (cdr args))
               (actual (slot-value tracer 'actual)))
          (format t "Calling ~S on ~S" gf (cons actual args*))
          (apply gf actual args*))
        (apply #'call-next-method gf args)))
You can't do this with a real specialization on no-applicable-method, incidentally, because the first arg isn't special enough, it's just folded into the &rest. And that, I'm pretty sure, means this doesn't coexist with other uses of no-applicable-method properly… and you still can't do on-the-fly method names that aren't attached to a generic, and so on, but this does sort of account for the object forwarder case (and in fact you could extend it to allow tracers on more of the arguments!). It does, I expect, remain extremely unidiomatic by comparison.


Thanks for your points; it's true that Python's dynamic operator genericity is very handy.


> every time I see a python talk (pycon or else), with fancy tricks like metaclasses..

All I can think of is what a mistake those features were in Python. "Fancy tricks" are generally the author trying to be clever (in the Kernighan sense) and ending up obfuscating the result. I'm not saying it works this way in CL, I don't have the experience to make that call, but in Python it was (and probably still is) prevalent.


The goal is for a little bit of clever work to make the computer do a lot more stupid work. People get bored and make mistakes, and our brains are not getting any faster, so this kind of leverage is the only game in town.


I am sorry but what does this mean:

> the author trying to be clever (in the Kernighan sense)

I know that Kernighan is the author of the C book, but it's been a while since I skimmed it.


Please email hello@atlas.engineer with your details. We are a CL shop responsible for Nyxt browser. Thanks for your time!


Beginner approachability seems to be the key feature. Possibly also integration with external libraries.


This is my impression, too. When I was TAing an intro CS course back in university, I saw students new to programming struggling with Lisp (specifically Scheme in this case) much earlier in the learning process than even C.

This manifested in a few spots. The first is, yes, s-expressions. Yes, yes, I know, s-expressions are integral to the power of lisp, and they're really not hard to read once you get used to them. All of that is beside the point. The reality I saw on the ground, when teaching people to program, is that even people who have no prior programming experience whatsoever, and therefore no preconceptions to get over, have a harder time grasping s-expressions than they do algol-style syntax. I don't know why. I didn't have the same problem myself. But it's a real phenomenon that I struggled to help people through on a regular basis, and the lisp community's defensiveness about it is not going to make it go away.

Arbitrary-seeming names with zero mnemonic value is another problem. Car, cdr, progn, etc. - it takes a special kind of personality to not be put off by this sort of stuff. Not everyone has that kind of personality. Not everyone should have that kind of personality.

Finally, there are all the hair-splitty (at least to a newcomer) distinctions to contend with: =, eq, eql, equal, and equalp, or let, let* and letrec. Sure, there are reasons for these distinctions. But a language that can get by without quite so many of them is going to be a lot more attractive to newcomers, even if that comes at the cost of footguns that are unlikely to be discovered until later.


That's my take on it too, but I've run into people praising it like it was an alien theory of everything dropped by the gods as a gift.

I like python but I'm a bit fed up with the mobthink (how surprising).


Another factor, I think, is that a heck of a lot of people don't think they will still be doing this for a long time.

Programming is something interesting and fun they are going to do for a few years while young, until (pick one): their band takes off; someone funds their startup idea and they hire others to do the programming while they generate genius ideas and run the business; they get promoted to a high-paying executive position that involves management and architecture while others do the coding; their podcast becomes a hit and they can live off that; the small company they work at IPOs or gets bought and they make enough to retire at 30; etc.

So they learn a fairly easy language that has lots of libraries that cover most things you do in a routine developer job.

...and before they know it they are 50-60 and still writing a lot of code, and realizing that if they had known they would still be doing this 30-40 years later they would have been better off if they had learned and used and gotten good with some of the languages that have a reputation of being very productive but hard to learn.

I'd also add spreadsheets and databases to that. At one point I was the database guy at work, because no one else was available. I learned enough SQL to get by, but was in no way a database expert. Heck, at one point we had to pick job titles that described what we did, to put on the business cards the company was giving us, and I put down "Database Roustabout" [1], which should give you an idea of where I stood. That was 20 years ago, and I'm still the database guy at work. It would have been a lot better if sometime early on I had said "I'm going to become really good at SQL, even though I'm sure someone else will become database guy in a year or so".

[1] Roustabout. NOUN. An unskilled or casual laborer. (North American) A circus laborer.


Yeah, life has a weird tendency to not turn out like one anticipates :)


It might serve to evaluate why you feel the way you feel on a deeper level. Based only on your comments here I don't think any ire is justified.

For someone to even make the determination that "Lisp did it better", they must:

A. Have a comprehensive knowledge of both Python and Lisp.

B. Have some understanding of the history of the languages.

C. Understand the problem on an intrinsic, fundamental level that enables them to evaluate which approach is better.

D. Generalize the problem to a broader scope to demonstrate why one language is better on a broader level.

E. Understand every other language ever used so when they show why Lisp does it better, they can defend themselves when someone comes up and tells them, "actually, Pascal does this even better yet."

Furthermore, it doesn't really matter what language did it better. The point of most talks is simply to demonstrate how to solve some problem in a certain language. If every talk started with "you can do this in Python, and I'll show you how, but you should probably just switch to Lisp because it does it better," that isn't very helpful.


Oh, I don't feel ire, maybe some form of frustration at best (I'm not young enough anymore to get angry over a talk :)

It's more about the 'wheel reinvention' syndrome that creates fatigue in me.


I like Lisp, and I'm not a fan of e.g. Python's whitespace sensitivity. That said, for niches such as ML and data science, I find you just can't beat the Python ecosystem.


> for niches such as ML and data science, I find you just can't beat the Python ecosystem

There definitely are many areas where Python is best, but for ML and data science, Julia is (i) very competitive in library coverage, (ii) more performant and flexible, and (iii) has a very good Python bridge if it's needed.

I can imagine there are niches within ML and data science where what you need are Python-only libraries, you don't miss anything restricting yourself to the numpy type hierarchy and there's no advantage to calling the libraries from Julia, but I'm curious to check if that is what you actually meant and if so, what you are doing.


After using Python for data science for 3 years (since founding the current startup), I mostly switched to Julia about 4 months ago. So far the only Python libraries I've really missed are boto3 and SQLAlchemy for SQL generation – both of which can easily be called from Julia using PyCall.

I think people often underestimate just how much faster Julia is than numpy; I've consistently seen performance improvements on the order of 10x-30x when porting code.


Many of those libraries are implemented in C++, any language can call them.


Are people actually doing science with Python or are they talking about doing science?

There's so much buggy, low-quality stuff in that space that I'd write a serious application in C or C++ from scratch.

It would be a custom application, sure, but not everything needs to be general.

Also, I find Lisp much more natural for mathematical reasoning.


Yeah, they're very much doing it.

Pandas is huge, and libraries like spaCy, NetworkX, etc. exist. It's a massive and good ecosystem. For newer students, I'd hazard a guess, Python is the go-to for scientific computing in most of the sciences, over the older R and over Julia.

This will be blindingly obvious if you work in that area. Yes, you can do it in another language, but you're missing out on a lot of stuff that is just done, is state of the art, and is fast because the speedy parts aren't in Python. The complaints about parens for Lisp are superficial, and in my experience the same goes for whitespace in Python. They just don't matter.


> and is fast because the speedy parts aren't in Python.

Having worked for months with a slew of senior data scientists, I found this a bit painful. Python is so slow, and while those data scientists were very good at coming up with solutions to the company's issues, the implementations (using spaCy, Pandas and other libs) had enough Python in them to make them impractical for the company's use case. Nice prototypes, which I then had to fix or even rewrite in C/C++ (we tried Rust as well) to make them usable in the company data pipeline.

I think companies are burning millions (billions in total?) on depressingly slow solutions in this space, throwing massive compute at it all just to make the computations complete before the sun dies out.

Example: we needed a specific keyword-extraction algorithm for multiple languages; my colleague used spaCy and Python to create it. It took a couple of seconds per page of text; we needed a few ms at most on modern hardware. He spent quite a lot of time rewriting and changing it, but never got it under 1s per page on xlarge AWS instances. My version takes a few ms on average, executing the same algorithm but in optimised C/C++.

Sure we could've spun up a lot more instances, but my rewrite was far cheaper than that, even in the first month.


(I'm the creator of spaCy)

If you want to email me at matt@explosion.ai , I'd be interested in the specifics of the algorithm and why the implementation was slow.

The idea for something like that keyword extraction algorithm would be that if the Python API is slow, you should just use Cython. The Cython API of spaCy is really fast because the `Doc` is just a `TokenC*`, and the tokens just hold a pointer to their lexeme struct, which has the various attributes encoded as integers.

I've never really done a good job of teaching people to use the Cython API though. I completely agree that it's not productive to have slow solutions, and using too many libraries can be a problem. The issue is that Python loops are just too slow, you need to be able to write a loop in C/Cython/etc. Thinking through data structures is also very important.

I get very frustrated that there's this emphasis on parallelism to "solve" speed for Python. Very often the inputs and outputs of the function calls are large enough that you cannot possibly outrace the transfer, pickle and function call overheads, so the more workers you add, the slower it is. Meanwhile if you just write it properly it's 200x faster to start with, and there's no problem.
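
To illustrate the shape of that failure mode (a toy sketch of the pattern, not spaCy code; whether it loses to a plain loop depends on the data/compute ratio):

  from multiprocessing import Pool

  def square_all(chunk):
      # trivial compute relative to the volume of data shipped around
      return [x * x for x in chunk]

  if __name__ == "__main__":
      chunks = [list(range(100000))] * 8
      # every chunk is pickled over to a worker and the result pickled
      # back; when the data dwarfs the work, more workers don't help
      with Pool(4) as pool:
          results = pool.map(square_all, chunks)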


Sorry if you think I blamed spaCy for anything; that was not intended. I know it was due to the way Python was used, which I tried to convey. Your product is excellent, and yes, I probably should've reached out anyway; I just know how to solve things my way and did not want to waste more time (there was an investor deadline).

Cython part sounds good; I will try it out and email you if I get totally stuck, thanks!


Oh, I didn't take it as a pointed criticism or anything. It just seemed like it would be an instructive example.

The underlying point I often make to people is that Python's slowness introduces a lot of incidental complexity, and you find yourself fiddling with numpy or something instead of just writing normal code and expecting it to perform normally.


But that's fine, no? I mean, it's a pretty common workflow where the people close to the science part of something write a prototype in their language/ecosystem of choice, and then the engineering side is in charge of taking the prototype implementation and making it performant enough for production use. Finding people who know both data science and low-level programming languages well enough to implement data science applications directly for production is pretty hard, I'm sure.

In either case, I much prefer prototypes in Python than, say, Matlab. To speed things up I once rewrote an internal Scipy function to a version that allowed me to use it in vectorized code on my end. If the prototype is in Matlab, the optimization and integration possibilities are much more limited due to licensing, toolboxes, and the closed ecosystem in general.


Also, I think it is good to be able to use the Python code as test cases/validation on smaller datasets for the C code.


Yes, I guess it is fine if that is the flow. I just didn't expect it upfront (my bad).


Yeah, if it's actually OK or not depends a lot on the particulars. Like, if it's not actually your job, and the data people were supposed to produce production ready stuff themselves, and then you have to go out of your way to actually make it work, then it's not OK. But that's more to do with how organizations function, not technical merits of the involved programming languages.


Python performance is something I see about 15-20x as often in discussions about Python as I do when wrestling with real-life problems.


Agreed. The only time in practice (working on datasets consisting of millions of rows) that Python has been too slow was when I was taking courses in college and their online code thing timed out on some specific graph problems. I rewrote the algo in C# without any fuss


All analysis we run is over 100s of millions of objects (objects can be a few megabytes large) every time it runs. The slightest increase or decrease for 1 object obviously makes a huge difference overall; either in cost, time or both.


Absolutely, but I don't feel the vast majority of corporations are doing that type of computation. I feel that even with hundreds of millions of rows, Python can be a great solution for most projects (I have done multiple projects generating fairly complex projections from a few hundred million rows).


> Nice prototypes, which I then had to fix or even rewrite in C/C++

Even as someone who 'knows' C and C++ I still find it faster and easier overall to do the exploratory and 'science' part in Python, making sure it works and gives me the answers I need in the format I want etc. only to then rewrite and optimize the slow parts in C or C++ if necessary.


You could probably use something like Chez Scheme for that, especially the new Racket fork of it with unboxed flonums and flonum vectors, except with significantly decreased need to rewrite stuff in C for performance. Also with proper threads to boot.


You're missing the point. What makes Python 'fast' is that it comes with 'out of the box' support for everything I might need. Reading and writing obscure file formats. Every sparse matrix, image processing, and graph algorithm I might need has already been implemented. Do I need to all of a sudden solve an optimization problem or a differential equation? Already nicely integrated with the library I'm using. If I need some obscure domain-specific algorithm, there is almost certainly a library for that already. I don't have to worry about any of that stuff and can focus on solving my problem.
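
For instance, a minimal sketch of the "already nicely integrated" point:

  from scipy.integrate import solve_ivp
  from scipy.optimize import minimize

  # solve dy/dt = -2y with y(0) = 1 over t in [0, 5]
  sol = solve_ivp(lambda t, y: -2 * y, (0, 5), [1.0])

  # minimize (x - 3)^2 starting from x = 0
  res = minimize(lambda x: (x[0] - 3) ** 2, x0=[0.0])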


There are quite a few C/C++ libraries for those things, right? Integrating them with a proper FFI (of the kind that Chez has, for example – or LuaJIT, for that matter) seems hardly difficult... although differential equations or optimization problems may be better served without any language interfaces whatsoever, since they involve functional parameterization. That generally sucks with mixed-language solutions, to the extent that GSL Shell, the LuaJIT interface to GSL, completely skipped the DE solvers in GSL and reimplemented them in Lua for higher performance. I imagine that 'fast' in the case of Python really needs to be quoted the way you did.


"seems hardly difficult" and "could probably use" is seldom faster than "out of the box" and "already nicely integrated"


Maybe, but the latter definitely isn't the case with Python, especially in case of PyPy (or at least it wasn't the case the last time I tried that), so there's that.


Have you considered how much time the "senior data scientists" saved, and how much better algorithmic solution they were able to develop by being able to iteratively explore the problem space and refine the algorithm in a convenient-to-them environment?

This development process allowed creation of a good solution that you were then able to quickly port to a performant production platform.


I've worked in this industry for a bit and had the opposite experience. Most of the slow solutions have been in other languages, due to poor design and algo choice. There have been several projects I've been able to rewrite from R to Python where the runtime on a workstation went from days with R to seconds or minutes with Python in a tiny, tiny VM (like a $10 DO box).

Sure maybe a 3 minute task in python that reconciles a few million transactions and builds some very useful projections is too slow for some pipelines, but it worked for my clients.


So the R code was pretty bad and your solution was more optimal - that's the only bit of information in here. The point is that rewriting something means you have extra domain knowledge, bottleneck knowledge, etc. that you didn't have during the initial write. It might have gone the other way too: initial write in Python is too slow, rewrite in R is faster.


Oh yeah, it definitely was - my point wasn't that R is slower than Python. My point is the only time I've encountered "slow apps" in practice was when they were written in a much faster language, and rewriting them in a slower language resulted in really good performance.


The vast majority of the time that I've seen slow Python code, it's because people are not leveraging the libraries in the right way, e.g. by using groupby-apply in Pandas or not vectorising while using NumPy.
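
A typical before-and-after of the vectorisation point (a toy sketch):

  import numpy as np

  a = np.random.rand(1000000)

  # slow: a Python-level loop over a NumPy array
  total = 0.0
  for x in a:
      total += x * x

  # fast: the same reduction pushed down into NumPy's C internals
  total = np.dot(a, a)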

I can't speak as to the specific use cases that you've encountered, but performance wise, I have found Python to be a fine choice for several ML services.


> Are people actually doing science with Python or are they talking about doing science?

People are actually doing it. And a lot of it too. Both in terms of data science (as a broad term that can mean a bunch of different things) and in terms of computation for specific scientific fields like physics or biology.


Very much actively doing so on my end where nearly all work in the industry is in Python, with some Matlab, C, C++, and Julia sprinkled in.

Python is a great high-level language for basically everything but hardcore low-latency apps. I can parse text, connect to databases, do sparse matrix computations on massive matrices, calculate network flows, generate large node-graph diagrams, use a Python-based API to connect to any vendor software I've seen, and do any kind of statistical analysis I need with pandas; an amazing, free IDE gives me a REPL, code editor, and data-structure viewer, and there are Python notebooks for education... etc. I've frequently found that I can rewrite a vendor's 10k-line C++ program in a few pages of Python, as the built-in Python data structures make text parsing extremely flexible and simple.


> do sparse matrix computations on massive matrices

This is completely impossible to do in the Python language, unless you resort to external tooling written in C or Fortran. Sure, you can call these codes from Python, as you can call them from any other language.


Nobody cares about what you're saying though (with respect) in this area. It's all about the ecosystem or Anaconda distribution itself rather than just the core language. I agree that what you're saying is accurate, but it also happens to be irrelevant in this particular case.

Numerical methods and data science are mostly done by engineers, mathematicians, and other random stem folks. I've yet to meet someone who is even cognizant that Numpy is really calling out to some low level C, C++, or Fortran library. They just know that you call a library like any other and the code works.

If you're trying to say that any language with FFI capabilities can do that, you'd be right, but it also doesn't matter much. Python has somehow found a sweet spot where it's easy to learn and onboard people and there is support for a lot of stuff with relatively low hassle. It certainly isn't lisp, but somehow seems to be orders of magnitude more successful.

I've been searching for a tool/language/ecosystem to replace Python for ages, but nothing ends up becoming close. I spent a significant amount of time learning lisp, but a lot of what I saw (besides the power of macros and restarting) was just a less intuitive way of doing things I could easily do in Python, Ruby, or Perl. Lisp is secret alien technology if you're coming from C or C++, but coming from Python it seems closer to a wash.


> I've yet to meet someone who is even cognizant that Numpy is really calling out to some low level C

then you've never met anybody who builds the tools that you use. Which is alright. But if you disparage their point of view then you sound a bit funny.


In this particular niche, yes. But again, that is almost entirely irrelevant to the vast majority of the millions of scientific Python users.

I'd love to write my own solution in assembly or C where I give birth to every function, but nobody has time for that level of monumental effort. Low level matrix libraries have a lot of inertia for a reason.

I'm not disparaging anybody's point of view. Yours is certainly valid for a small group of elite users. I'm just trying to point out that it is only a valid point for a very small group. Most simply view these things from the perspective of the entire ecosystem. Even scientists well aware of the C internals will not always use that knowledge.


The ecosystem matters. I'm a developer and not a scientist, but having everything inside an environment that's at least workable is a huge boon.

Of course you could call the same functions from the ffi of any other language, but nobody does that for the same reason that nobody writes web applications in C.

I hate python, as far as I'm concerned it's a nightmare hell of a language that does everything wrong, and yet it's probably the language I use the most due to its sheer convenience and massive ecosystem.


This is really splitting hairs, isn't it? Plenty of languages are not bootstrapped. Isn't that essentially the same thing?


No, nothing to do with bootstrapping; this is completely different. My point is that you cannot develop the very algorithms that you are using. Numerical math is not only about using ready-made algorithms, it is mostly about implementing new algorithms. For example, if you invent a new matrix factorization algorithm, it is very likely that you cannot implement it in Python (or if you can, it will be either very slow or very cumbersome). Python+numpy is not a natural way to write many numerical algorithms based on explicit loops and new conditions inside them. Whereas in Fortran or C, the implementation is likely to be much simpler, more natural, and fast.
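
To make that concrete (a hedged textbook illustration, not anyone's production code): a factorization like Cholesky is all explicit loops and per-element conditions, which is natural in Fortran or C but runs at interpreter speed in pure Python:

  import numpy as np

  def cholesky_naive(a):
      # textbook Cholesky-Banachiewicz; assumes a symmetric
      # positive-definite float array
      n = a.shape[0]
      l = np.zeros_like(a)
      for i in range(n):
          for j in range(i + 1):
              s = sum(l[i, k] * l[j, k] for k in range(j))
              if i == j:
                  l[i, j] = (a[i, i] - s) ** 0.5
              else:
                  l[i, j] = (a[i, j] - s) / l[j, j]
      return l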


Nobody is arguing that, but they're saying it doesn't matter to the majority of scientists who just want to invert a matrix for some study and don't need to implement a new matrix inversion algorithm. I would use C or C++ for that most likely. That is a valid use case for some scientists, but I would expect it to be a very small number compared to those that just need to use the existing tools in the ecosystem.

I think we may be speaking past each other a bit.


Every single bit of Python relies on code written in C. What is it that you are trying to refute here?


> Every single bit of Python relies on code written in C.

There's PyPy, a JIT Python interpreter written entirely in Python, and it does not depend on C. It is also much faster than the common interpreter, CPython. Unfortunately it is still not appropriate for numerical computation, as the language itself makes working directly with numbers very cumbersome (and this was the point I wanted to make).


Yes. Python is pretty much the main tool in biology, for example. C or C++ would be abysmal for similar exploratory scientific tasks.


I use hot reloading with an IPython REPL. I write my code in such a way that I can interact with any individual part of the system via a REPL. Lisp excels here, but you can have a decent approximation of a real-time evaluation loop going.
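
For instance (a sketch; mymodule and its functions are stand-ins for whatever you're developing), IPython's autoreload extension covers part of that loop:

  # inside an IPython session
  %load_ext autoreload
  %autoreload 2          # re-import modified modules before running code

  import mymodule        # hypothetical module under active development
  state = mymodule.init()
  mymodule.step(state)   # edits to mymodule.py are picked up on the next call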


This might seem slightly unrelated, but I was reading Elixir in Action and one of the statements is along the lines of a debugger being difficult to use in its naturally concurrent environment. The Elixir strategy is to kill erroring processes, capturing their exit signals with supervisor processes and then possibly recreating a replacement process.

Can the common lisp condition system be adapted to Elixir? Is there an advantage to doing so? Is there some obvious tradeoff between the two I'm not expressing?

Thanks.

See this thread from HN for more about adapting the condition system elsewhere.

https://news.ycombinator.com/item?id=26852309


Elixir is based on the BEAM virtual machine developed for Erlang. Restarting crashed (very lightweight) processes to handle errors is the normal way of doing things in that system.

LFE (Lisp-flavored Erlang) is an existing language that combines Lisp syntax with Erlang's backend, though I haven't used it myself yet.


He forgot: in Python you read somebody else's code and it's familiar.

In Lisp, you are learning a new language with every lib, because each author thinks they are a god-tier language designer and that their macros rock. Also, they don't need good docs because they are obvious, or good error messages because they never break.

Also, the way the author dismisses the gap in the number of available packages is ignoring the elephant in the room.


> In Lisp, you are learning a new language with every lib, because each author thinks they are a god-tier language designer and that their macros rock. Also, they don't need good docs because they are obvious, or good error messages because they never break.

Do you have any specific examples of Common Lisp libraries you found hard to understand? I had a hard time when first learning the language, but once I got proficient enough to write my own code, I found it easy to understand most libraries I ended up using, the same as in any language, really. The fact that the REPL makes it so easy to explore them in your own context helped a lot as well.

> Also the way the author dismiss the number gap of packages available is ignoring the elephant in the room.

In the very same section, the author describes why the number of packages doesn't matter as much as you think it does: curation vs. free-for-all publishing (like APT vs NPM). Add to that that Common Lisp "the language" has been stable for decades, which makes it much more likely that any library you find will still work, whereas in the Python world we both know this not to be true (Python 2 vs Python 3 alone is a whole other world of messes).


There are some famous examples[1], but to be honest, they are rare outliers for libraries published. And even for internal use, the amount of "local custom style" seems pretty minimal in my experience.

[1] Off the top of my head, I can recall Cells (early reactive/dataflow system - weirdness included symbol names made to sort first in Allegro CL IDE) and hu.dwim.* stuff which had its own wrapper around CL:DEFUN and CL:DEFMETHOD, iirc. But I successfully used their stuff without caring about that.


> There are some famous examples

As someone new to the ecosystem of Common Lisp, and in general a bit of a masochist (ref https://www.youtube.com/watch?v=mZyvIHYn2zk), could you share which ones these are so I can enjoy not understanding them at all?

Edit: I see now after I made my comment you added examples, thanks :)


There's a long history of macros wrapping DEFUN/DEFGENERIC/DEFMETHOD/DEFCLASS. I'll admit I even used some as shortcuts for declaring types and constraints.

But their use isn't that widespread, and in practice you can pretty quickly get used to the rare case that needs you to understand them.

I think the most complex is stuff that requires code walkers, and involved things like macros for CPS transformation of code.


> In Lisp, you are learning a new language with every lib, because each author thinks they are a god-tier language designer and that their macros rock. Also, they don't need good docs because they are obvious, or good error messages because they never break.

I want to say this is an overgeneralization, but I am living this in Clojure right now.


I'm not a particularly experienced or good Lisp programmer, I can customize Emacs, but that's pretty much it.

However, I think this article is a bit skewed and not highlighting things Python has.

For instance, the standard library means that Python is more usable out of the box.

Also, when you've got things like IPython or Jupyter, it means you can get off the ground easily.

So, in the end, they're two different languages, and I do not think either of the two is better. Right tool for the job and all that.


This point seems to be addressed in the “State of the Libraries” section[0]

[0]: https://lisp-journey.gitlab.io/pythonvslisp/#state-of-the-li...


Not really, the basic Python install has a lot of things ready to use without any external libraries.


There is a popular Python environment where you do not restart the environment after an edit: Jupyter notebooks. This comes with its own set of problems, which CL also has, but I still find notebooks a worthwhile mindset for writing applications. There's no need to equate Python with the script mindset.


Obligatory reference to Hy / Hylang, a Lisp that compiles to the Python AST.


And Hissp, a Lisp which compiles to Python expressions.


If you like Lisp and you like Python, you might like libpython-clj.


python != lisp. It's really that simple.


What about 1 != 2?



