Of course people "realise" this. But those REPLs are not actually REPLs; they are interactive language prompts. As the joke goes, Python doesn't have a REPL: it lacks READ, EVAL, PRINT and LOOP.
Being able to type in code and have it evaluated one line at a time isn't a REPL.
I have no idea what subtle or nuanced distinction you're trying to draw. What exactly do you imagine is the difference between a Lisp REPL and a Python REPL?
Edit: people who aren't familiar with Python (or how interpreters work in general) don't seem to understand that being able to poke and prod the runtime is entirely a function of the runtime, not the language. In CPython you can absolutely do anything you want to the program state, up to and including manually pushing/popping from the interpreter's value stack (to say nothing of moving up and down the frame stack), mutating owned data, redefining functions, classes, modules, etc. You can even, again at runtime, parse source to an AST and compile it to get macro-like functionality. It's not as clean as in Lisp but it 100% gets the job done.
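As a minimal sketch of that last point (a toy example; the `double` function and the Add-to-Mult rewrite are invented for illustration):

```python
import ast

# Toy example: parse source to an AST, rewrite it, and compile it back,
# giving a crude macro-like transformation at runtime.
src = "def double(x):\n    return x + x\n"
tree = ast.parse(src)

# Rewrite every addition into a multiplication (x + x becomes x * x).
for node in ast.walk(tree):
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        node.op = ast.Mult()

ns = {}
exec(compile(tree, "<generated>", "exec"), ns)
print(ns["double"](3))  # the rewritten body computes 3 * 3, so 9
```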
CL-USER 43 > (+ 1 (foo 20))
Error: Undefined operator FOO in form (FOO 20).
1 (continue) Try invoking FOO again.
2 Return some values from the form (FOO 20).
3 Try invoking something other than FOO with the same arguments.
4 Set the symbol-function of FOO to another function.
5 Set the macro-function of FOO to another function.
6 (abort) Return to top loop level 0.
Type :b for backtrace or :c <option number> to proceed.
Type :bug-form "<subject>" for a bug report template or :? for other options.
CL-USER 44 : 1 > (defun foo (a) (+ a 21))
FOO
CL-USER 45 : 1 > :c 1
42
Note that we are not in some debug mode to get this functionality. It also works for compiled code.
Lisp detects that FOO is undefined. We get a clear error message.
Lisp then offers me a list of restarts, how to continue.
It then displays a REPL one level deep in an error.
I then define the missing function.
Then I tell Lisp to use the first restart, to try to invoke FOO again. We don't want to start from scratch, we want to continue the computation.
Lisp then is able to complete the computation, since FOO is available now.
Hmm, what advantage does Lisp offer here over Python?
>>> 1 + foo(20)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'foo' is not defined
>>> def foo(a):
... return a + 21
File "<stdin>", line 2
return a + 21
^
IndentationError: expected an indented block
>>> def foo(a):
... return a + 21
...
>>> 1 + foo(20)
42
>>>
Mind the hilarious indentation error, as I had not touched the old-school REPL in ages.
In normal day to day operations, I do the same thing daily with Jupyter Notebooks. I get access to as much state as I need.
With the notebook workflow it is normal to forget to define something and then define it in the next cell. You can redefine function signatures, etc. Ideally you then move cells into the correct order so that the code can be used with Run All.
I "feel" ridiculously productive in VS Code with full Notebook support + Copilot. I can work across multiple knowledge domains with ease (ETL across multiple database technologies, NLP/ML, visualization, web scraping, etc.).
Underneath it is same as working in old school Python REPL just with more scaffolding.
I have been playing again with CL recently and am doing some trivial web-scraping of an old internet forum. I don't use a REPL directly, but just have a bunch of code snippets in a lisp file that I tell my editor to evaluate (similar to Jupyter?). I haven't bothered doing any exception (condition) handling, and so this morning I found this in a new window:
Condition USOCKET:TIMEOUT-ERROR was signalled.
[Condition of type USOCKET:TIMEOUT-ERROR]
Restarts:
0: [RETRY-REQUEST] Retry the same request.
1: [RETRY-INSECURE] Retry the same request without checking for SSL certificate validity.
2: [RETRY] Retry SLIME interactive evaluation request.
3: [*ABORT] Return to SLIME's top level.
4: [ABORT] abort thread (#<THREAD tid=17291 "worker" RUNNING {1001088003}>)
plus the backtrace. This is in a loop that's already crawled a load of webpages and has accumulated some state. I don't want a full redo (2), so I just press 0. The request succeeds this time and it continues as if nothing happened.
You got a lot of correct but verbose responses. Put in layman's terms: you had to run 1 + foo(20) again. If 1 + foo(20) were replaced by a complex and long-winded function, you would have lost all of that state and needed to run it all again. What if 1 + foo(20) had to read several TB of data in a distributed manner? You would have to do all of that again.
There are ways around this and of course you could probably develop your own crash loop system in python but in lisp you simply continue where it failed. It's already there.
You mention doing things in Jupyter and ETLs which are often long running. This could be hugely beneficial to you.
From what I see in your example, you invoke the form again. In Common Lisp you don't need that. You can stay in the computation and fix and resume from within.
You are not fixing the issue in the dynamic context of a running program. Doesn't matter in this trivial example but is very noticeable when you have a loaded DB cache and a few hundred active network connections.
The advantage is that Python has just diagnosed the error and aborted the whole thing back to the top level, whereas in Common Lisp the entire context where the error happened is still standing. There are things you can do like interactively replace a bad value with a good value and re-try the failed computation.
In lispm's example, the problem is that there is no foo function, so (foo 20) cannot be evaluated. You have various choices at the debugger prompt; you can specify a different function, to which the same arguments will be applied. Or just specify a value to be used in place of the nonworking function call.
Being able to fix and re-try a failed expression could be valuable if you have a large running system with hundreds of megabytes or even gigabytes of data in the image, which took a long time to get to that state.
> Hmm, what advantage does Lisp offer here over Python?
In lisp, I never edit code at the REPL, yet the REPL is what enables me to edit code anywhere. I edit the source files and have my editor eval the changes I made in the source. This gets me the benefit that should my changes work, I don't have to retype them to get them into version control. This works because the Lisp REPL is designed to be able to switch into any existing package, apply code there, and also switch back to the CL-USER package after. My editor uses the same mechanism and only has to inject a single prefix (`in-package :xyz`) before it pastes the code I've selected for eval.
In Python, editing a method in a class inside some module (i.e., not toplevel) is less easy. At least, I haven't found any editor support for it. What I did find is the common advice to just reload the whole module/file.
Okay, so let's reload the whole module, then? Well, Python isn't really built for frequent module reloads and that can sometimes bite. In Common Lisp, the assumption that any code may be re-eval-ed is built in. For example, there's two ways of declaring a global value in CL: defvar and defparameter. The latter is simply an assignment of a value to a variable in the global scope, but the former is special. By default, `defvar` defines a variable only if it's not already defined. So that a CL source file may be loaded and reloaded any number of times without resetting a global variable.
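A rough Python analogue of the `defvar` behavior (an assumed pattern, not a standard idiom) is to guard the binding so that re-executing the file doesn't reset accumulated state:

```python
# Simulate "loading a file" twice into the same module namespace.
snippet = """
if "cache" not in globals():   # defvar-style: bind only if unbound
    cache = {}
cache["loads"] = cache.get("loads", 0) + 1
"""

ns = {}
exec(snippet, ns)  # first load: creates the cache
exec(snippet, ns)  # reload: the guard preserves the existing dict
print(ns["cache"]["loads"])  # the state survived the reload: 2
```

In CL this is the default semantics of `defvar`; in Python you have to remember to write the guard yourself.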
Then there's classes. Oh my. Common Lisp has the most powerful (in terms of flexibility) OO system I know of. Not only can you redefine functions and methods, you can even redefine classes dynamically. Adding a property to a class adds that property to all existing objects of that class. Removing a property from a class removes it from all existing objects of that class. This feature is no longer CL-exclusive, but it is sufficient to offer a massive advantage over Python. I don't need to talk about method combinations, multi-methods and the many other cool features of the Common Lisp Object System here.
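For contrast, here is what the same move looks like in Python (toy class names): "redefining" a class actually creates a brand-new class object, and existing instances keep pointing at the old one:

```python
class A:
    def version(self):
        return "old"

a = A()

class A:  # this does NOT update A: it binds the name to a new class object
    def version(self):
        return "new"

print(a.version())       # "old": the instance still holds the old class
print(isinstance(a, A))  # False: a's class is not the new A
print(A().version())     # "new": only fresh instances see the change
```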
Then there's the debugging system. In Python, when an exception is thrown, it immediately unwinds the stack all the way up until it is first caught. So not only do you need to know beforehand where to catch what exception, if you get it wrong you cannot inspect the site of the error. In CL, a condition ("exception") does not unwind the stack until a restart is chosen. Not when it is caught, but rather when — after being caught — a resolution mechanism has been chosen. This allows interactive debugging (another cool CL feature) to inspect the stack frames at (and above) the site of error, redefine whatever code needs to be corrected, all before the error is allowed to unwind and destroy the stack. You still need to set-up handlers (and restarts) before the error happens, but you can be absolutely wildly lax and use catch-all handlers anywhere on the stack and restarts that take absolutely anything (even functions) at debug-time so you don't really need to be prescient with your error handling code unlike in Python.
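A small illustration of that difference from the Python side (hypothetical function names): the traceback lets you inspect the dead frames after the fact, but the stack has already unwound and cannot be resumed:

```python
import traceback

def inner():
    raise NameError("name 'foo' is not defined")

def outer():
    return inner()

try:
    outer()
except NameError as e:
    # By now outer() and inner() have unwound; their frames survive
    # only as a post-mortem record inside the traceback object.
    frames = traceback.extract_tb(e.__traceback__)
    print(frames[-1].name)  # 'inner': inspectable, but not resumable
```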
I'm sure there's more, but I think this is pretty sufficient.
>Note that we are not in some debug mode to get this functionality.
Jesus Christ, I swear it's like you ascribe mysterious powers to the parens. Do you think the parens give you the ability to travel through time or reverse the PC or what? Okay, it's not in a debug mode, but it's in a "debug mode". Like seriously, tell me how you think this works if it's not effectively catching/trapping some signal or something that's the equivalent thereof?
I have never in my life met this kind of intransigence on just manifestly obvious things.
Common Lisp programs run by default in a way that calls to undefined functions are detected.
Here the Lisp simply tries to look up the function object from the symbol. There is no function, so it signals a condition (aka exception). The default exception handler gets called (without unwinding the stack). This handler prints the restarts and calls another REPL. I define the function -> the symbol now has a function definition. We then resume and Lisp tries again to get the function definition. The computation continues where we were.
That's the DEFAULT behavior you'll find in Common Lisp implementations.
>Common Lisp programs run by default in a way that calls to undefined functions are detected.
Cool so what you're telling me is that by default every single function call incurs the unavoidable overhead of indirecting through some lookup for a function bound to a symbol. And you're proud of this?
I thought you knew Lisp? Now you are surprised that Lisp often looks up functions via symbols, aka "late binding"? How can that be? That's one of the basic Lisp features.
Next you can find out what optimizing compilers do to avoid it, where possible or where wanted.
At no point in time did I claim to know lisp well. I stated my familiarity at the outset. But what you all did was claim to know a lot about every other interpreted runtime without a grain of salt.
>Next you can find out what optimizing compilers do to avoid it, where possible or where wanted.
But compilers I am an expert in, and what you're implying is impossible: either you have dynamic linkage, which means symbol resolution is deferred until call (and possibly guarded), or you have the equivalent of RTLD_NOW, i.e. early/eager binding. There is no "optimization" possible here because the symbol is not Schrödinger's cat: it is either resolved statically or at runtime. Prefetching symbols with some lookahead or caching is the same thing as resolving at call time/runtime because you still need a guard.
What you're missing is that, unlike any other commonly used language runtime, compilation in CL is not all-or-nothing, nor is it left solely to the runtime to decide which to use. A CL program can very well have a mix of interpreted functions and compiled functions, and use late or eager binding based on that. This is mostly up to the programmer to decide, by using declarations to control how, when, and if compilation should happen.
It should also be noted that, by spec, symbols in the system package (like + and such) should not be redefined. Doing so is "unspecified" behavior, which lets the system make optimizations out of the box.
Outside of that you can selectively optimize definitions to empower the system to make better decisions at the cost of runtime protection or dynamism. However these are all compiler specific.
To be fair, any dynamic language with a JIT will mix interpreted and compiled functions, and will probably claim as a strength not leaving to the programmer the problem of which to compile.
You are incorrect; optimizations are possible in dynamic linking by making first references go through a slow path, which then patches some code thunk to make a direct call. This is limited only by the undesirability of making either the calling object or the called object a private, writable mapping. Because we want to keep both objects immutable, the call has to go into some privately mapped jump table. That table contains a thunk that can be rewritten to do a direct call to an absolute address. If we didn't care about sharing executables between address spaces, we could patch the actual code in one object to jump directly to a resolved address in the other object. (mmap can do this: MAP_FILE plus MAP_PRIVATE: you map a file in a way that you can change the memory, but the changes appear only in your address space and not the file.)
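A toy model of that resolve-then-patch scheme, using a Python dict in place of the jump table (all names are invented for illustration):

```python
# The "jump table": entries start as resolver stubs and get patched
# to direct references after the first call.
table = {}

def make_stub(name):
    def stub(*args):
        target = globals()[name]   # slow path: resolve the symbol once
        table[name] = target       # patch the table: future calls are direct
        return target(*args)
    return stub

def foo(x):
    return x + 1

table["foo"] = make_stub("foo")
print(table["foo"](41))        # first call resolves, patches, returns 42
print(table["foo"] is foo)     # True: the stub replaced itself
```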
Okay, well, when pytorch, tensorflow, pandas, Django, flask, numpy, networkx, scipy, xgboost, matplotlib, spacy, scrapy, and selenium get ported to Lisp, I'll consider switching (only consider, though, since there are probably at least another 20 Python packages that I couldn't do my job without).
i said ported not implemented; the likelihood that any of those libraries sprout lisp bindings is about as likely as them being rewritten in lisp. so it's the same thing and the point is clear: i don't care about some zany runtime feature, i care about the ecosystem.
Stop moving the goalposts: your answer to a commenter who stated that Common Lisp was faster than Python (a fact) was a list of packages, many of which are (1) not even written in Python and (2) some of them actually do have Common Lisp bindings.
Firstly, functions that are in the same compilation unit and refer to each other can use a faster mechanism, not going through a symbol. The same applies to lexical functions. Lisp compilers support inlining, and the spec allows automatic inlining between functions in the same compilation unit, and it allows calls to be less dynamic and more optimized. If f and g are in the same file, where g calls f, then implementations are not required to allow f and g to be separately redefinable. That is to say, if only f is redefined, the existing g may keep calling the old f. The intent is that redefinition has the granularity of compiled files: if a new version of the entire compiled file is loaded, then f and g get redefined together and all is cool.
Lisp symbol lookup takes place at read time. If we are calling some function foo and have to go through the symbol (it's in another compilation unit), there is no hashing of the string "foo" going on at call time. The calling code hangs on to the foo symbol, which is an object. The hashing is done when the caller is loaded. The caller's compiled file contains literal objects, some of which are symbols. A compiled file on disk records externalized images of symbols which have the textual names; when those are internalized again, they become objects.
The "classic" Lisp approach for implementing the global function binding of a symbol is to have a dedicated "function cell" field in the symbol itself. So the compiled module from which the call emanates hangs on to the foo symbol as static data, and that symbol has a field in it (at a fixed offset) from which it can pull the current function object in order to call it (or use it indirectly).
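The function-cell arrangement can be sketched in Python (a toy `Symbol` class, not real Lisp internals):

```python
class Symbol:
    """Toy symbol: the 'function cell' is just an attribute."""
    def __init__(self, name):
        self.name = name
        self.function = None  # the function cell

FOO = Symbol("foo")
FOO.function = lambda a: a + 21

def caller():
    # The caller hangs on to the symbol object and pulls the current
    # function out of its cell on every call; no string hashing here.
    return 1 + FOO.function(20)

print(caller())                   # 42
FOO.function = lambda a: a * 2    # redefine: same symbol, new cell contents
print(caller())                   # 41: the caller sees the new definition
```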
Cross-module Lisp calls have overhead due to the dynamism; that's a fact of life. You don't get safety for nothing.
(Yes, yes, you can name ten "Lisp" implementations which do a hashed lookup on a string every time a function is called, I know.)
> If f and g are in the same file, where g calls f, then implementations are not required to allow f and g to be separately redefinable. That is to say, if only f is redefined, the existing g may keep calling the old f. The intent is that redefinition has the granularity of compiled files: if a new version of the entire compiled file is loaded, then f and g get redefined together and all is cool.
That depends. The Common Lisp standard says nothing on the subject. CMUCL[1] and its descendant SBCL[2] do something clever called local call. It's not terribly difficult to optimize hot spots in your code to use local call. Outside of the bottlenecks, the full call overhead isn't significant for the overwhelming majority of cases. It's not like full call is any more expensive than a vtable lookup anyhow.
Do you think Python or Ruby or PHP are any different? And yet, not one of them actually chose to use this in a sane way, where a simple lookup error doesn't have to crash the whole program.
Restarting from the debugger keeps state without the third-party Python hacks that you mention. In this example Python increments x twice, Lisp just once:
>>> x = 0
>>> def f():
... global x # yuck!
... x += 1
...
>>> def g(y):
... h()
...
>>>
>>> g(f())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in g
NameError: name 'h' is not defined
>>>
>>> def h(): pass
...
>>> g(f())
>>>
>>> x
2
Versus:
* (setf x 0)
* (defun f() (incf x))
* (defun g(y) (h))
* (g(f))
debugger invoked on a UNDEFINED-FUNCTION in thread
#<THREAD "main thread" RUNNING {1001878103}>:
The function COMMON-LISP-USER::H is undefined.
Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [CONTINUE ] Retry calling H.
1: [USE-VALUE ] Call specified function.
2: [RETURN-VALUE ] Return specified values.
3: [RETURN-NOTHING] Return zero values.
4: [ABORT ] Exit debugger, returning to top level.
("undefined function")
0] (defun h() nil)
; No debug variables for current frame: using EVAL instead of EVAL-IN-FRAME.
H
0] 0
NIL
* x
1
Can you connect to a running server or other running application, inspect live in memory data, change live in memory data, redefine functions and classes and have those changes take immediate effect without restarting the server or app?
I think that is the big difference.
It’s a triple-edged sword bonded to a double-barreled shotgun, though, and the very antithesis of the idea of functional programming vs mutable state.
>Can you connect to a running server or other running application, inspect live in memory data, change live in memory data, redefine functions and classes and have those changes take immediate effect without restarting the server or app?
The answer to all of these things, at least in python, is emphatically yes. I do this absolutely all the time. You can debug from one process to another if you've loaded the right hooks. You don't need to take my word for it or even try to do it; you just need to reason a fortiori: python can do it because it's an interpreter with a boxed calling convention and managed memory, just like lisp interpreters.
It's amazing: people will die on this hill for some reason but lisp isn't some kind of mysterious system that was and continues to be beyond us mere mortal language/runtime designers. the good ideas in lisp were recognized as good ideas and then incorporated and improved upon.
The answer to all these things should be "just doesn't work in practise", not for real programs anyway. Unlike Lisp, Python doesn't lend itself well to this mode of development.
Primitive CLI-like tinkering, figuring out language features, calc-like usage - maybe. But not a single time in 15 years of doing Python across the industry I saw anybody using these features for serious program development, or live coding, or REPL-driven development.
>Primitive CLI-like tinkering, figuring out language features, calc-like usage - maybe. But not a single time in 15 years of doing Python across the industry I saw anybody using these features for serious program development, or live coding, or REPL-driven development.
I swear you people are like ostriches in the sand over this - Django, pytest, fastapi, pytorch, Jax, all use these features and more. I work on DL compilers and I use those features every day - python is a fantastic edsl host for whatever IR you can dream of. So just because you're in some sector/area/job that doesn't put you in contact with this kind of python dev doesn't mean it's not happening, doesn't mean that python doesn't support it, doesn't mean it's an accidentally supported API (as if such a thing could even be possible).
Really what this convo is doing is underscoring for me how there really is nothing more to be learned from lisp - I had a lingering doubt that I'd missed some aspect but you guys are all repeating the same thing over and over. So thanks!
> Really what this convo is doing is underscoring for me how there really is nothing more to be learned from lisp - I had a lingering doubt that I'd missed some aspect but you guys are all repeating the same thing over and over.
No, you keep on misunderstanding what people are trying to tell you. It’s a communication failure. The thing that you think you are doing in Python is not the thing that people are doing in Lisp.
As an example, I suppose that when you’re developing code in Python’s pseudo-REPL you often reimport a file containing class definitions. When you do that, what happens to all the old objects with the old class definition? Nothing, they still belong to the old class.
If you did this on a REPL connected to a server, what would happen to the classes of objects currently being computed on? Nothing, they would still belong to the old class.
In Lisp, it’s different. There is a defined protocol for what happens when a class is redefined. Every single object belonging to the old class gets updated to the new class. You can define code to get called when this happens (say, you added a new mandatory field, or need to calculate a new field based on old ones — and of course ‘calculate’ could also mean ‘open a network connection, dial out to a database and look up the answer’ or even ‘print the old object and offer the system operator a list of options for how to proceed’). And everything that is currently in-flight gets updated, in a regular and easy-to-understand way.
People are telling you ‘with Lisp central air conditioning, I can easily heat my house in the winter’ and you are saying ‘with Python, I can easily build a fire whenever my house gets cold too!’
Like others here, I don't understand how those features (seamless continuation of a program with its exact state) could possibly work in Python.
What I do know is that the Python community has a propensity for claiming that approximations of complex features by gigantic hacks work and are sound, while they are not.
The Python community also has an extreme tolerance for unsound and buggy software that is propped up by censoring those who complain. Occasional complaints are offset by happy (and selected) marketing talks at your nearest PyCon.
I think in just about every response you left in these threads, you misunderstood what was being said. Possibly through impatience, or just plain arrogance. I really encourage you to spend some time trying to understand how interactivity/restartability (as in Lisp restarts, not process restarts) is built into the language. Especially if you're specializing in the compilers of dynamic languages.
You might also check out Smalltalk, which has a similar level of dynamism.
Listen, I've been on many sides: Lisp stuff, Python stuff, C stuff, etc. I don't think that "something has to be learned". Lisp has many good ideas, and Python has good ideas, but REPL-driven development is not one of them. Let me explain.
You see, it's not about how REPL in Python just does not allow something (even though it is rather primitive). Python makes it superhard to tweak things, even if you can change a certain variable in memory. Here's why.
Think about Lisp programs, including OOP flavours. These fundamentally consist of 2 things: a list of functions + a list of variables. If you replace a function or a variable, then every call will go through it. And that's it. You change a function and all calls to it will be routed through the new implementation. Because of the REPL-centric culture, people really do organise their programs around this style of development.
Python was developed with a dynamic OOP idea in mind where everything is an object, everything is a reference. Endless references to references of references to references. It's a massive graph, including methods and functions and objects and classes and metaclasses. There is no single list of functions where you can just replace the name-to-implementation mapping.
TL;DR Replacing a single reference doesn't change much in the general case. It does work in some cases. But that's not enough for people to rely on it as main development driver.
Python fundamentally makes a different tradeoff than your average lisp.
I've mentioned this in a sibling thread, but it's interesting to compare this to Ruby. Ruby does support the sort of redefinition you're talking about. And yet REPL-centric development isn't primary there, either. Yes, there are very good REPL implementations, but I don't know of anyone who develops at the Ruby REPL the same way you would in a Lisp REPL. Maybe it's a performance thing? Maybe it's the lack of images?
BTW, you mentioned that classes can be redefined in Ruby.
How does this work for existing class instances? Anonymous pieces of code, methods, etc.? Even Lisp itself does not save you from all the corner cases; it's the dev culture that makes all these wonderful things possible.
The first time you do `class A....end` you're defining the class. Instances when they are created keep a reference to that class - which itself is just another object, an instance of the class `Class` which just so happens to be assigned to the constant `A`. If you later say `class A... end` and redefine a method, or add something new, what you're actually doing is reopening the same class object to add or change its contents, so the reference to that class doesn't change and all the instances will get the new behaviour. If you redefine a method, calls to that method name will go to the new implementation.
So in that sense it works like you'd expect, I think. As I said, Ruby is very lispy - Matz lists Lisp as one of the inspirations, and I think I'm right in saying he even cribbed some implementation details from elisp early on.
It just shows that there's no understanding of the depth of the problem.
Years ago I tried doing something like this (redefining functions, classes, etc) in a dev environment of a MMO game. This would be crazily useful as the env took 5-10 mins to boot. And game logic really needs tweaking A LOT.
I really wanted this to work. After all, it really feels as if python has everything for it. Banged my head against the wall for weeks, failed ultimately and gave up on live development in python completely.
In contrast, as a heavy emacs user, I tweak my environment a couple of times a day. I restart this lisp machine a couple of times a month.
No, you're fine. For certain things Python REPL-like live development is ok indeed. Say, if your program boils down to a list of functions. Think request handlers or something.
I need to be very clear so that no one misunderstands: this is not proprietary PyCharm functionality; this is all due to sys.settrace and the pydev debug protocol.
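A minimal sketch of what `sys.settrace` gives you (a toy tracer; real debuggers such as pydevd build their breakpoint machinery on this hook):

```python
import sys

events = []

def tracer(frame, event, arg):
    # Called for call/line/return/exception events in Python code.
    events.append((event, frame.f_code.co_name))
    return tracer  # keep tracing inside this frame too

def work():
    return 40 + 2

sys.settrace(tracer)
result = work()
sys.settrace(None)

print(result)                      # 42
print(("call", "work") in events)  # True: the hook saw the call
```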
>The answer to that question is the differentiating point of repl-driven programming. In an old-fashioned Lisp or Smalltalk environment, the break in foo drops you into a breakloop.
Do you want me to show you how to do this in a Python REPL? It's literally just breaking on exception...
Since Smalltalk was mentioned, please consider following points:
1. Smalltalk has first class, live activation records (instances of the class Context). Smalltalk offers the special "variable" thisContext to obtain the current stack frame at any point during method execution.
If an exception is raised, the current execution context is suspended and control is transferred to the nearest exception handler, but the entire stack frame remains intact and execution can be resumed at any point or even altered (continuations and Prolog-like backtracking facilities have been added to Smalltalk without changing the language or the VM).
2. The exception system is implemented in Smalltalk itself. There are no reserved keywords for handling or raising exceptions. The implementation can be studied in the live system and, with some precautions, changed while the entire system is running.
3. The Smalltalk debugger is not only a tool for diagnosing and fixing errors; it is also designed as a tool for writing code (or, put differently, revising conversational content without having to restart the entire conversation, including its state). Few systems offer that workflow out of the box, which brings me to the last point.
4. I said earlier that Racket is different from Common Lisp. It's not only about language syntax, semantics, its implementation or other technicalities. It is also about the culture of a language, its history, its people, how they use a language and ultimately, how they approach and do computing. Even in the same language family tree you will find that there are vast differences, if you take said factors into account, so it might be worthwhile to study Common Lisp with an open mind and how it actually feels in use.
No, it’s not: an exception unwinds the stack all the way up to where the exception is caught. By the time the enclosing Python pseudo-REPL sees the undefined function error, all the intervening stack frames have dissolved. The way it works is that a function tries code, and catches exceptions.
In Lisp (and I believe Smalltalk), it doesn’t work that way: there is an indirection. Rather than try/except, a function registers a condition handler; when that particular condition happens, the handler is called without unwinding the stack. That handler can do anything, to include reading and evaluating more code. And re-trying the failed operation.
It would be possible to implement this in Python, of course, but Python doesn't offer the affordances (e.g. macros) that Lisp has, and it's not built into the language like it is in Lisp (e.g., every single unoptimised function call in Lisp offers an implicit 'retry').
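To make that indirection concrete, here is a minimal Python sketch (invented names; a real condition system does much more): the low-level code calls registered handlers while its own frame is still live, and only unwinds if nobody supplies a resolution:

```python
handlers = []  # dynamically scoped handler stack (simplified to a global)

def signal_condition(name):
    # Unlike `raise`, this runs handlers WITHOUT unwinding: we are still
    # inside the frame that hit the problem, so a handler can hand back
    # a value and the computation simply continues from here.
    for handler in reversed(handlers):
        result = handler(name)
        if result is not None:     # handler chose a "use-value" restart
            return result
    raise NameError(name)          # nobody handled it: now we unwind

def compute():
    return 1 + signal_condition("foo")  # "foo" plays an undefined function

handlers.append(lambda name: 41)        # handler supplies a replacement value
print(compute())                        # 42: resumed, no stack unwound
```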
IMO, the downside with the term REPL is that if you don't understand the specific Lisp definitions of the terms, it sounds like any other interactive execution environment.
The repls you mention are not like lisp repls. You're being downvoted because your comment makes it sound like you've never programmed a lisp but have strong opinions nonetheless.
Not the OP but would somebody be able to summarize HOW are the lisp REPLs different then to me?
I've written a limited amount of Clojure and Common Lisp, just to play around, and I don't recall any difference between the Clojure REPL and the REPL I get for, say, Kotlin inside IntelliJ IDEA.
Maybe the ability to send expression from the IDE into the REPL with one keybind but I cannot say it's not possible with the Kotlin one right now because that's not what I use it for.
Thanks, I forgot about this aspect of live program editing. Whether or not it's possible (or how close quick live reload gets to this), it's definitely not a first-class citizen like you presented. It also reminds me of Pharo (or maybe just Smalltalk, I've only played with Pharo) where you build the program incrementally "inside out".
It does make me wonder how applicable this way of programming is to what I do at work, but that is more because of the technologies and architectural choices, where most of the work is plumbing together stuff that is not local to the program itself. And maybe even for that, with the edges mocked out, it would make sense to work like this.
Again, interesting video that made me think. Thanks.
Being able to interactively update code in response to an error, without leaving the error context and being able to restart stack frames (not just a “catch” or top level, as in most languages) is one of the key features that makes REPL-driven development possible. Or at least that’s how I see it.
It’s not something you always need to use, but it can be handy, especially for prototyping and validating fixes.
There's a person above saying that it's about being able to mutate program state from the repl, which is a thing that's also possible in any repl for a language with managed memory.
Not just from the REPL, but from the REPL in the context where the error occurred, without having to structure the code ahead of time to support this. It’s not always an important distinction, but it’s handy when prototyping or if the error is difficult to reproduce.
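A sketch of what "structuring the code ahead of time" means in Python: you'd wrap the call site in a retry loop yourself. The `call_with_retries` and `fix` helpers below are hypothetical; `fix` stands in for the interactive prompt where you'd define the missing function before retrying from the same frame.

```python
def call_with_retries(f, fix, *args):
    # Keep retrying f from this frame; `fix` is a stand-in for
    # dropping into an interactive prompt to repair the environment.
    while True:
        try:
            return f(*args)
        except NameError:
            fix()

def compute(x):
    return missing(x) + 1      # `missing` is not defined yet

def fix():
    # the "interactive" repair: define the missing function
    globals()["missing"] = lambda x: x * 2

print(call_with_retries(compute, fix, 20))   # → 41
```

Note the retry re-runs `compute` from the top; it does not resume mid-frame the way a Lisp restart can, which is exactly the distinction being made above.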
There are some other affordances for interactive programming, such as a standard way to update existing instances of classes. I’m sure you could implement this sort of functionality in any language, but this is universal and comes for free in Common Lisp.
CL also has other interesting features such as macros, multiple dispatch, compilation at runtime, and being able to save a memory snapshot of the program. It’s quite unique.
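For comparison, a sketch of the class-update situation in Python: re-executing a `class` statement binds the name to a brand-new class object, and existing instances keep pointing at the old one unless you patch them by hand, which is the bookkeeping CL's `update-instance-for-redefined-class` does automatically.

```python
class Point:
    def describe(self):
        return "v1"

p = Point()

# "Redefining" the class creates a new class object; the existing
# instance still references the old one.
class Point:
    def describe(self):
        return "v2"

before = p.describe()          # still "v1"
fresh = Point().describe()     # "v2"

# Manual fix: repoint the instance at the new class -- what Common
# Lisp does automatically for every live instance.
p.__class__ = Point
after = p.describe()           # now "v2"
```

In a long-lived image with many instances scattered through the heap, doing this by hand is the part that stops being fun.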
CL condition system + REPL = god mode. Your software crashes? Do you go back and set a breakpoint? No, because you’re already in the stack trace in the REPL, exactly where the crash occurred. You fix the code, reload it, and tell it either to run where it left off, or to restart from an earlier point.
That is definitely not the same. I write a lot of python code and the interpreter / interactive development is just not as good as it is in Common Lisp.
To my knowledge there’s no real “mainstream” language that goes all in on interactive development. Breakpoints and traceback are all perfectly cromulent ways to debug, but it’s really not the same, sadly.
>you've never programmed a lisp but have strong opinions nonetheless
i've written racket and clojure (and mathematica, which is a lisp). not multiple 10kloc but enough to understand what the big ideas are. claiming i just haven't written enough lisp is basically the logical fallacy of assuming the premise.
But Racket and Clojure are very different from Lisps such as Common Lisp that embrace the idea of a lively, malleable and explorable environment, which is arguably the biggest idea.
The content on those pages clearly explains the differences.
Mathematica is a symbolic language based on 'rewriting'. There are other examples - Prolog, a logic language, would be one. Most other computer algebra systems are in this category too, similar to Mathematica: Macsyma/Maxima, Axiom, ...
> WolframLang has all the characteristics of LISP
It has many, but there are a lot of differences, too.
The big difference is the actual engine. Mathematica is based on a 'rewrite system'. It translates expressions by applying rewrite rules.
Lisp evaluates expressions either with an interpreted evaluator or by running compiled code. Lisp has macros, but those are expanded before the code is compiled or run. The practical effect is that in many Lisp implementations usually all code is compiled, incl. user code. Mathematica relies on C++ for that instead: most of its UI is implemented in C++, where many Lisp systems would implement it in natively compiled Lisp.
Thus the computation model is very different. Using a rewrite system for programming is quite clunky and inefficient under the hood. A simple example would be to look at how lexical closures are implemented.
Another difference is that Mathematica does not expose the data representation of programs to the user all the time, where Lisp programs are also on the surface written as s-expressions (aka symbolic expressions) in text.
The linked page from the Mathematica book also claims that Mathematica is a higher level language. Which is true. Lisp is lower level and languages like the Wolfram Language can be implemented in it. That's one of its original purposes: it's an implementation language for other ('higher-level') languages. Sometimes it already comes with embedded higher-level languages. CLOS + MOP (the meta-object protocol) would be an example for that.
> Another difference is that Mathematica does not expose the data representation of programs to the user all the time, where Lisp programs are also on the surface written as s-expressions (aka symbolic expressions) in text.
>Thus the computation is very different. Using a rewrite system for programming is quite clunky and inefficient under the hood. A simple example would be to look how lexical closures are implemented.
You're skimming a couple of paragraphs without actually knowing much about Mathematica. It's absolutely not the case that Mathematica is purely a rewrite system; it's just that it's very good at beta reduction because it has a strong focus on CAS features.
> Lol I am 100% sure that the majority of lisps cannot be aot compiled.
Ahead-of-time compiling has been the principal method in mainstream Lisps going back to the 1960's. The Lisp 1.5 Programmer's Manual from 1962 describes ahead-of-time compiling.
The curious thing is how can you be "100% sure" in making a completely wrong statement, rather than some lower number, like "12% sure".
>The curious thing is how can you be "100% sure" in making a completely wrong statement, rather than some lower number, like "12% sure".
The reason is very simple and surprisingly straightforward (but requires some understanding of compilers): dynamically typed languages that are amenable to interpreter implementations are very hard to compile AOT. Now note I have since the beginning emphasized AOT - ahead of time - but this does not preclude JITs.
But in reality I don't really care about this aspect - it was the other guy who for whatever reason decided to flaunt that clisp can be compiled when comparing it with Mathematica.
For someone playing with Mathematica, you have a curious intellectual process. To be clear, I'd rather have you doing that than hocking loogies at cars from an overpass.
How can I make this any more clear? You are able, in Mathematica, to write Plus[a, b] with your own fingers on your own keyboard and it will be interpreted as the same thing as a+b
> I'd expect that they can.
Clisp is not the only lisp - I can name 10 others that cannot be compiled.
"The CS implementation of Racket supports several compilation modes: machine code, machine-independent, interpreted, and JIT. Machine code is the primary mode, and the machine-independent mode is the same as for BC."
CS is the new implementation of Racket on top of the Chez Scheme runtime. Chez Scheme is known for its excellent machine code compiler.
"Machine code is the primary mode"
> Do you really know what you're talking about here?
If you have time to research Lisp implementations until you gather ten that don't have compilers, you might want to take a few seconds to visit https://clisp.cons.org to find out what Clisp means.
> "seems you either don't know what lisp or you've never written mathematica"
Meanwhile, you brought up examples from Mathematica docs that talk about head/tails (car/cdr) but by that logic, Python is a Lisp too because you have:
list[0]
and
list[1:]
Maybe your Clojure/Racket experience wasn't enough to teach you what the essence of Lisp was. From your first link:
"Mathematica expressions are in many respects like LISP lists. In Mathematica, however, expressions are the lowest-level objects accessible to the user. LISP allows you to go below lists, and access the binary trees from which they are built."
That right there is telling you that Mathematica is not a Lisp.
I'm sorry but are you really going to pretend like car and cdr are not core to lisp?
>list[0] and list[-1]
That is not car and cdr; closer would be list[0] and list[1:] if lists were cons in python.
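For what it's worth, the analogy only holds if you first build actual cons cells. A toy sketch in Python (the `cons`/`car`/`cdr` helpers are hypothetical, nothing built in):

```python
def cons(head, tail):
    return (head, tail)        # a cons cell as a 2-tuple

def car(cell):
    return cell[0]             # first element of the cell

def cdr(cell):
    return cell[1]             # rest of the list

lst = cons(1, cons(2, cons(3, None)))   # the list (1 2 3)
print(car(lst))        # → 1
print(car(cdr(lst)))   # → 2
```

Unlike slicing, `cdr` here shares structure with the original list rather than copying it, which is part of why cons cells matter to Lisp.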
>Mathematica expressions are in many respects like LISP lists. In Mathematica, however, expressions are the lowest-level objects accessible to the user. LISP allows you to go below lists, and access the binary trees from which they are built
This is a quote from 1986. I wonder if the language has changed much since then
A REPL isn't just a REPL. You are comparing a modern-day Toyota Corolla to a spaceship sent from the future back to the 80s. One is just on a radically different level.
At least when it's backed by SLY or SLIME.
and i'm still wondering which of these things i can't do in a python repl? note macroexpansion doesn't count because that's not a dimension of the repl.
I don’t think I can patch a function at runtime without losing state in Python either - the act of redefining the function causes the variables to be reset, but in Lisp the bindings are untouched.
I just did it - it works perfectly fine. Debug-run your code, an exception will be thrown at the call site, step up one frame from the exception (ie module level), define the missing function, call again and it succeeds - all without leaving the same repl instance. Don't believe me? Try it.
I'll say it again: you guys are in plain denial not about python or lisp as languages but about how interpreters work. There's just nothing more to be said about this dimension of it.
What's being asked is, after defining the missing function, whether it's possible to clear the exception and continue the execution without having to restart from the beginning. This is very useful when you hit an exception after 10 minutes of execution. (This is a real usecase which would have saved me untold hours.)
I hope it's possible somehow, but if you just load pdb (e.g. with %pdb in ipython), pdb is entered in post-mortem mode, from which it's impossible to modify code/data and resume execution. Setting a breakpoint (or pdb.set_trace()) would require knowing about the bug ahead of time. Does it only work when interrupting with a remote debugger rather than on exception?
But wouldn't that be impossible if the interpreter unwinds the stack looking for exception handlers before finding that there are none? In other languages/VMs such as SBCL, the runtime can look up the stack for handlers and invoke the debugger before destructively unwinding.
The other guy up above claims this is a feature unique to calling functions, rather than all error states, and that the lisp runtime specifically guards against this. If that's the case then my answer is very simple: it would be trivial to guard function calls (all function calls) to achieve the exact same functionality in python. I'm in bed but it would literally take me 5 minutes (I would hook eval of the CALL_FUNCTION opcode). Now it would be asinine because it's a six-sigma event that I call a function that isn't defined. On the other hand, setting a breakpoint and redefining functions as you go works perfectly well and is the common case and simultaneously the kind of "repl driven development" discussed all up and down this thread.
Thank you, you're very helpful despite this raging flame war. I'm glad to hear you can hook opcodes like that, then you really can do anything. And I really need to give "set a defensive breakpoint and then step through the function" an honest go. Now that you say it, I realise I haven't.
>I'm glad to hear you can hook opcodes like that, then you really can do anything
Just in case someone comes around and calls me a liar: the way to do this is to spread the bytecodes out one per line and set a line trace. Then when your bytecode of choice pops up, do what you want (including manipulating the stack) and advance the line number (CPython lets you manipulate the line number).
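A sketch of the line-trace mechanism described above, using `sys.settrace` and assignment to `frame.f_lineno` (the same facility pdb's `jump` command uses). This version jumps over a whole source line rather than a single bytecode, under the assumption that one line holds the operation you want to skip:

```python
import sys

def demo():
    steps = []
    steps.append("a")
    steps.append("b")   # the tracer will jump over this line
    steps.append("c")
    return steps

# line number (relative to the file) of the append("b") call
target = demo.__code__.co_firstlineno + 3

def tracer(frame, event, arg):
    if event == "call":
        # only trace demo's frame; decline everything else
        return tracer if frame.f_code.co_name == "demo" else None
    if event == "line" and frame.f_lineno == target:
        # reassigning f_lineno from a 'line' trace event moves the
        # frame's next-executed line, skipping the target line
        frame.f_lineno = target + 1
    return tracer

sys.settrace(tracer)
result = demo()
sys.settrace(None)
print(result)   # → ['a', 'c']
```

CPython only allows the `f_lineno` assignment from within a trace function handling a 'line' event, and the jump can't cross block boundaries (into a loop, out of a `finally`, etc.), so this is a sharp tool with real restrictions.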
Calling again and continuing are not the same thing. Sure, with the above trivial example it is. But if the parent function has non idempotent code before calling the missing function (like doing some global change / side effects), then calling again will give a different result than just continuing from the current state.
So is it possible to define the missing function and continue from the same state in Python? I don't think so, but I'm not a heavy Python user (just for small/medium scripts).
>So is it possible to define the missing function and continue from the same state in Python? I don't think so, but I'm not a heavy Python user
This is a pointless debate - someone has to catch the exception, save caller registers, handle the exception (if there's a handler) or reraise. Either you have to do it (by putting a try/except there) or your runtime has to be always defensively saving registers or something. Lisp isn't magic, it's just a point on the trade-off curve, and I have without a shadow of a doubt proven that that point is very close to Python (wrt the repl). So okay, maybe clisp has made some design decisions that make it a hair more effective at resuming than Python. Cool, I guess I'll just ignore all the other Python features where there's parity or advantage because of this one thing /s.
I'll take this as an answer to my sibling comment that the answer is "No". I'm really sad CPython can't do that, but maybe some other Python can. It shouldn't necessarily be any slower for the interpreter to figure out where to jump to before saving the execution trace and jumping.
It's not "pointless", I was tearing out my hair and losing days because I couldn't do this in CPython. Yes, I'd much rather use Python than Common Lisp regardless.
It works in code compiled from c++ too: define and install a signal handler for SIGSEGV, call a function whose symbol can't be runtime-resolved by the linker, SIGSEGV is raised and caught, define your function (in your asm du jour), patch the GOT to point from the original symbol to wherever the byte array with your asm lives, and voila.
I'll say it again: what exactly do you think your magical lisp is doing that defies the laws of physics/computing?
> It works in code compiled from c++ too: define and install a signal handler for SIGSEGV, call a function whose symbol can't be runtime-resolved by the linker, SIGSEGV is raised and caught, define your function (in your asm du jour), patch the GOT to point from the original symbol to wherever the byte array with your asm lives, and voila.
I don't need to do anything like that in Lisp. I just define the function and RESUME THE COMPUTATION WHERE IT STANDS in my read eval print loop. << important parts in uppercase.
> My point is very simple: I can do it too, in any language I want, and so there's nothing special about lisp.
The big difference is: "I can do it too" means YOU need to do it. Lisp does it for me already, I have not to do anything. I don't want to know what you claim you can do with C++, show me where C++ does it for you.
Telling me "I can do it too" is not a good answer. Show me where the language implementation (!) does it for you.
do people not realize that basically every vm/interpreted language has a repl these days?
https://www.digitalocean.com/community/tutorials/java-repl-j...
https://github.com/waf/CSharpRepl
https://pub.dev/packages/interactive
not to mention ruby, python, php, lua
hell even c++ has a janky repl https://github.com/root-project/cling
edit: i get downvoted by the lisp crowd every time i bring up that the repl isn't a differentiating feature anymore :shrug: