> The Rust or Haskell compilers, for example, insist on policing whatever private understanding you might have about the meaning of your code.
The thing about "private understandings" is that they're just another way of saying "we expect ceaseless, flawless vigilance" (as one writer put it), not to mention "flawless communication and training."
Languages which impose weird restrictions tend to do so because it allows them to offer (nearly) ironclad guarantees about something else.
There are certain programming idioms that work brilliantly in Haskell. Those same idioms would be utterly miserable in the presence of unrestricted mutation.
Or to take the author's other example, Rust forbids shared mutable state, and it keeps careful track of ownership. But I can freely use native CPU threads without worrying about anything worse than a deadlock, and I can bang directly on raw bytes without worrying about anything worse than a controlled runtime failure. And this remains true even if team communication occasionally fails or if someone makes a mistake.
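To make that concrete, here's a minimal sketch (mine, not the parent commenter's) of what "nothing worse than a deadlock" means in practice: the unsynchronized version is rejected at compile time, and the Mutex version is the only one that compiles:

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // let mut count = 0;
        // thread::spawn(|| count += 1); // rejected: the closure may outlive
        //                               // `count`, and two threads would be
        //                               // mutating it without synchronization

        // The compiling version: shared ownership (Arc) plus a lock (Mutex).
        let count = Arc::new(Mutex::new(0));
        let handles: Vec<_> = (0..4)
            .map(|_| {
                let count = Arc::clone(&count);
                thread::spawn(move || *count.lock().unwrap() += 1)
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(*count.lock().unwrap(), 4);
    }

The worst remaining failure mode really is a deadlock or a panic, not a silent data race.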
Sometimes I want to rule out entire classes of potentially dangerous mistakes, and not just have a "private understanding" that nobody will ever make certain mistakes.
As always, it's a matter of using the right tool for the job. If you need to write a high-performance, heavily-threaded network server that parses malicious binary data, Rust is a great tool because of those restrictions. If you need to do highly exploratory programming and invent new idioms to talk about your problem domain, Common Lisp is awesome. And if you need to build libraries where everything has a rigid mathematical structure, Haskell is a great tool.
In my experience, Common Lisp is a deeply opinionated language, and its most dramatic opinion is that "your code and your data should have identical representations and structures." And for the right problem, that restriction is extremely powerful.
It's all about thinking frameworks. Ensuring that effects are declared, that data has a static lifetime, or that your program is rewritable data are tools you can use to focus your mind on one part of the problem and less on another. Just like having unrestricted access to everything your computer can do, or data + code encapsulation with multiple access levels.
Not recognizing this (as in the article) is a clear sign of a single-paradigm developer who never really understood anything else but is too arrogant (insecure, maybe?) to admit he doesn't understand something. So the problem must be with everybody else.
Anyway, a funny red flag is the reaction to Haskell IO. People who never really learned Haskell tend to complain about IO with a deeper complaint of "why do I have to keep using monads?", while after people learn it they tend to complain about the lack of monad composition, with a deeper complaint of "why can't I use monads for everything?"
After you learn how to handle state, input data, output data, error handling, parsing context, resource mocking, guaranteed resource release, and a lot of other things just by declaring the correct monad when you call your code, you tend to want to solve several of those at the same time.
But to solve several of them, you need to compose your monads, and there is no good general way to do that. There are context-dependent ways to compose some of them, but you are always thinking about their compatibility, and they often require that you specialize your code.
> If you need to write a high-performance, heavily-threaded network server that parses malicious binary data, Rust is a great tool because of those restrictions.
If you're dealing with Untrusted File Formats, perhaps you should use a tool purpose-built for Wrangling them Safely: WUFFS.
You won't find a Hello, World example for WUFFS because Hello World prints out text to your console, which is exactly the sort of nefarious stuff bad guys might try to do and so WUFFS doesn't even provide any mechanism you could use to do that even if you wanted to. But it does Wrangle Untrusted File Formats Safely, shrinking your remaining problem space.
For example WUFFS would be appropriate for taking files your users claim are JPEG "photographs" they uploaded for their "profile picture" and turning each one into either a 64x64 pixel RGB array or an error without any risk that they seize control of your profile picture program and do goodness knows what else instead.
Although Rust's memory safety means you can achieve confidence that a "photograph" doesn't corrupt memory, it doesn't require the rigour WUFFS brings to file parsing, so a Rust program could end up confused about unforeseen state while parsing the file. For example, in Rust you can write a function that might mistakenly overflow a 32-bit integer, and in production it will just silently wrap. In WUFFS that function won't compile until you either decide explicitly what should happen (e.g. wrapping, saturation) for each overflow, or you trap all the cases where it could overflow as an error. This is very annoying of course, but we're parsing Untrusted File Formats, and if we leave anything to chance it will be exploited.
I'm pretty confident that I can parse untrusted binary data in Rust with nothing worse than a denial of service. (And I have over a billion 'cargo fuzz' iterations to prove it.)
But WUFFS is even more opinionated and strict than Rust, and so it can offer even stronger guarantees. Admittedly, it looks pretty obscure and the documentation is a little light, but it's a great idea.
I am really just done with CVEs, or at least the sort of CVEs that appear in C programs. We know how to prevent so many classes of security holes completely, allowing us to focus on the harder challenges.
> For example, in Rust you can write a function that might mistakenly overflow a 32-bit integer, and in production it will just silently wrap. In WUFFS that function won't compile until you either decide explicitly what should happen (e.g. wrapping, saturation) for each overflow, or you trap all the cases where it could overflow as an error.
You can do this in Rust, up to a point. There's a lint to ban dangerous arithmetic: https://rust-lang.github.io/rust-clippy/master/index.html#in... . You can then use the {saturating,wrapping,checked,overflowing}_{add,div,mul,abs,...}() methods to decide exactly what should happen on overflow.
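For illustration, a quick sketch of those standard-library methods (all stable Rust):

    fn main() {
        let x: u32 = u32::MAX;

        // Decide explicitly what overflow should mean:
        assert_eq!(x.wrapping_add(1), 0);            // wrap around
        assert_eq!(x.saturating_add(1), u32::MAX);   // clamp at the maximum
        assert_eq!(x.checked_add(1), None);          // report failure
        assert_eq!(x.overflowing_add(1), (0, true)); // value plus overflow flag
    }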
But WUFFS seems a lot nicer. Judging by the README it first tries to determine whether overflow is actually possible, while Clippy will happily forbid you from running "1 + 1".
> As always, it's a matter of using the right tool for the job.
Nice in theory. In practice a manager will ask you to build A, then later they will ask you to add B and C, and these components should interact seamlessly of course. If you chose your language based on A, then you might get stuck on B and C.
Some of my favorite organizations have been the ones where someone senior picked a very tiny number of "official" languages early on, and established clear rules about when to use which. Some examples:
- "Our web apps get written in Typescript."
- "Our data science gets written in Python."
- "Our inner loops get written as Rust CLI tools that contain no business logic." (Or whatever. It depends on the problem domain.)
This works because TypeScript, Python and Rust are all very good at certain tasks, but they're each also fine general-purpose languages with lots of third party libraries. If you do somehow wind up writing a webserver in Python, you'll be fine. Of course, something like Haskell is a deeper commitment with more tradeoffs, and it's the wrong answer for most organizations. (But if I were mathematically modeling the pricing of complex derivative contracts, Haskell would make the list.)
But this is why good senior people with taste are so useful, and why competent technical management is worth its weight in gold. Sometimes you want a toolbox with 2 or 3 well-chosen, versatile tools. And sometimes you get organizations that try to pound in nails with a screwdriver.
It sort of makes sense, if things can still change. Have you never been at a company where the above list was:
- Our web apps get written in Perl (later added: with jQuery for FE interactivity)
- Our data science gets written in Oracle stored procedures
- Our core business logic is written in Oracle stored procedures!
- Fat clients are written in Smalltalk
These were arguably even sort of well chosen at the time. (I won't say which company this was, as that would identify me, and I changed some details, but this is a real-life example.)
In such an environment, there is usually generation after generation of these software types, with upgraded (or at least updated) basic technological choices. Some of these choices are recent and possibly bleeding edge, some are old and possibly only good at the time, and some prove misguided immediately or in hindsight.
For example I've seen:
- COM and ASP.Net to Java and JSF (general web apps)
- Forté 4GL (look it up, it was very interesting) to J2EE (general enterprise software)
- MIDP Java to React Native, following mobile phone evolution (mobile app front ends of general enterprise software)
- HTML frame messes to Java applets to JavaScript interactivity (high profile public web site)
- ColdFusion, or something of the sort, to Sharepoint (another high profile public web site)
- SharePoint to a BPEL workflow engine (important workflow-oriented application)
In my case there were true believers using their own COM components to complicate the architecture of web apps and web services, without the slightest interest in Windows APIs beyond "we are stuck with a fun server OS".
I was somewhat involved in this process at an org I worked at. The thing is, some languages really are the best tool for the job, and trying to build something in the wrong language can really hurt software quality or reliability. That said, I agree you should still try to standardise to a reasonable degree.
You mean, your manager calls you for an embedded software project, and the next thing you know you have to turn your code into a user-facing web application?
Stranger things have happened. At a previous job there was a 20+ year old codebase that targeted embedded Windows CE, which was requested to be used as the backend of a mobile app. The solution was a bit Heath Robinson, but you could remote-desktop onto the server and watch the windows flicker as they processed web requests as if they were input sequences.
Was this mad? Yes. Was it quicker to market than a rewrite? Yes.
Also, if things do move this far away from the original vision -- and it can happen especially in a start-up -- that's a rare case where "total rewrite" is a valid software engineering strategy that can deliver benefits out-weighing its cost.
I've done a few total rewrites now, and the only one I am sure was a good idea was for this reason (DPX, the Java re-implementation of Garlik's core software before it was purchased by Experian Consumer Services many years ago).
I have worked for someone who believes you should rewrite every five years or so, just so the current team knows how it works and so no-longer-needed stuff can fall away. I think it presupposes enough modularity that you can do it without everything in the org having to change.
Business logic features are rarely constrained by the choice of language. All that really changes are probability of implementation error and ease of implementation. I’ve rarely had the experience that a particular business capability is easy in one language and hard in another language, but vice versa for a different feature.
Are you mentally defining "business logic" as the 20 or so lines of code in a program that are completely independent of all input, output, OS libraries, and frameworks?
Pretty much every app, driver, and framework I've worked on across the web, desktop, and mobile has been mostly glue holding different components together. Choosing a different language would require a nearly-complete rewrite in every case except for a tiny section of independent interesting logic (and of course, if you arbitrarily picked some meme language for your "business logic" you'd have to write more glue code to attach it to the rest of your software..)
It sounds like the kind of code we're writing is very different. I'm definitely not just gluing things together, and 90% of the stuff I am gluing uses some kind of language-agnostic interface like JSON over HTTP.
Most of my code is probably glue code. My largest consideration is making things scale, which makes the glue code larger and more complex.
Anywhere you're e.g. dealing with an eventually consistent database... that is not business logic, that is all glue code that exploded due to scaling concerns.
Rust does not "forbid shared mutable state". Instead it offers both the usual usual shared references along with a new unique reference, but restricts shared references so the type being pointed to must opt into aliased mutation. Making sharing and aliased mutation explicit in the type system allows both the programmer and the compiler to reason about which variables can change behind your back when calling functions or mutating through other pointers, and which ones are referentially transparent. (Though this property does come with sacrifices to programmer convenience, namely the restrictions on Cell, the runtime overhead and panic-proneness of RefCell, and the unsafe blocks associated with UnsafeCell. Hopefully GhostCell can do better.)
Also, the article explicitly says that the Rust compiler is usually right, with - IMO - the implication that Haskell is likely usually right for its purposes too.
But people have to get mad about the one throwaway comment at the top without even reading the next paragraph, let alone the rest of the article :(
Probably good feedback for the author. People don't have time to read every article they come across fully, so they're evaluating quality as they go. Throwaway comments like this will have the effect of losing the audience, so it's best to leave them out.
I don't have anything in particular to contribute, but it struck me as interesting that the term private understanding is used here. It reminds me of the subtitle of one of the more well-known books on formal methods, The B-Book: Assigning Programs to Meanings. [0]
Of course, using a formal methodology like B, ceaseless, flawless vigilance is mandatory.
How so? You can do whatever paradigm you want in CL. There is no policing involved. Not in the sense of Haskell or Rust, at least. CL will let you modify globals or share data willy-nilly just fine.
Mixing functions and values in lists is not really a restriction either, it opens up for all kinds of shenanigans.
The deepest and most radical opinions of Lisp are:
- You should be able to implement Lisp-in-Lisp within a few pages of code. This is a profound and remarkable design constraint. (See http://languagelog.ldc.upenn.edu/myl/llog/jmc.pdf (PDF) for an introduction.)
- Syntax would only hide the fact that code is a simple data structure that can be manipulated like any other data structure.
Just like Rust's strong opinions buy you stress-free threading and bit-banging, Lisp's opinions ensure that it's an almost ideal environment for inventing domain-specific language extensions.
But Rust is poorly suited to object graphs where everything mutates everything else, and Common Lisp is poorly suited to proving that a large program obeys certain strict rules.
I love opinionated tools, but I try to choose ones with the "right" opinions for the task at hand.
> Common Lisp is poorly suited to proving that a large program obeys certain strict rules
As I'm sure you know, a Lisper would probably design a domain specific language that was well suited for such proofs. One of the projects I'd like to get around to one day is a Lisp hosted general purpose DSL[1] based on predicate transformer semantics[2] that combines writing specification and program text into a single interactive experience. Think something along the lines of smartparens[3] strict mode, where the editor doesn't allow you to make an edit that will violate the count("(") == count(")") invariant and you modify the tree structure with operations like slurp and barf, but a bit more general.
The thing is, the sea-of-objects-where-everything-mutates-everything-else programming model is fundamentally bad. It's too hard to reason about. Programs and data need structure. There certainly are structures that do not easily fit into Rust's constraints (e.g. when you're modelling an arbitrary graph), but when you choose a sea-of-objects design when not absolutely required by the problem domain (and it almost never is), you choose poorly.
Even if you do need to model an arbitrary graph (for example), where a traditional programming language would make it a lot easier than Rust does, I still wouldn't necessarily let that dictate the choice of programming language. Most likely your program will end up doing a lot more than just banging on the graph, and those parts may benefit from a more disciplined programming language.
You can structure mutation just fine by organizing those objects into hierarchies of abstraction, such that a sea of objects on one level becomes a single black box on the next. This keeps the "seas" small enough to reason about.
The object graph problem really hurts Rust imo. In C# I can just mutate and forget about ownership. In Haskell I can solve this using an immutable graph and a state monad. In Rust it gets pretty awkward.
Yea, but in those languages you have a garbage collector working in the background to clean up after you. You don’t always want to pay that cost. From time to time I work on an open source application that spends 60–70% of its time in garbage collection, at least on jobs at the extreme end of the size we can handle. I’m starting to think about ridiculous hacks like combining millions of moderately-sized objects into a giant arena. With no pointers into the arena (just indices), I can eliminate a huge chunk of the time wasted scanning the heap over and over again. But at that point, why am I writing this in Go? My Rust prototype is very incomplete, but for the parts that are implemented it uses way less memory and completes a lot more quickly. And I haven’t done _any_ performance optimization or ridiculous hacks there yet.
Reference counting has completely different properties than other forms of automatic garbage collection. In particular, reference counting still requires some amount of ownership tracking, whereas this concept just doesn't exist (for memory resources) with tracing garbage collection.
This is particularly relevant for certain algorithms, such as atomic initialization, that get significantly more complex with in-line ownership tracking.
Also, because of Rust’s wonderful type system it will be fairly straightforward to replace the string keys in this code with interned strings instead. (The number of strings we use for paths in a large repository is pretty astounding.) That might now almost be possible in Go, if I learn the new generics.
That’s true; the prototype uses Rc/Arc for things where I don’t yet know for sure how the ownership should really work out, and doubtless a fair amount of that will stay.
Rust has been my favourite language since 2013. I constantly miss its ownership model when working in other languages (which most commonly means JavaScript for me).
Certainly not everything is a good fit for Rust’s ownership model, but honestly most code benefits from it at least a bit, and steadily more and more is being figured out about how to mesh other models into it without too much pain. It’s very liberating, protecting from various mistakes that are too easy in other languages (and that’s specifically where I miss it).
Ownership modelling can require more effort initially if the problem doesn’t match Rust’s capabilities very well (or put another way, if the problem doesn’t match how hardware works without garbage collection), but subsequently it’s liberating to not need to worry about various problems that are ubiquitous in other languages.
I reckon it’s similar to the debate of static versus dynamic typing, though more subtle. For toys, dynamic typing is adequate and static typing a burden, but as you scale a system up, dynamic typing makes life harder and requires more discipline to avoid ossification or drowning in technical debt. It’s taken time, but there has been a significant resurgence in static typing as people slowly come to realise its value through bitter experience with dynamic typing. Like static typing, a strict ownership model requires more effort initially, but makes life easier down the path, and I expect that at least the basic concepts of it will be picked up in more languages new and old over time (this has started already, actually, with Swift and D springing to mind), though it’s probably a harder concept to beneficially retrofit to a language than static typing.
You may want to mutate and forget about ownership, but Rust doesn’t let you do this, and it’s for your own good ;-). In time you get more of a feeling of why it is the way it is and learn to appreciate it.
There is a fairly high bedrock of abstraction, mostly because of the compiler being available at runtime, the blurring of the distinction between compile-time and run-time, and image-based development. Common Lisp programs can be as fast as Java, but the executables are huge because they have to bundle the entire image inside. And tree-shaking is less effective because the language is so dynamic it's hard to guarantee some code won't ever be called.
And if the abstraction bedrock is high, then problem domains below that bedrock can't be approached with the language.
Yes, LispWorks has a powerful tree shaker and I think SBCL now has one as well. Arguably images could get even smaller by distributing the runtime as a shared library. I admit this is an implementation detail.
Years ago this was already a problem for some companies or organisations wanting to deploy Common Lisp software on tiny (embedded) machines or without GC. IS Robotics (later called iRobot) developed L, a small Common Lisp, for robots with small embedded computers - I think it has also been used on the Roomba devices.
There are or were a bunch of such small delivery oriented implementations: Oracle bought one years ago, Gensym has one, mocl is derived from CLICC, ...
Hello, former minor L contributor here! (I last worked on it in the late 90s, and I'm only going to talk about stuff that iRobot has mentioned publicly.)
L was actually a subset of Common Lisp. It would run pretty nicely on a machine with 1 MB total RAM and a 16 MHz CPU. All the basics were there: lambdas and macros and symbols and modules, and they all worked how they did in Common Lisp. (Or they were close enough that you could write and test in Common Lisp and then cross-compile.) There was a REPL that could run most code, and an ahead-of-time compiler.
It was a fantastically nice environment for embedded systems too small to run something like Alpine Linux. You could compile a program, load it onto actual hardware, and then tweak things interactively using the REPL. And you could invent all kinds of macro DSLs.
Of course, all this becomes moot once your system is large enough to support Alpine Linux and an SSH server. If you can run Linux, then you can use Lisp or Python or PLT Scheme or Rust or anything else you want, and you have half a million open source libraries available.
Still, L is proof that Lisp is a great embedded language for anything with a MB of RAM. You can have a pleasant development environment and tons of flexibility, with an excellent REPL.
ECL is based on earlier compilers, which go back to KCL from 1984.
The idea of CLICC was to compile programs to small static C programs. Really small and with little or no dynamic features of Lisp (code loading, runtime compilation, eval, redefinition, ...). It did not have all the bells and whistles of ECL (virtual machine, runtime code loading, debugging, full dynamic Common Lisp, ...).
The thing is, if you start with Common Lisp, it's pretty easy to write a DSL that adds the constraints and provides the guarantees that you need. If you start with Rust or Haskell, it is impossible to remove the constraints those languages impose short of invoking Greenspun's Tenth Rule and re-implementing big parts of Common Lisp.
If you start with Common Lisp it is not "pretty easy" to write a DSL that provides the constraints of Rust. You'll need to reimplement type checking and borrow checking at least, and as others pointed out, it's a ton of work to match the ergonomics of the Rust compiler. It's even more work to implement IDE support comparable to say rust-analyzer (which takes advantage of Rust type inference). (And of course this all assumes you're happy to jettison Rust performance features like absence of GC etc.)
But say you do all that work. Congratulations, you now have an ecosystem of one person. To do anything in this language you need to reimplement a standard library plus third-party libraries, or at least write Rust-type-safe glue code to existing Common Lisp libraries. If someone wants to join your project, they'll need to learn your DSL first.
When you write a bespoke DSL for a problem already solved by a programming language with a sizable community you're giving up a lot of network effects. That is something the OP completely failed to account for.
> If you start with Common Lisp it is not "pretty easy" to write a DSL that provides the constraints of Rust.
It's a lot easier than writing Rust.
> you now have an ecosystem of one person
Do you think it would have even been possible for one person to write Rust? Imagine if all the effort that went into writing Rust -- and Haskell and Swift and C-sharp and Python and Java and Javascript and all the other myriad balkanized languages that have come along since CL was standardized -- had gone instead into improving CL. Do you really think we would be behind where we are today?
Inventing a new and different syntax for every new language you design is a choice, not a technical necessity. The fact that languages have communities is survivorship bias. It's a bug, not a feature. In Common Lisp, it is possible for a single person to write something truly useful. In other languages, it is not. This is not to say that Lisp's lone-wolf culture is desirable; it's not. But that is NOT a reflection of any technical problem with Lisp. It's a political and cultural problem. And the first step to solving that problem is getting more people to recognize it for what it is.
> Do you think it would have even been possible for one person to write Rust?
No, but I don't think it would be possible for someone to implement an S-expression Rust dialect on top of CL either.
> Do you really think we would be behind where we are today?
Hard to say. The counterfactual world where everyone decided to build on top of CL is so weird it's hard to say anything about it. Also, "improving CL" is an unclear term. For example if you want to replicate all the benefits of Rust by "improving CL" then that means, among other things, you need an implementation of "CL" that can compile your Rust-equivalent subset/superset of CL to bare metal with no runtime (including no GC). That implementation will take advantage of Rust ownership and other properties thus will not compile regular Lisp code. If you have a separate implementation for a special dialect of CL with distinct static and dynamic semantics from normal Lisp, it is only very loosely still "CL".
> In Common Lisp, it is possible for a single person to write something truly useful. In other languages, it is not.
This is an absurd overgeneralization that makes you sound like a nut.
You're right. Allow me to rephrase: in CL it is easy to write a compiler for a new language because you don't have to write a parser or a back-end. It is so easy that it is a plausible task for one person to write such a compiler and make it truly useful, and indeed there are several extant examples. In other languages it is much harder.
> The thing is, if you start with Common Lisp, it's pretty easy to write a DSL that adds the constraints and provides the guarantees that you need. If you start with Rust or Haskell, it is impossible to remove the constraints those languages impose short of invoking Greenspun's Tenth Rule and re-implementing big parts of Common Lisp.
This surely cuts both ways: if you write a DSL that adds the constraints and provides the guarantees that you need, you've re-implemented big parts of Rust or Haskell.
Or maybe you only need a tiny fraction of the constraints and guarantees that they provide. But it's likely that you'll need more over time, and then you wind up with not just a re-implementation but a gradually accreted, rather than coherently implemented, one, and that's unlikely actually to provide the guarantees it's meant to provide.
> you've re-implemented big parts of Rust or Haskell
Maybe. Or maybe all I had to do to turn CL into Haskell is implement the Hindley-Milner algorithm. I didn't have to write a parser or a compiler because I can just re-use those from CL.
I used to use CL quite a bit, but have since abandoned it for Haskell, so I'm a bit biased.
There's a number of issues with that:
- I'd be missing all the optimizations that can be performed due to purity.
- There's more to Haskell's type system than just vanilla Hindley-Milner, and the implementation of it isn't particularly trivial. https://github.com/stylewarning/coalton is the closest thing and it's still missing a large amount of the type system.
- Doing the implementation would be a significant amount of work, and it would end up as a layer tightly glued on top instead of integrated with the language. I've seen many good DSLs embedded in Lisp, but a type system is hard to embed in any language because it changes fundamental semantics of the language. Typed Racket is a massive project and it's lacking things like ADTs.
- A major part of Haskell is the standard library, a good chunk of the semantics of Haskell people use on a day to day basis, like monads and etc, are a part of the standard library.
> I'd be missing all the optimizations that can be performed due to purity.
What I'm advocating here is not retrofitting Lisp but embedding DSLs within Lisp. In a DSL you have complete control over the semantics, and so you can do all the optimizations that Haskell does.
> coalton is the closest thing and it's still missing a large amount of the type system.
Note that this was written by one person. You have to distinguish between what has actually been done given the current state of the world and what would be possible if people made different choices. If a tenth of the effort that has gone into implementing Haskell had instead gone into implementing a Haskell-equivalent embedded in CL, that effort would plausibly be competitive with actual Haskell if not superior.
Rust didn't have to do those things either, parser generators and LLVM meant that much of the work was done for them. So they spent their time on borrow checking.
Now they are reimplementing some of those things for performance reasons, but you're going to have a hard time convincing anyone that a Lisp-based Rust you wrote is going to be more performant than the one that exists today, so they'd need to make those optimizations anyway.
And even if it does, that misses the point. You still have to put in effort to design and write the grammar. If you embed a DSL with Rust's semantics within Lisp you don't have to do that. You just use S-expressions and use the Lisp reader to parse them.
You're assuming that creating a fully featured dsl that is ergonomic enough will be vastly simpler. "Just" is doing a lot of work, and hiding complexity, as it often does.
> If a tenth of the effort that has gone into implementing Haskell had instead gone into implementing a Haskell-equivalent embedded in CL, that effort would plausibly be competitive with actual Haskell if not superior.
It's such a wildly ridiculous claim though. It implies either
1. that the syntax of Haskell is the majority of the work, such that using a dsl would remove 90% of the effort.
2. Somehow a dsl for Haskell embedded in a lisp would be more efficient to work in, so much so that development would be an order of magnitude faster.
1 is absolute nonsense. 2 might be true in a sense. I doubt it, but even if it were, such a language would no longer be Haskell, so you'd lose out on the value you gain from its syntax. The same is true for Rust.
S-expressions are nice, but they aren't always the best way of structuring things, and if you're going to forego them to better embed another language, why constrain yourself to the lisp runtime?
You can create a DSL for a team working on a specific problem. Lisp is good at that. What I have not seen is a DSL that anyone else wants to use.
Someone builds a DSL, a small team uses it. It just has to meet the needs of that team. It doesn't have to be bulletproof or elegant. It can have all kinds of rough corners. It can take undocumented tribal knowledge to know how to use it.
The comparison to Rust is therefore almost completely misleading. It's like comparing Linux in 1992 to a commercial Linux distribution today.
Yes, that is a totally fair point. But Linux is where it is today because some people in 1992 looked at Linux in the state it was in at the time and decided that it had potential. If that hadn't happened, no one outside of a small group of hackers would ever have heard of Linux. The idea that embedding languages in CL is a bad idea is likewise a self-fulfilling prophecy.
I'm not sure that it would be "pretty easy" to write a DSL that adds the constraints and provides the guarantees that Rust does. I'm not sure that it's that easy to get it consistent and bulletproof.
Yes, you can't remove the constraints of Rust or Haskell (other than using unsafe, I suppose). But if those languages have the constraints that you want, then trying to write that as a DSL instead is... probably hubris. Also wasteful.
> I'm not sure that it would be "pretty easy" to write a DSL that adds the constraints and provides the guarantees that Rust does.
It would be a lot easier than inventing Rust, for two reasons:
1. You don't need to write a parser.
2. The target language for your DSL compiler can be Common Lisp rather than machine language without sacrificing performance.
Those two things make it absolutely certain that however much work it is to re-invent Rust in CL it will be strictly less work than inventing Rust from scratch.
There are other benefits to this approach as well. For example: if, after implementing your DSL you discover that you've made a sub-optimal design decision somewhere along the line, it will be a lot easier to tweak your DSL than it is to tweak Rust.
One of the things people really like about Rust is its fantastic error messages. They're very helpful.
But, having decided you "don't need to write a parser", you're in a tough place for your DSL, since there are problems where Lisp's parser will just tell your DSL programmers "something went wrong here" and leave them clueless as to what exactly is wrong in their program. Rust wouldn't have done that, but of course its compiler owns the parser too, so it gets to handle all the corner cases correctly.
You can add descriptive error messages when an error happens (like multiple mutable borrows) during macro expansion. Macros are just regular Lisp code, so all runtime features like printing functions are available.
You could even write an FPS using OpenGL where shooting functions, or pieces of functions, removes them from the current compile job (or even deletes them from the source code entirely). In a macro.
I mean, that's nice, Rust's proc macros get to emit descriptive error messages† too, but for that to happen the macro expansion occurred and that means we didn't have a parse error. So you're dodging the problem.
† Or of course, to deliberately not emit descriptive error messages, e.g. whichever-compiles!
Or maybe using Rust is an instance of the sunk-cost fallacy. It's possible that the net costs of living with Rust's constraints are larger than the cost of re-implementing it in CL. This has undoubtedly been the case for C and C++. The cost of living with C's constraints (or lack thereof) is incalculable, surely billions of dollars, possibly trillions if you add up the costs of all the buffer overflow vulnerabilities that have cropped up through the ages. You could probably write one hell of a CL compiler for a billion dollars.
This is why Apple, Google and Microsoft have finally started to (slowly) move away from them (or push mitigation strategies hard). With the increase in networked devices, fixing those CVEs is finally making a visible difference in what would otherwise be profits.
Common Lisp does have a "proper strong type system". What it doesn't have OOTB is static type checking at compile time. Those two are orthogonal issues. Although with CLOS (the object system) you get pretty close. Some compilers, like SBCL, have type inference, and you can declare the types of variables and function arguments. There are also libraries that take this much further to make it work more like a statically typed language. There are even libraries that add things like dependent types and other such crazy things.
Note that static type checking can be difficult to work with when using the prominent interactive programming style, since the types aren't always well-defined and/or are in flux while writing and adding new bits and pieces of code, redefining types, adding struct/object members at runtime, etc.
I’m curious, but my reading of your comment essentially boils down to ‘the actual data in my code is vanishingly rare’. Am I reading that right? Is the comment actually saying that in any given program most of the data is in some structure, but compared to the rest of the code it is almost insignificant? I’m not judging or contradicting, but if my reading is right, that’s the first time I’ve heard anyone make that point in such a straightforward manner.
> In my experience, Common Lisp is a deeply opinionated language, and its most dramatic opinion is that "your code and your data should have identical representations and structures." And for the right problem, that restriction is extremely powerful.
Code in Common Lisp is almost entirely in a tree structure via nested linked lists. However, when programming in Common Lisp, I don't use this specific data structure very often for things that aren't code. Furthermore, many other Common Lisp programmers write software this way as well, which is a bit of a counterexample to the assertion I was replying to.
[edit]
I think an opinion Common Lisp does have is that code should be easily manipulable as data. As much as Scheme programmers like to call CL macros "unhygienic", it's far easier to write a correct general-purpose CL macro than a correct macro in other languages.
Lisp has a lot of functions for working with linked lists, which do double duty as functions for working with parsed code. But optimized Lisp tends to use structs and arrays and hashtables and avoids creating a lot of lists at runtime.
> Languages which impose weird restrictions tend to do so because it allows them to offer (nearly) ironclad guarantees about something else.
Yes, but oftentimes this is more like:
"If we tie you to this bed, you will never have a car accident! You'll also never get lost! This also makes it easier to stay healthy as we'll be feeding you the best food!"
Sure, but I'll also miss all kind of activities I could be doing outside on my own. I'll also get bed sores. And I often want a steak or ice cream, even if it's not ideal, but you insist bringing me this organic gourmet vegan crap.
No one sets out to create programming languages that tie people to their beds. If people add restrictions, it's not to make users miserable.
It's fine if you like being unrestricted, I like that too.
However, when you see people going out of their way to add barriers to their own work, you should assume they reap a benefit that you're not fully appreciating, not that they hate fun and want to outlaw ice cream.
> No one sets out to create programming languages that tie people to their beds.
People can still end up designing such languages, even if they didn't set out with that specific goal.
(Kind of like how you can bring war and division, even if your goal is to bring some religious peace on Earth or "the revolution").
> If people add restrictions, it's not to make users miserable.
That's already explicit in my comment, though. They didn't tie the person to their bed to make them miserable, but to spare them from car accidents, to help them never get lost, and other such niceties!
There's a reason they were called "bondage and discipline languages". They created barriers for what they assumed to be your own good. But as Edward Abbey said, "When I see someone coming to do me good, I reach for my revolver." That is, when someone wants to make decisions for my good, they're deciding for me; that is, they're taking the decision away from me, because they don't think I can be trusted to choose the "right" thing. And then I'm supposed to feel grateful for the help instead of resentful for being treated like an idiot or a child!
They mean well. Fine. I still don't have to like it, and I don't have to accept it.
And frankly, in case of a complex program, you can’t be trusted to choose the right thing, as the human brain simply doesn’t scale with program complexity. We need help from the computer to avoid bugs for anything non-trivial.
For example, as has been shown countless times, C programmers are unable to properly free allocated memory, so solutions were made (GC, or Rust’s ownership). It is egotistical to think you’ll be the exception.
But they can't be trusted to choose the right thing either. Wirth, sitting in his office at a university in Switzerland, has no idea whatsoever what problems I'm facing trying to use un-extended Pascal in an embedded system with memory-mapped I/O. (No, it wasn't my choice to use Pascal without extensions in that environment.)
The point is, you don't have enough information to choose for me. You can't do it. Even if you can absolutely, 100% correctly decide what the "right thing" is in any given situation, you don't know all the situations.
(And if you'd say "the right thing is to not use Pascal in that situation", I'd agree with you. But as a language designer, your language will get used in places where even you may think it's not a great fit.)
I don’t think it takes bad faith to come to GP’s conclusion. Further, it implies a great deal of superiority for you to assume that someone who does not like/enjoy the restrictions of a language does not ‘fully [appreciate]’ the supposed benefits of those restrictions. I, for one, do not enjoy or value the restrictions (or underlying semantic concept) of ownership in Rust. It just means that I don’t enjoy it, I understand the benefits which are claimed to exist for adopting the Rust-centric view of programming. I just don’t think they are worthwhile for me. I’m not really disagreeing with the point you make about language designers intention, I doubt they set out to inflict misery on users, but your assumption about not understanding some alleged benefit is a bridge too far.
The point is not whether they are worthwhile to you, but to the problem space the program solves. Further that includes the context of the organisation/developer that will own it going forward.
I'd probably liken it to traffic controls, like lanes with lines, stop lights, stop signs and other formalized rules of driving.
Yeah, it might suck that you can get a ticket for treating a red light like a stop sign at 6am in the middle of nowhere. Though this is me speaking from a "likes Rust" perspective.
Could you please describe a case where this is actually happening in the languages people tend to imply this is happening in?
Rust has unsafe, Haskell has unsafePerformIO. You might not like how verbose some of the stuff is but all these languages give you a foot gun if you really want it.
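For instance, a minimal sketch of the Rust side of that escape hatch:

    fn main() {
        let x = 42u32;
        let p = &x as *const u32; // creating a raw pointer is safe...
        let y = unsafe { *p };    // ...but dereferencing one requires an
                                  // explicit unsafe block you can grep for
        assert_eq!(y, 42);
    }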
Why bad faith? Most of the criticism he makes of Pascal was sorted out by Pascal dialects.
One can argue those don't count, as dialects aren't the main language, yet by 1981 C outside UNIX was mostly available in K&R dialects, like Small C.
And besides those Pascal dialects, Modula-2 standard was released in 1978, while Mesa was powering XDE and Bravo at Xerox.
However, the author assumes that it follows if you've Noticed The Skulls you would choose to do it differently next time. That's not necessarily so.
Arthur Laffer is an economist. Laffer has helped provide advice to numerous Republicans and Republican projects which of course didn't achieve their stated goals (e.g. Trump's massive tax cuts for the wealthy which Laffer claimed would grow the US economy by 6%).
I'm confident Laffer has Noticed The Skulls, but why should he care? Advice that destroys the US economy but further enriches those who already have more than enough worked out great for Laffer.
Not particularly. The title would be pretty awkward if it were Yes, We Have Noticed The Skulls, Condemned Them, And Promised Not To Do Them Again, but the body of the article describes exactly that. Has Laffer said "mea culpa" and made significant and visible efforts to prevent the skulls from happening again? If not, then that's not what Scott's describing.
If you felt despair when you read the metaphor of totalitarianism and just want to get away from languages that want to grind an axe on you, and Lisp is not your cup of tea, then Raku and Perl welcome you.
Yes, and as such perfectly isomorphic to the resistance the programmer feels when working e.g. to satisfy the borrow checker or strict type systems.
This is not about some "technical" impossibility - languages still have escape hatches like unsafe and "non-idiomatic" ways of coding.
But to focus on the technical and ignore programmer experience ("feelz") in that area, is like equating Idris and Forth because "they're all turing complete and it all ends up as binary code anyway".
Not really, as a helmet, seatbelt, and gloves are passive objects you just wear and continue as previously, engaging their protection only in case of accident.
Whereas working with the borrow checker, or a Haskell-like type system, changes the experience and the method of programming itself, forces certain patterns, and so on.
When I was younger I used to write a lot of Common Lisp. A lot. And I also had this idea that Haskell/OCaml/Scheme were strict, ivory tower totalitarian languages, while Common Lisp was this incredibly powerful, expressive, liberating tool. For building organisms, not pyramids, like Sussman said. And the S-expressions were aesthetically appealing to me. But at the time I was working mostly on smaller commercial projects and solo open source projects.
Maybe it's cognitive decline from no longer being a teenager, or the experience of working on larger, O(100,000) line codebases with large teams, but nowadays I find a high degree of dynamism just exhausting. I don't want more expressive power, I want fewer nightmares.
A common problem I faced with Common Lisp is I'd write some module, then go on to write another module that depended upon the previous one, and get an exception thrown from the first module. Usually a type error. And I'd have to go back, context-switch, fix that, and climb back up to the other level.
With ML/OCaml/Haskell that is far less common. Being able to look at a module and say, "this is done and it's correct", is a very powerful thing. Confidence, rather than safety, is the primary benefit of static typing for me.
And I find that I no longer use the REPL. I've been working on a compiler in OCaml and for some reason Dune won't run utop (lol) so I've just not been REPL'ing and it's not been a problem. The code typechecks. Things work on the first run. If I change something, I get notified what needs updating.
The problem with interactive development is that it's like unit testing: it can prove the presence of bugs but not their absence. Type systems can eliminate large classes of bugs ahead of time.
Just so this is not entirely negative or depressing: there's something beautiful about how maximalist Common Lisp is. It's a big, messy language with every programming paradigm (and you can invent new ones) and different naming conventions in the core language itself. I was learning new things about Common Lisp years into using it productively. And I compared the experience to moving to a huge stately home, that has a room for the piano, a vast attic, a wine cellar, all manner of things, and then trying to move back to a shoebox apartment. Where do I fit the piano? CLOS, the MOP, the Lovecraftian beauty of LOOP and FORMAT: it's like a wild garden next to the (comparative) spartan minimalism of practically everything else. And it's beautiful.
> These "big idea languages" tend to assert a kind of "programming ideology" over reality, and tend to, in my opinion, distance you from the thing you are trying to do, often, they claim, "for your own good" ... "You want to give HOW many objects a mutable reference to this value?" complains the Rust compiler. "Are you insane? You are a Rustacean, you don't do that kind of thing!"
I agree with the spirit of this line of thought: in general I favor pragmatism over zealotry, and I think it should be up to the programmer to determine how a tool should be used, not the tool itself.
However when we talk about something like the safety constraints imposed by Rust, it's more about citizenship than it is about the compiler being overly opinionated. By baking those constraints into the language, I can have confidence when I use third party code that it reaches at least some standard in terms of memory safety and thread safety. Much in the same way as a strict type system, those constraints free me up to not have to think about certain types of issues.
If I am writing code which only I will consume, I'm also free to use escape hatches to sidestep these constraints when I know they're not relevant.
I've written tens of thousands of lines of Rust by now, and I'm still not convinced the Rust model is the right one in terms of achieving its goals, but I think the approach is "reality based" with respect to avoiding some of the problems which can occur while developing software in collaboration with others.
In my view it's very pragmatic to leave dealing with ownership to the compiler. It's something that you have to deal with in any language (even if you have a GC) if you want to write correct code, but it's not something most programmers truly prioritize most of the time. Even those who do think about ownership can't get it right 100% of the time because they're human and get tired. Keeping track of any kind of state is a constant drain on mental resources.
Therefore, having the compiler deal with ownership correctly most of the time and maybe get in my way a bit some of the time when it's actually wrong (most of the time it isn't) is a tradeoff I'm happy to make.
I agree. The point where I'm not totally convinced on Rust's model is when it comes to lifetime semantics.
People talk about the problem of colored functions when it comes to async, but in my experience the real issue you run into with Rust is that once you need to introduce an explicit lifetime into a data structure, the complexity of everything touching it starts to balloon out of control.
This creates all kinds of complications, creates problems with automatically deriving the Future trait, and leads to workarounds where people end up passing IDs around everywhere instead of references in a lot of cases, which can effectively be a way of circumventing the borrow checker.
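A small sketch of the ballooning (hypothetical types, purely illustrative): the moment one field borrows, every type that stores it grows a lifetime parameter too:

    // One borrowed field...
    struct Token<'a> {
        text: &'a str,
    }

    // ...and now everything that stores a Token is generic over 'a:
    struct Parser<'a> {
        current: Token<'a>,
    }

    struct Session<'a> {
        parser: Parser<'a>, // and so on, all the way up the program
    }

    fn main() {
        let src = String::from("let x = 1;");
        let session = Session {
            parser: Parser { current: Token { text: &src[0..3] } },
        };
        println!("{}", session.parser.current.text);
    }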
I don't know what the answer is, but sometimes Rust feels like it's an abstraction or two away from really solving the problems it wants to solve in an elegant way.
> I don't know what the answer is, but sometimes Rust feels like it's an abstraction or two away from really solving the problems it wants to solve in an elegant way.
I feel like it's not unreasonable to say that some parts of Rust are impressive because they make hard things ergonomic, and others - so far - are mostly impressive because they make really hard things possible at all.
Or: I share your intuition, but assuming there even -is- an answer, I think at the very least Rust has provided a great service in figuring out what the right questions are.
> people end up passing IDs around everywhere instead of references
Well, a reference (or a pointer) is just an index into a giant global and mutable array of bytes, which is pretty crazy!
Passing around indexes into specific arrays of objects is a lot less crazy. The indices are stable even when you reallocate the array, for example. Think of them as offsets, rather than pointers.
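For instance, a minimal index-based graph sketch (names are mine): the Vec owns every node, and edges are plain usize offsets into it, so cycles need no borrow gymnastics:

    struct Node {
        value: u32,
        edges: Vec<usize>, // indices into `nodes`, not references
    }

    struct Graph {
        nodes: Vec<Node>,
    }

    impl Graph {
        fn add_node(&mut self, value: u32) -> usize {
            self.nodes.push(Node { value, edges: Vec::new() });
            self.nodes.len() - 1 // the index is the "pointer"
        }

        fn add_edge(&mut self, from: usize, to: usize) {
            self.nodes[from].edges.push(to);
        }
    }

    fn main() {
        let mut g = Graph { nodes: Vec::new() };
        let a = g.add_node(1);
        let b = g.add_node(2);
        g.add_edge(a, b); // cycles are fine; nothing fights the borrow checker
        g.add_edge(b, a);
        assert_eq!(g.nodes[g.nodes[a].edges[0]].value, 2);
    }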
Yeah, for me it's more around the ergonomics (and imprecision) of using indexes.
It's a perfectly workable approach, but passing around offsets feels like I'm breaking the contract a bit. The compiler doesn't know which index goes with which data structure, I'm just asking the compiler to trust me that I'm pairing them correctly.
Also it tends to be more boilerplate than just having a direct pointer stored inside the data structure.
Don't get me wrong, I understand all the problems with pointers, but the UX is better in a lot of cases.
For languages that have explicit memory management and concurrency (e.g. Rust) having compiler tracking of ownership, or equivalent, is an absolute necessity in the modern world.
But I'd argue most languages don't need ownership. GC is fine. Most of the problems we deal with in commercial software development are succinctly expressed in GC'd languages, and the benefit of using a language that explicitly tracks ownership is greater performance from being able to be closer to the metal.
You need ownership tracking even in languages with GC because you will be dealing with resources that are more than just memory, and with those GC isn't really enough. A GC alone doesn't really help you deal with concurrent access, either. If you don't have a borrow checker, you will be essentially doing the same work manually as you're reading and writing code.
Rust's ownership tracking is not only about memory safety. It can also help you with other kinds of resources whose ownership you can encode using the type system.
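A small sketch of that idea (hypothetical Connection type, mine): wrap the resource in a type with a Drop impl, and the single-owner rule gives you guaranteed, exactly-once release:

    struct Connection {
        id: u32,
    }

    impl Drop for Connection {
        fn drop(&mut self) {
            // runs exactly once, when the single owner goes out of scope
            println!("closing connection {}", self.id);
        }
    }

    fn main() {
        let conn = Connection { id: 1 };
        let moved = conn;           // ownership transferred...
        // println!("{}", conn.id); // ...so this would no longer compile:
        //                          // use of moved value `conn`
        println!("using connection {}", moved.id);
    } // `moved` dropped here; the connection can't leak or double-close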
Yes indeed, I'd like it if other languages had ownership and borrowing. Hell, I'd like it if a language could actually achieve linear types without holes or being completely unusable.
They could allow more relaxed GC'd normal types, whether default or opt-in, but affine and linear typing are genuinely useful properties for reasoning about code, from both correctness and performance perspectives.
One needs to look no further than the string slicing problem for that to be clear. It's not an issue in Rust, but in every GC'd language you have to pick between:
1. eagerly copying the slice, which incurs additional, often unnecessary, CPU and memory costs, but avoids
2. every substring operation which isn't defensively copied being a potential memory leak as you're slicing 3 characters off of a 5MB string on every HTTP request and storing that in a global map, which turns out to store the entire 5MB string you got it from
Or some even more complicated magic where you only perform the copy if the substring escapes, at which point your performance and memory characteristics become wildly unstable and the mere act of extracting a function can bring the entire system to its knees.
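In Rust the borrow checker makes that tradeoff explicit, as in this sketch (my example): a &str slice borrows the big string and can't outlive it, so keeping those 3 characters around forces a visible, exactly-3-bytes copy:

    fn keep_for_later(big: &String) -> String {
        let slice: &str = &big[0..3]; // zero-copy view into `big`

        // `slice` borrows from `big`, so it can never outlive it; storing
        // the slice somewhere longer-lived is a compile error. To keep the
        // 3 characters past `big`'s lifetime, the copy must be explicit:
        slice.to_owned()
    }

    fn main() {
        let stored = {
            let big = "x".repeat(5 * 1024 * 1024); // a 5 MB string
            keep_for_later(&big)
        }; // `big` is freed here; `stored` keeps only 3 bytes
        assert_eq!(stored.len(), 3);
    }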
The idea is to start with a simple, easily understood linear type system and then add borrowing, but without going too far in the direction where the type checker becomes a tower of heuristics and conveniences so that programmers can write normal-seeming code which magically typechecks.
So you can learn how to write linear code from reading a set of linearity rules, rather than from trial-and-error against the linearity checker.
A look at your anti-features list reminds me somewhat of Zig. Obviously the polymorphism is not in the Zig wheelhouse, but there are some similar-seeming paths running through the readme. Your comment had me expecting a Haskell/ML-presenting language, but now I’m wondering. Where do you see the syntax going?
Have you ever thought about putting a simple code example right in the README? Maybe it's too early days, but I know that's always the first thing I'm looking for when I stumble across a new language.
Yeah I should get around to doing that, but right now there isn't much code that succinctly showcases the linear typing features. Probably have to implement some of the stdlib for that.
You need some way of managing ownership, but it doesn't necessarily need to bubble up to the user level. For instance Swift is going in the direction of leaning heavily on value semantics, plus the actor model to wrap concurrent data, to create a safe environment where the user doesn't have to think about ownership at all.
Of course there's tradeoffs involved, but I think there are multiple approaches to achieve safety and user-space ownership constraints are only one of them.
Most GC languages invented before Java went mainstream had value semantics as well, actually, and this is one of the language defects that .NET improved on over Java.
Many forget that MSIL is like LLVM, having all the semantics needed to compile C++, alongside a GC.
Some of those capabilities weren't exposed to the other .NET languages; however, since C# 7 they have been improving on that.
As of C# 10, there is little left to do versus Modula-3 or D.
> the very special case of multithread code accessing in process data structures.
So, the thing this gets you, as the Nomicon explains, is that you're data-race free, and so you get Sequential Consistency (in Safe Rust) within your program.
After decades of computer programming, our experience is that humans need Sequential Consistency to think about anything that's too tricky to just write down a complete list of all cases. This is terrible news if you write machine code, since your modern CPU doesn't bother supplying Sequential Consistency, preferring to go faster instead; but it's also very bad news in many languages (C++, Java, Go) where the promise is SC/DRF but you're on your own to supply the necessary data race freedom (in fact in C++ it's worse: if you can't supply data race freedom you get Undefined Behaviour).
The traditional (and, especially on Unix, well rewarded) solution is to give up and only write serial code. Then your program doesn't have concurrency, so it will be Sequentially Consistent. When the average computer didn't have pre-emptive multitasking this felt like a pretty sensible way to write programs. Even once the average computer did have pre-emptive multitasking (and so threads were a nice win for some problems), it did not permit simultaneous execution, so most programs weren't slower for being serial. But today lots of people even own a smartphone with more than one CPU core.
Hence Rust's Fearless Concurrency. Instead of an hour-long tutorial full of caveats and generalisations (to write parallel algorithms in C++), you can confidently write your Safe Rust with concurrency and the compiler will reject any fumbling attempts that introduce data races. Instead of needing to call Sarah the grizzled concurrency expert for any changes to functions in scary-but-faster.cpp, you can let Bob the new girl modify the code in faster.rs, confident that either Bob's changes are actually safe or you'll spot the awful mess she made trying to pacify the borrow checker during code review.
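A minimal sketch of what that looks like in practice; the sharing and locking have to be spelled out before the program compiles at all:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Mutating a plain `let mut counter = 0;` from several spawned threads
    // is rejected at compile time; the data race never makes it into the
    // binary. The version that compiles spells the sharing out:
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // always 4, never a torn update
}
```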
Yeah, but that is a small subset of distributed computing.
If you have multiple threads inside the same application talking to the same database, and modifying tables without proper SQL transaction blocks, anything goes and the borrow checker doesn't help one bit.
Additionally, if those multiple cores are being used by multiple processes, microservices style, there is also very little it can do to fix IPC data races on shared OS resources.
The ownership model, like any model, can't help you if you decide not to reflect important facts about your world in the model.
But Rust will enforce constraints your model reflects. For example it's common in embedded computing to reflect hardware resources as singletons. If the firmware_updater is using the only SerialPort then it isn't available for my doorbell_noise_generator, I can't just make another one and ruin everything by reconfiguring the serial port that's currently moving firmware program code.
Rust has no idea what a SerialPort is, but the programmer provided no way to make any more of them, and the only one that existed is owned by firmware_updater right now, so too bad you can't get one for doorbell_noise_generator and the device doesn't get bricked by a user pressing the doorbell button while doing a firmware update.
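A minimal sketch of that singleton pattern (the API is illustrative; embedded HALs do something very similar with a one-shot `take()` on the peripherals struct):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// The only way to obtain a SerialPort is `take()`, which succeeds at most
// once per boot; whoever holds the value owns the port.
pub struct SerialPort {
    _private: (), // no public constructor
}

static TAKEN: AtomicBool = AtomicBool::new(false);

impl SerialPort {
    pub fn take() -> Option<SerialPort> {
        if TAKEN.swap(true, Ordering::SeqCst) {
            None // already owned elsewhere
        } else {
            Some(SerialPort { _private: () })
        }
    }
}

fn main() {
    let firmware_updater = SerialPort::take();
    assert!(firmware_updater.is_some());

    // The doorbell can't conjure a second port while the updater holds the
    // only one, so it can't reconfigure the hardware mid-update.
    let doorbell_noise_generator = SerialPort::take();
    assert!(doorbell_noise_generator.is_none());
}
```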
If your application needs database consistency beyond what your chosen RDBMS actually promises with non-transactional updates, you should definitely provide Rust access to that database only via transactions that preserve the required consistency. If your RDBMS is so feeble it can't cope with more than one transaction in flight, you may need to make those singletons too. The ownership model is enforced by Rust and you'll do fine.
As I said, humans can't cope with anything other than sequential consistency at scale when reasoning about systems. If you've built a complicated "microservices style" system that doesn't actually have this, the humans supervising it don't understand how it works and sooner or later (but likely sooner) it will do something that is entirely outside their model of what's possible and they've no idea how to fix that.
Remember, Rust is not an innovator here. It's applying lessons long understood in academia to an industry which had been infested by Real Programmers and couldn't understand why everything is now on fire.
I agree. The important thing for the other cases is to be pragmatic and use the escape hatches. Use a Mutex even though it’s not necessary and accept the runtime cost, or use an unsafe pointer, but get the job done. I’ve seen the phenomenon in both Haskell and Rust now where people would introduce way too much complexity for a relatively simple problem to conform to some sort of ideology.
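For instance, a minimal sketch of one such escape hatch: `Rc<RefCell<T>>` buys shared single-threaded mutability by trading the compile-time borrow check for a cheap runtime one:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared, mutable, single-threaded state without restructuring the
    // program around ownership: the runtime borrow check in RefCell
    // replaces the compile-time one.
    let log = Rc::new(RefCell::new(Vec::new()));

    let writer = Rc::clone(&log);
    writer.borrow_mut().push("job started");
    writer.borrow_mut().push("job finished");

    println!("{:?}", log.borrow()); // ["job started", "job finished"]
}
```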
> These "big idea languages" tend to assert a kind of "programming ideology" over reality, and tend to, in my opinion, distance you from the thing you are trying to do, often, they claim, "for your own good".
Another angle on this for a language like Rust is that Rust is designed the way it is because it’s favouring reality over programming ideology, distancing you from the thing you’re trying to do when it’s something that doesn’t map to the reality of how computers work very well—and yes, that reality does end up leading to ideology, but Rust is the way it is because of reality more than because of ideology.
But then Lisp! Why, Lisp is the epitome of programming ideology over reality! Ask me to name a “big idea language” under the provided definition and the first language (well, family of languages) that springs to my mind is Lisp. It’s not just unopinionated, it’s aggressively unopinionated, which is an opinion, an ideology (to say nothing of the doctrine of S-expressions), and one that flies in the face of how computers actually work; so that you pay at least a performance price to use such languages, and often other prices too, like with threading, as ekidd’s comment mentions.
The problem with REPL-driven development is that when taken too far it makes you capable of creating programs that simply cannot be understood by someone without the "lived experience" of the path taken to arrive at the solution in the REPL.
The very property that makes it liberating - that you can instantly try things out and see mistakes - allows you to move up a level or two in the complexity of what you can do. But the future readers of your code don't have this advantage.
While you can apply discipline and use it to work to find the simplest, clearest solution, there is nothing that forces that and you can just as easily arrive at something that looks and works like magic to someone coming to it fresh.
I don’t know. I can write incomprehensible code in any language. It is to some extent the default. I have to work hard so as not to be the poor future reader (myself) in 6 months thinking “where do I even begin…?”
I find the repl encourages me to iterate quickly, and when I set out to solve a problem, one of my goals is to express the solution in a nice way.
The repl just reduces the cost of me trying different ways to achieve that. Whether I leave behind horrible or nice code isn’t really a factor of the repl; it’s a factor of whether I started with an intention to write understandable code. I usually do, and the repl’s instant feedback and state retention make that vastly easier.
Sometimes I don’t set out with that intention though. I frequently have to process lots of text, and regexp is usually just the easiest way for me, even though I accept it’s utterly impenetrable to many once you go beyond 10-20 chars or so in an expression. No repl involved in producing fairly impenetrable code.
I’ve been reading the book ANSI Common Lisp by pg, and decided to work through some of the exercises that piqued my interest.
Similar to what you are talking about here, I worked on one of the problems for a while. Not in a REPL, but in a similar manner. And I arrived at a solution that is not advanced or anything, but which I was quite satisfied with. It occurred to me then that my solution to the problem I was working on really only made sense because I had worked through the problem.
For this reason I left a printout of two of my hash tables in the code. When you see what data they actually hold, it becomes quite obvious what the code is doing, whereas without those two printouts you’d have to “simulate” in your brain what the code is doing.
But this also ties into a more general problem in software development, which is that even though our tools are becoming really really good and our computers really really fast, there is still a long way to go in terms of how deeply and how completely we can really see what’s going on.
I was thinking about this recently when I was at the dentist having a tooth pulled. It was not a fun experience, but even so I saw that the dentist had both the skill and the tools for the operation. And in particular, the tools at his disposal allowed him to understand and to operate on my teeth. That got me thinking once again about the lack of visibility that we have when we develop software.
> But the future readers of your code don't have this advantage.
Future readers also don't have access to my brain state when I'm implementing the program, whether in Lisp or in Haskell or in C or in Python. This is kind of a nonsensical point. Iterating on a solution in Java or in Lisp amounts to the same thing: code in a file. There's nothing keeping you from documenting.
> The problem with REPL-driven development is that when taken too far it makes you capable of creating programs that simply cannot be understood by someone without the "lived experience" of the path taken to arrive the solution in the REPL.
Can you provide an example for this? Intuitively I'd say you are right, but, OTOH, even after 20 years of CL-hacking (which probably does more REPL-driven development than any other language), I cannot come up with an example...
I have, once or twice, ended up in a position where "code saved in file" would load perfectly fine in my hacked-in image, but fail to load (or compile) in a fresh image.
This is (usually) a sign that I forgot one function or macro in the code-on-disk, that I hand-chased in the REPL. In neither case was it hard to rectify, but it did leave me a bit confused. And very thankful that I had not quit the main dev image.
After that, I have explicitly added "add a test suite, run test suite in fresh image" as a technique to keep that from persisting for a long time. And simply doing that seems to keep me subconsciously aware of the problem, so it doesn't appear in the first place.
I would argue something else: the repl invites you to write functions that take all their state as input, because that makes repl work easier. That means figuring the code out can be the same as writing it.
Now, sure, you can write a mutating stateful mess using the repl as well, but most code I have come across in CL is not like that, except for maybe the odd times you find java-in-cl.
It is not the same thing, of course, but it is tangential. Pure functions are something you can write in any language, but languages where you can so easily develop/try smaller chunks make them a particularly good idea. Trying a piece of Java code usually means running a lot more code that sets up your state for you. How I, and most people I know, use a lisp repl does not.
This is only a problem when the image diverges from the source. I've heard of cases where precious functionality existed only compiled in the lisp image, without any source code, because it was programmed and debugged directly in the REPL. But this is a mistake of a programmer misusing a powerful tool, not a good practice.
Emacs makes doing this too easy, though. I prefer slimv, which has no extra repl buffer. It forces you to edit the sources and invoke the REPL with snippets from there; ideally, the whole file stays compilable all the time. Or at least to use a scratch file, to later get back to and refactor. Any programmer bitten by their own unreadable code should learn to do that and resist the siren calls of repl patching. But then, unreadable code is possible in any language.
I rather like the Interlisp-D approach where while you edit in a live environment it then syncs the modified versions back out to lisp source files on disk as its standard/natural workflow.
I suspect the way you're working with slimv is at least somewhat comparable in terms of making getting that part right the path of least resistance, which is itself a form of developer experience ergonomics.
Their complaint about Haskell is based on a common misconception. You can do everything in Haskell that you can in a non-pure language, including in-place mutation (e.g. IORef, Data.Vector.Generic.Mutable, etc.), strict functions, manual memory allocation, and so on.
> Their complaint about Haskell is based on a common misconception
Misconception is way too kind a word. And it's quite ironic (though not necessarily surprising) to see the article's namecalling of bad faith as it engages in paragraph after paragraph of bad faith and of being plain wrong.
I found the article quite interesting and funny (in a good way) and I have trouble understanding why anyone would think it's written in bad faith. He's not attacking you personally for being a Haskeller, he's just making light fun of the fact that Haskell requires monads to do IO and won't let you do it any other way, how is that bad faith?
My point is that making fun of something you don't like, however light the fun is, is not a good faith technique. Now, that doesn't mean bad faith discourse can't be hilarious; Conrad Barski made that same joke quite well in his comics.
It's just hard to make it the start of an honest discussion on the topic.
Nonsense. It's your choice, as a reader, to refuse to entertain it as an honest discussion, just because you don't like its style. And it's your choice to ascribe bad-faith intentions to the writer just because you don't like its style. (Note that the writer even defined the concept of bad faith as he was referring to it, and was not using it in the way you are.)
It's striking how preferences in programming languages seem to mirror preferences in communication style: You demand that the writer express himself in your preferred style, and if he does not, you accuse him of not writing in good faith. The writer, on the other hand, writes in a liberal style, full of metaphor and references to philosophies outside of computer science.
You only tolerate a narrow range of expression, while he welcomes wide and varied expression. You take offense rather than seeking to understand the intended meaning, while he takes into account others' interpretations and tries to meet in the middle: https://hyperthings.garden/posts/2021-08-30/freeing-your-goa...
There are some life lessons to be found in these exchanges. Postel's Law is not well-followed anymore.
The fact that monads aren't technically required to perform IO at all, but rather are an abstraction that meshes nicely with the Haskell way of doing things. If you don't want to use monads for IO, you're certainly going to meet resistance in terms of library support.
It would be nice to get an example of how this wonderful REPL does all the cool things mentioned.
In fact Haskell has a decent REPL, and you can (if you really want to) explicitly allow IO outside the IO monad, or skip the type checker, though I find them more of a moral compass than an impressive diktat. I don't know if you can crash straight into the debugger like in Common Lisp though.
You can run your program in the repl and `:set -fbreak-on-error`.
You can get stack traces with access to variables via :set trace, but because of optimizations and lazy evaluation they are similarly jumbled as async code debugging.
Haskell has some nice advantages like `:r` for reloading, similarly you can reload a dynamically linked binary and the old version is only unlinked after all references to it are gone. There is a significant difference to dynamic languages, though, because having different versions in memory leading to incompatible types would be a lot more common with a partial reloading workflow.
Don't play with this if you like hot-reload or hot-refresh features in react or jrebel or whatever. You will forever see those other approaches as lame in comparison... speaking as someone who was previously very happily ignorant, living my best life on the JRebel side of the fence.
These posts complaining about programming language shackles also seem to conceive of programming as a highly individualistic endeavor. I don't think that's a coincidence.
I think type systems and other restrictions are really good precisely because they help me better trust and collaborate with others.
> There is something vaguely totalitarian about these big idea languages. The Rust or Haskell compilers, for example, insist on policing whatever private understanding you might have about the meaning of your code. They are like the neighbor's kids in a totalitarian regime
This statement doesn’t even slightly ring true for me. These compilers are more like infallible personal assistants who keep track of all the crap that I don’t want to. The Haskell compiler in particular has never suggested I stop doing something that didn’t turn out to be a bad idea under more careful consideration.
The rust compiler is more restrictive (mostly due to lack of ability to express some types, like quantified or higher kinded types), but still is fundamentally working on my behalf.
> "True interactive development is also about programming in such a way that your program never crashes. Instead, whenever an unhandled condition is raised, the program starts an interactive debugger."
That's great until someone else tries to use your program.
For those who like to cross a REPL with strong static typing and don't want to go as far as Haskell, OCaml has a REPL and a tool (ocamlmktop) that builds a REPL variant of your software as a binary executable. Not quite as highly integrated as Common Lisp + SLIME, but still very useful.
A type error isn't some kind of punishment, it's a discussion with the compiler. Type errors are good -- they mean that there's a discrepancy with your idea of the program that you're writing, and the way that you've actually written it, and a good typechecker will help you track down that discrepancy.
So when GHC says "you're doing side effects outside of IO", it's not trying to harsh your vibe. It's explaining to you the inconsistency between your program and the "type" (description) that you've written for it. You might not want to have to explain yourself to the compiler, and that's fine -- go write in an untyped language instead. But it's hardly authoritarian to offer a different way of programming.
(Keep in mind that typechecking meets many of the same needs that debugging does. For someone who already tries to typecheck in their head to ensure that their programs are correct, having an explicit, external typechecker can be a relief. That might not be you, but it's at least some people; when I program in Lisp I'm always running into problems that a typechecker would have solved at compile time, and I find that frustrating. To each their own.)
I have been musing with the thought that so many IDE environments are mainly about integrating other tools for you to make your program. Common Lisp does seem to be about integrating your program into your environment in a way that isn't that common anymore. (I know there are other "image" based languages. They seem to have fallen by the roadside, though. Common Lisp, mostly, included.)
I’m interested in how people using different languages manage this style of “developing in the debugger” and then (eventually) checking in finished code.
Two systems I’m familiar with:
* In Flutter, you write code in the normal way, in an editor or IDE. If you want to change code while the program is running, you edit code in the normal way and hit a key to reload it. This works when stopped at a breakpoint. But the changes you can make using hot reloading seem somewhat limited; you couldn’t do a major refactoring.
* In an Observable notebook, you edit a cell and then any other cells that depend on it automatically run, like a spreadsheet. You don’t normally use a debugger, though the browser’s debugger does work. In some cases when you’re using advanced features, you might need to reload the page, but usually you don’t.
How do people work using Common Lisp? In other languages? In particular, do you write code in one place (using the repl) and copy-paste it to another place?
The workflow for slime/sly and swank/slynk described at the end of the article is standard if you are in the Emacs world.
Sometimes I work in an Org file, sometimes in a lisp file directly. In both cases I will C-x C-e (eval-last-sexp) or run blocks, or the whole file (depending on Org vs lisp file). It is copy and paste on steroids (same for slime-eval-defun, where you can be working deep inside some large function and update the whole thing without changing your view).
There are some examples of developing in the debugger in [0] around 55 minutes (linked). This video also has great examples of other crazy interactive development workflows and is worth watching in its entirety if you are interested in this kind of thing.
Hmm. The article doesn't really describe what it's like but I skimmed the SLIME manual. So you basically point at the code you want to run, and this isn't necessarily how you would run it in production. It looks like SLIME mode requires you to install an interpreter hook in the startup code for the process that you want to debug, so you can run commands that way?
This seems like it would make the most sense for event-driven code. Evaluating a script is another kind of event. Many systems have a way to execute commands.
I'd guess this style of development encourages people to structure their systems as essentially command interpreters, even if the commands normally come from a UI.
The interesting bit will be deciding how the data structures that persist between commands are implemented. For example in server-side web apps, the shared state is often a database, so state is external to the process being debugged. The architecture limits what you can do directly, but you can try out SQL queries.
I've never understood the purpose of significantly modifying a program in memory. Won't you just have to redo all your changes, this time to the source program, once you've settled on a design? [0] As long as you have fast, incremental compilation, doing it in memory first seems like a waste of time. It mostly seems useful for small, disposable changes, to explore while debugging or whatnot. (I suppose hacking on a process as it runs in production, or some other scenario where "halt, compile, restart" is impossible, is also a good reason, but personally I'd prefer saving process state across restarts, like XMonad does.)
[0] Of course, that doesn't apply if you're building the program entirely in memory, such as Smalltalk with its image-based programming environment.
This is a wonderful exploration further into the original spirit of that comment and very much reflects the experience that I have had as well.
The only other runtime I have found that has a similar absence of pain is Emacs (though Emacs has its own separate warts unrelated to friction at the repl). I think there is a deeper disconnect between having something that looks like a repl and a real repl which is that for decades Emacs didn't even have a command line style repl, because the repl was everywhere, hidden behind C-x C-e or M-:.
A point of confusion I think is that repl in the lisp world has come to imply much more than that somewhere there is some code that looks like (loop (print (eval (read)))). That level of sophistication
of the repl concept was archaic well before common lisp, but the term continued to be used and evolved to signify a runtime that could support interactive development.
For many of the other languages the issue isn't with the form of the interface (almost any language can have a prompt, printed representations that can round-trip, etc.); it is that their underlying runtime can't support the kind of redefinition that is needed to avoid the creeping insanity that can arise when code becomes subtly out of sync with data, e.g. in Python classes and instances.
Computers are faster now, so the workaround is "restart the runtime" and users hardly notice. Therefore no one bothered to implement what common lisp has. Given how long a cold boot took back in the 70s and 80s, avoiding the restart at all cost was a top engineering priority. Lisps evolved in a significantly different and harsher selective environment than many of the runtimes that came after, and as a result had to solve certain fundamental problems that are practically invisible for users today.
In the time since I wrote the referenced comment on reddit I have also discovered the amazing power of save-lisp-and-die. At the time it hadn't quite clicked for me. I'm guessing that the author has had a similar experience given the title of the series, so I'm looking forward to reading more in the future!
In the meantime I also learned a bit of docker, and what I find amusing/sad about it is that slad is basically docker without all the fuss. Sure, it can't bring a whole unix system with it, but it is a couple of orders of magnitude smaller and significantly less complex.
This is remarkably similar to the advocacy made by lots of C/C++ programmers that the language is more useful because it lets you make mistakes, which is all well and good until someone buffer overflows your server and leaks the personal data of a million people / your cryptocurrency keys / whatever.
The opinion of Rust and Haskell is precisely that making mistakes is bad, regardless of your "provocative" opinion that it might be good, and that the language should be a https://en.wikipedia.org/wiki/Poka-yoke against certain categories of mistakes that have been found to cost the industry billions of dollars in failure.
(What do the Clojure-for-web-services people do? Presumably that doesn't drop web requests to an interactive debugger, or does it? Or is that irrelevant because this is only concerned with Common Lisp?)
This kind of advocacy seems to be endemic in LISP and FORTH: a tool produces great results when used by one or two idiosyncratic "genius" developers working in near-isolation on problems of their choice. It tends not to work nearly so well beyond that.
This whole discussion reminds me of http://tomasp.net/blog/2015/failures/ . What I’ve noticed is that programmers are largely divided into camps based on the cost they’re willing to pay to avoid runtime errors: Haskell/Rust developers tend towards one side while Erlang/CL/Clojure tend to the other.
My experience is that if you make the edit->run->edit loop tight enough, you can recover a lot of the guarantees available through static checks via a couple of alternate techniques and, for some of us, the resulting workflow is a lot more pleasant way to spend the day than tweaking the code up front until the compiler accepts it.
> What do the Clojure-for-web-services people do? Presumably that doesn't drop web requests to an interactive debugger, or does it? Or is that irrelevant because this is only concerned with Common Lisp
In CL the interactive debugger is only one option of how to handle uncaught exceptions. You can set it to just crash or log or whatever you want for production use case.
In Clojure it defaults to the host runtime default handling, so like in Java it'll throw all the way to the thread and kill it. Unless you explicitly handle it somewhere.
In JavaScript browser, it'll just get logged in the browser console.
And I'm not sure what NodeJS does, maybe it crashes the process?
But I'd say Clojure is more like the OCaml of Lisp, it nudges you strongly towards pure functional programming, managed memory and thread safe behavior. But it isn't strict about it, which allows you to still get a fully interactive development workflow.
It's a bit like how in Python private methods are a convention, turns out defaults and conventions tend to be followed pretty well by programmers. But having it not be strict means it can accommodate certain weird use cases when needed.
I like to think of it like where Haskell and Scala have safety guarantees, Clojure instead has a safety protocol. In practice both seem to yield on average similarly correct programs.
Not precisely what you're asking, but I have, very occasionally, REPLed into remote clojure servers and rewritten running code live to fix time-critical bugs in production. But I'll admit I'm unsure whether this is an argument for or against Xtremely interactive development.
Darklang does offer something they call trace-based development where unhandled requests basically do initiate an interactive process to write handling code. I'm under the impression though that this is not intended for production time.
In general I'm loath to make any generalization about what kind of mistakes are good or bad and when. Luckily we have a language landscape which lets people make up their own minds.
Darklang's traces are not the same thing as a REPL. The user gets sent a normal 400 error, and the request trace is saved so that later, the developer can "replay" the request (on the live server, with full context).
While Dark does support the general idea of doing live code in production, it's not a Lisp dialect and is very much a functional language.
The thing about the C/C++ analogy is that there is already a safety mechanism in these dynamic languages that prevents errors from becoming catastrophic. The runtime will signal an unhandled error, and the program can choose to stop, to drop the request, to perform some failsafe action, to raise alarms, etc.
(Of course you can still have expensive failures caused by bad logic, but I think in security bugs alone the cost of C/C++-style nasal-demons behaviour has been bigger than other bug categories.)
The article closes with a discussion of "true interactive development". I was particularly interested in this and wondered how IEx (Elixir's REPL) compares to Common Lisp's. So, I started this thread on the Elixir Forum:
https://elixirforum.com/t/hell-is-other-repls-how-does-iex-c...
- it seems some core lisp/Slime features are not there: compile one function independently, get type warnings and errors, goto source definition working OOB, no function signature (?), find who calls a function or macro, who sets a variable, and then no interactive debugger, no stepper, no inspector… ?
- Supports tab completion, shortcut keys like ctrl-a/ctrl-e and probably more that I don't ever use.
- Top notch support for unicode and supports color (IO.inspect with [color: true])
- EZ access to docs (Just type h in front of what you want to know about)
- You can recompile your project without leaving the repl (keeping whatever state you have)
- You can access the history of what has been returned using the v function
- Remote shell support (type `h IEx` to learn more)
---
One thing I don't like: if you get mixed up and forget how many braces or parentheses you need to close, you get stuck. I usually have to quit iex to get out. There may be a better solution to this that I'm just not aware of.
Ruby's pry brings a lot of those conveniences, which I think are really hard to live without. There is a presentation[1] where Conrad Irwin shows what is possible.
Thank you. I kept asking myself why I was reading a debate in 2021 about a language that defaults to (GET OFF MY LAWN) all caps. Learning about Ruby pry made it all worth it.
This person doesn't understand what Bad Faith actually means. Many people, including GP, are now using this term as a simple pejorative just to describe things that they don't like or disagree with. Something similar has occurred with the term 'gaslighting'.
Ironically, the 'bad faith' argument in this post is an example of a bad faith argument.
> These "big idea languages" tend to assert a kind of "programming ideology" over reality, and tend to, in my opinion, distance you from the thing you are trying to do, often, they claim, "for your own good".
Except they don't pull these programming rules out of thin air, they're actually widely acknowledged as the best practices in the domains they're targeting. It's like complaining that SQL database engines don't let you hand-write custom query plans in your queries, 'for your own good'. Well yes–that's exactly the point! They've been finely tuned over decades of research to know how to do it better than humans.
> They are like the neighbor's kids in a totalitarian regime who will report you to the secret police when they think you're not quite as doctrinaire as you ought to be.
Wow! Comparing a technological tool to a fascist regime, because that's totally accurate and appropriate! /s
> Haskell compiler might be heard to say, "What are you doing? You are a Haskeller and should know better!" ... complains the Rust compiler. "Are you insane? You are a Rustacean, you don't do that kind of thing!"
Yes, of course, compilers are exactly shrill, screaming drill sergeants trying to break you down and indoctrinate you. By contrast, doesn't Common Lisp seem so mild-mannered and gentle? Of course you'd never want to use anything else!
Now I know it would be a little extreme to call this argument 'gaslighting', but I honestly can't think of a better word.
> Encouraging you to avoid something, however, is quite different from banning it outright.
Yeah, which is why Rust and Haskell actually don't ban it outright, and in fact why almost every language has 'escape hatches' that allow programmers to do whatever they want, on their own recognizance.
> When you are programming interactively, you are not so much writing code as sculpting and structuring live computer memory.
Great, but once I'm done with that, I need to actually lock in the shape of the sculpture so that it can be deployed with confidence.
> True interactive development is also about programming in such a way that your program never crashes. Instead, whenever an unhandled condition is raised, the program starts an interactive debugger.
Great if you're planning to keep a watchful eye on the program constantly. Not very helpful if the program is supposed to run unattended.
> Hell is Other REPLs.
Clever, but even other REPLs allow you to do the crucial bit–i.e. interactively explore an API. And after that, the best languages even allow you to lock in your findings and deploy with confidence!
What CL development I've done was pretty much "reload it and rerun it" in terms of a development cycle. Mind, these were not large programs. But there was enough global, shared state that needed to be reset that, most of the time, a simple tweak to a function wasn't enough to right whatever wrong was involved. And the reloads weren't arduous anyway.
Sure, for "little work", "local work", tweaking a routine, doing simple tests and such in the listener. It was fine. Very quick turn around.
But when fixing something that was larger in scope? Reload it, rerun it.
I also never "got" the restart and condition systems in CL. Part of this is simply my background, today mostly being in Java where production debugging is doing archaeological digs on deep stack traces.
I get restarts in interactive systems. But my systems were not interactive. They were server based systems, processing zillions of requests. I never saw the utility of a restart on a headless system. I could not imagine a connection stuck in a restart just waiting for me to fix it, debug it, correct it, continue it, or abort it. In contrast to just logging the torrid details of the failure and path to it and using the information in a post mortem.
Do folks get 3am phone calls to open up a server and dig through restarts? That never made any sense to me. On paper, it sounds great, I just never saw any realistic way it would ever apply in any of the work that I did.
Are there times it would have been nice to log in to a server, tweak a piece of code, and let it continue on? Changing the tires of a car on the road? Sure. Occasionally.
Mind, that could just be habitual. Since to me it was a novelty, and one unavailable to me, perhaps I simply don't miss it. Yea, it makes sense when hacking deep space probes. But a generic remote web service type of application in production? To me, not so much.
The idea of hot patching a server is amazing and frightening at the same time. How was the patch tested? Do you commit it to VC first, before cutting and pasting the DEFUN into the listener, etc.?
The same applies to Smalltalk. The idea of sending out an application that faults and drops the user into a restart and debugger: what can they do with that? Call me up and talk them through it? I'd much rather they send me the log file with a post mortem I could use. I'm sure ST has ways of capturing stack traces in log files, but, casually, nobody talks about it.
So, I'd love to hear stories about what I'm missing. What applications folks were doing that were leveraging these aspects of the CL experience. How they manifest on system with any transaction volume (you know, a few trx per second). How one uses restarts in a web shopping cart in production.
Make no mistake, in Java, with the applications servers, turn arounds can be pretty long. But, similarly, with code structure, units tests, etc. turn around can be very fast. Make a tweak to the code base, repeatedly run an isolated unit test until it works, then run the larger suite to see what else you broke. That can be quite fast. Not "interactive", but...fast. "Close enough".
Restarts allow composable bridging, across components, of the interactive and batch/"server" worlds, because they can also be invoked programmatically. Of course you wouldn't want the debugger popping up in your headless server app, but it could be an incredibly powerful option when you turn it on to debug a specific issue. I don't have any first-hand experience of this (never having done any server-side work in CL), but Paul Graham mentioned briefly, somewhere, that at ye olde Viaweb they were able to interactively debug and fix customer issues on a live system in real time. It's an appealing proposition, though I think he romanticizes it a bit.
I agree that function-level recompilation of definitions feels alarmingly undisciplined, borderline irresponsible, in a production context. On the other hand, at my $work, I can recall enough instances where being able to compile a little extra targeted debug logging into a running process would have turned completely baffling production issues into something easily explained, sparing a lot of time and hard thinking spent trying to infer, via code review, the root cause of issues that no one was able to reproduce internally, and providing much greater confidence in fixes.
It's worth mentioning that on the Lisp machines, the debugger was a first-class component of the user interface in a way that might feel very foreign to people accustomed to the Unix command line. It wasn't just a way of inspecting backtraces; it was essentially a form of interaction where the application could present a problem situation and offer choices for how to proceed.
> Do folks get 3am phone calls to open up a server and dig through restarts?
No, back then it would be a 3am pager beep, triggered by a Smalltalk resumable exception, possibly while someone was on-call (in a bar).
> … nice to log in to a server, tweak a piece of code, and let it continue on?
Critical not "nice".
Trade reconciliation done upstream on mainframes took many hours (not enough time to do over) and that upstream processing was sometimes broken when it was changed, and then those errors were caught and mitigated by downstream software — because there we could "tweak a piece of code, and let it continue on".
> I'm sure ST has ways of capturing stack traces in log files, but, casually, nobody talks about it.
With Smalltalk the user could send you the image in its current state, which would provide a lot more context than just a stack trace and a log. I would definitely prefer to pick up a running environment at the point of failure rather than a log when solving a bug. I cannot imagine how one would keep up with security on a setup like that, though.
Back in 1990, 3½-inch 1.44 MB micro floppy disks were snail-mailed to me — a Smalltalk/V 286 image file, sources file and change log; for a system I'd developed a year or two before.
The "image in its current state" had been saved with an open "Walkback Window" showing an exception.
I opened the debugger, identified and fixed the problem (iirc without needing to reshape any user data with become:) and resumed the exception; saved the image and change log to the micro floppy disks, and snail-mailed them back.
Apparently the users picked-up their work where they'd left-off, when they sent it to me.
Anything that is possible (allowed by the compiler/interpreter/runtime) will eventually make it into your codebase. For this reason alone I will always prefer languages with a strong principle and a big idea to languages that are multi-paradigm and agnostic. (Not saying that’s my only preference ofc)
> The language seems designed to encourage you to be you and to shape the language to suit your individual style, problem domains, or interests.
This is cool for the solo programmer.
Teams and organizations need some amount of standardization if they want their programmers to be able to maintain each others code. At that point, none of the coolness of Lisp remains, and you're better off using a less dynamic language that imposes more of a standard style.
It's not necessarily so. It works for groups also. Example: at a time when 'object orientation' was still kind of new (early 80s), lots of Lisp-using groups were developing OO extensions for Lisp: Flavors, LOOPS, CommonObjects, ObjectLisp, ... These language extensions were used by groups at TI, Xerox, HP, Symbolics and others. For example, Symbolics wrote much of their operating system and their applications using Flavors (-> several groups of people used it as a standard language extension -> incl. their customers).
With the Common Lisp standardization, a subgroup was given the task of deciding on a standard object-oriented extension or developing their own. After several years of work, a core group with a large group of supporters and implementors defined a standard language extension to integrate object-oriented programming into Common Lisp: the Common Lisp Object System, which is now widely used in Lisp.
Not sure why I’m getting downvoted. When I come back to a project 6 months after I last touched it, I might as well have never heard of it for all I remember.
> (...) a single "revolutionary" concept or idea. Some examples include safety in Rust (...)
I think these languages are focusing too much on a few ideas while ignoring the rest. For example if I'm writing a Rust program which launches missiles, then I can still accidentally launch these missiles in safe code. A GC'd language would still provide good memory safety guarantees while allowing me to focus more on the problem at hand (not distracting me with type errors), and thus it could be safer overall.
I'll admit that I like Rust, but this seems like an odd take to me. "Not distracting me with type errors" seems directly at odds with "safer overall", especially for a missile launching system. That sounds like a recipe for "Whoops, there goes New York, but at least the code looked nice".
I would be interested in seeing an example of what sort of type errors you mean. IME, the Rust compiler does a great job of catching actual mistakes in the type system, such as with Send/Sync. It also pretty easily lets you tell the compiler "no, I know what I'm doing" and say "unsafe impl Send for Foo {}" to do it anyways.
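A minimal sketch of that escape hatch (`Foo` here is hypothetical): the raw pointer field makes the type non-Send, so moving it into a thread is a compile error until you explicitly sign off on it:

```rust
use std::thread;

struct Foo {
    ptr: *const u8,
}

// Without this line, the `thread::spawn` below fails to compile:
// "`*const u8` cannot be sent between threads safely".
unsafe impl Send for Foo {} // "no, I know what I'm doing"

fn main() {
    let foo = Foo { ptr: std::ptr::null() };
    thread::spawn(move || {
        // We only copy the pointer value here; dereferencing it would
        // need its own `unsafe` block and its own justification.
        let _ = foo.ptr;
    })
    .join()
    .unwrap();
}
```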
IDK, I think this quote from your link sums up my attitude towards that:
> Linked lists are as niche and vague of a data structure as a trie. Few would balk at me claiming a trie is a niche structure that your average programmer could happily never learn in an entire productive career -- and yet linked lists have some bizarre celebrity status. We teach every undergrad how to write a linked list. It's the only niche collection I couldn't kill from std::collections. It's the list in C++!
> We should all as a community say no to linked lists as a "standard" data structure. It's a fine data structure with several great use cases, but those use cases are exceptional, not common.
I have never in my professional career used or to my knowledge relied upon a singly-linked list, to say nothing of a doubly-linked list. That feels like picking something to be contrarian, not because it exemplifies a good case of where Rust is too strict. Just use a Vec? It's way more performant anyways.