
> The Rust or Haskell compilers, for example, insist on policing whatever private understanding you might have about the meaning of your code.

The thing about "private understandings" is that they're just another way of saying "we expect ceaseless, flawless vigilance" (as one writer put it), not to mention "flawless communication and training."

Languages which impose weird restrictions tend to do so because it allows them to offer (nearly) ironclad guarantees about something else.

There are certain programming idioms that work brilliantly in Haskell (pervasive laziness and equational reasoning, for example). Those same idioms would be utterly miserable in the presence of unrestricted mutation.

Or to take the author's other example, Rust forbids shared mutable state, and it keeps careful track of ownership. But I can freely use native CPU threads without worrying about anything worse than a deadlock, and I can bang directly on raw bytes without worrying about anything worse than a controlled runtime failure. And this remains true even if team communication occasionally fails or if someone makes a mistake.
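
A toy sketch of what the compiler buys you here (nothing fancy, just std threads and the borrow checker; the error message in the comment is paraphrased):

  use std::thread;

  fn main() {
      let mut data = vec![1, 2, 3];

      // Rejected at compile time: two threads would mutate `data` concurrently.
      // thread::scope(|s| {
      //     s.spawn(|| data.push(4));
      //     s.spawn(|| data.push(5)); // error: cannot borrow `data` as mutable more than once
      // });

      // Handing each thread its own disjoint slice is fine, and the compiler can prove it.
      let (a, b) = data.split_at_mut(1);
      thread::scope(|s| {
          s.spawn(|| a[0] += 10);
          s.spawn(|| b[0] += 10);
      });
      assert_eq!(data, vec![11, 12, 3]);
  }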

Sometimes I want to rule out entire classes of potentially dangerous mistakes, and not just have a "private understanding" that nobody will ever make certain mistakes.

As always, it's a matter of using the right tool for the job. If you need to write a high-performance, heavily-threaded network server that parses malicious binary data, Rust is a great tool because of those restrictions. If you need to do highly exploratory programming and invent new idioms to talk about your problem domain, Common Lisp is awesome. And if you need to build libraries where everything has a rigid mathematical structure, Haskell is a great tool.

In my experience, Common Lisp is a deeply opinionated language, and its most dramatic opinion is that "your code and your data should have identical representations and structures." And for the right problem, that restriction is extremely powerful.




It's all about thinking frameworks. Ensuring that effects are declared, that data has a static lifetime, or that your program is rewritable data are tools you can use to focus your mind on one part of the problem and less on another. Just like having unrestricted access to everything your computer can do, or data + code encapsulation with multiple access levels.

Not recognizing this (like in the article) is a clear sign of a single-paradigm developer who never really understood anything else but is too arrogant (insecure, maybe?) to admit he doesn't understand something. So, the problem must be with everybody else.

Anyway, a funny red flag is the reaction to Haskell IO. People who never really learned Haskell tend to complain about IO, with a deeper complaint of "why do I have to keep using monads?", while after people learn it they tend to complain about the lack of monad composition, with a deeper complaint of "why can't I use monads for everything?"


Can you say more about the latter, i.e. the lack of monad composition ("why can't I use monads for everything")?


After you learn how to handle state, input data, output data, error handling, parsing context, resource mocking, guaranteed resource release, and a lot of other things just by declaring the correct monad when you call your code, you tend to want to solve several of those at the same time.

But for solving several of them, you need to compose your monads, and there is no good general way to do that. There are context dependent ways to compose some of them, but you are always thinking about their compatibility and they often require that you specialize your code.


> But for solving several of them, you need to compose your monads, and there is no good general way to do that.

It's (provably, I think) impossible for fully general monads, but for traversable-but-otherwise-general monads you can do[0]:

  newtype (%) (f1 :: k1 -> Type) (f2 :: k2 -> k1) (a :: k2) = Comp { unComp :: f1 (f2 a) }
  instance (Monad f1, Monad f2, Traversable f2) => Monad (f1 % f2) where
    join (Comp a)  = Comp $ a >>= (fmap join . traverse unComp)
    (Comp a) >>= k = Comp $ a >>= (fmap join . traverse (unComp . k))
(For that matter, you only need f2 to be traversable; f1 can be any monad at all.)

0: copied from my own code, with s/map/fmap/ and s/mapA/traverse/ already applied above, so it might still need other adjustments (in particular, base's Monad class has no join method, so only the (>>=) definition slots in directly).


That instance is type-correct but unlawful: https://stackoverflow.com/q/42284879/7509065


CL is one of the most prominent examples of multi-paradigm languages, though.


> If you need to write a high-performance, heavily-threaded network server that parses malicious binary data, Rust is a great tool because of those restrictions.

If dealing with Untrusted File Formats perhaps you should use a tool purpose-built for Wrangling them Safely, WUFFS.

You won't find a Hello, World example for WUFFS because Hello World prints out text to your console, which is exactly the sort of nefarious stuff bad guys might try to do and so WUFFS doesn't even provide any mechanism you could use to do that even if you wanted to. But it does Wrangle Untrusted File Formats Safely, shrinking your remaining problem space.

For example WUFFS would be appropriate for taking files your users claim are JPEG "photographs" they uploaded for their "profile picture" and turning each one into either a 64x64 pixel RGB array or an error without any risk that they seize control of your profile picture program and do goodness knows what else instead.

Although Rust's memory safety means you can achieve confidence a "photograph" doesn't corrupt memory, it doesn't require the rigour WUFFS brings to file parsing, so a Rust program could end up confused about unforeseen state while parsing the file. For example in Rust you can write a function that might mistakenly overflow a 32-bit integer and in production it will just silently wrap. In WUFFS that function won't compile until you either decide explicitly what should happen (e.g. wrapping, saturation) for each overflow, or you trap all the cases where it could overflow as an error. This is very annoying of course, but we're parsing Untrusted File Formats, and if we leave anything to chance it will be exploited.


I thought this post was one long gag until I found WUFFS on GitHub:

https://github.com/google/wuffs


VERY good catch, was confused too.


You're completely right, of course.

I'm pretty confident that I can parse untrusted binary data in Rust with nothing worse than a denial of service. (And I have over a billion 'cargo fuzz' iterations to prove it.)

But WUFFS is even more opinionated and strict than Rust, and so it can offer even stronger guarantees. Admittedly, it looks pretty obscure and the documentation is a little light, but it's a great idea.

I am really just done with CVEs, or at least the sort of CVEs that appear in C programs. We know how to prevent so many classes of security holes completely, allowing us to focus on the harder challenges.


> For example in Rust you can write a function that might mistakenly overflow a 32-bit integer and in production it will just silently wrap. In WUFFS that function won't compile until you either decide explicitly what should happen (e.g. wrapping, saturation) for each overflow, or you trap all the cases where it could overflow as an error.

You can do this in Rust, up to a point. There's a lint to ban dangerous arithmetic: https://rust-lang.github.io/rust-clippy/master/index.html#in... . You can then use the {saturating,wrapping,checked,overflowing}_{add,div,mul,abs,...}() methods to decide exactly what should happen on overflow.
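
A minimal sketch of that explicit style (the helper function is hypothetical; checked_add and friends are real std methods):

  // Hypothetical length check for an untrusted header: every overflow path is
  // spelled out instead of relying on debug-mode panics / release-mode wrapping.
  fn total_len(header_len: u32, payload_len: u32) -> Result<u32, &'static str> {
      header_len
          .checked_add(payload_len) // None on overflow instead of silently wrapping
          .ok_or("length overflow in untrusted input")
  }

  fn main() {
      assert_eq!(total_len(10, 20), Ok(30));
      assert!(total_len(u32::MAX, 1).is_err()); // `+` would have wrapped in release builds
  }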

But WUFFS seems a lot nicer. Judging by the README it first tries to determine whether overflow is actually possible, while Clippy will happily forbid you from running "1 + 1".


> As always, it's a matter of using the right tool for the job.

Nice in theory. In practice a manager will ask you to build A, then later they will ask you to add B and C, and these components should interact seamlessly of course. If you chose your language based on A, then you might get stuck on B and C.


Some of my favorite organizations have been the ones where someone senior picked a very tiny number of "official" languages early on, and established clear rules about when to use which. Some examples:

- "Our web apps get written in Typescript."

- "Our data science gets written in Python."

- "Our inner loops get written as Rust CLI tools that contain no business logic." (Or whatever. It depends on the problem domain.)

This works because TypeScript, Python and Rust are all very good at certain tasks, but they're each also fine general-purpose languages with lots of third party libraries. If you do somehow wind up writing a webserver in Python, you'll be fine. Of course, something like Haskell is a deeper commitment with more tradeoffs, and it's the wrong answer for most organizations. (But if I were mathematically modeling the pricing of complex derivative contracts, Haskell would make the list.)

But this is why good senior people with taste are so useful, and why competent technical management is worth its weight in gold. Sometimes you want a toolbox with 2 or 3 well-chosen, versatile tools. And sometimes you get organizations that try to pound in nails with a screwdriver.


It sort of makes sense, if things can still change. Have you never been at a company where the above list was:

Our web apps get written in Perl (later added: with jQuery for FE interactivity)

Our data science gets written in Oracle stored procedures

Our core business logic is written in Oracle stored procedures!

Fat clients are written in Smalltalk

These were arguably even sort of well chosen at the time. (I won't say which company this was as that would identify me and I changed some details but this is a real life example)


In such an environment, there is usually next generation after next generation of these software types, with upgraded (or at least updated) basic technological choices. Some of these choices are recent and possibly bleeding edge, some are old and possibly only good at that time, some prove misguided immediately or in hindsight.

For example I've seen:

  - COM and ASP.Net to Java and JSF (general web apps)
  - Forté 4GL (look it up, it was very interesting) to J2EE (general enterprise software)
  - MIDP Java to React Native, following mobile phone evolution (mobile app front ends of general enterprise software)
  - HTML frame messes to Java applets to JavaScript interactivity (high profile public web site)
  - ColdFusion, or something of the sort, to Sharepoint (another high profile public web site)
  - SharePoint to a BPEL workflow engine (important workflow-oriented application)


People keep dismissing COM, unaware that all major Windows APIs since Vista are COM based, even WinRT.

Not using COM for native programming on Windows is being stuck with Windows XP.


In my case there were true believers using their own COM components to complicate the architecture of web apps and web services, without the slightest interest in Windows APIs beyond "we are stuck with a fun server OS".


I was somewhat involved in this process at an org I worked at. The thing is, some languages really are the best tool for the job, and trying to build something in the wrong language can really hurt software quality or reliability. That said, I agree you should still try to standardise to a reasonable degree.


You mean, your manager calls you for an embedded software project, and next thing you know you have to turn your code into a user-facing web application?


Stranger things have happened. At a previous job there was a 20+ year old codebase that targeted embedded Windows CE, which we were asked to use as the backend of a mobile app. The solution was a bit Heath Robinson, but you could remote desktop onto the server and watch the windows flicker as they processed web requests as if they were input sequences.

Was this mad? Yes. Was it quicker to market than a rewrite? Yes.


Translation note: "Heath Robinson" is pronounced "Rube Goldberg" in en/us.


Also, if things do move this far away from the original vision -- and it can happen especially in a start-up -- that's a rare case where "total rewrite" is a valid software engineering strategy that can deliver benefits outweighing its cost.

I've done a few total rewrites now, and the only one I am sure was a good idea was for this reason (DPX, the Java re-implementation of Garlik's core software before it was purchased by Experian Consumer Services many years ago).


I have worked for someone who believes you should rewrite every five years or so, just so the current team knows how it works and also so unneeded stuff can fall away. I think it presupposes enough modularity that you can do it without everything in the org having to change.


Business logic features are rarely constrained by the choice of language. All that really changes are probability of implementation error and ease of implementation. I’ve rarely had the experience that a particular business capability is easy in one language and hard in another language, but vice versa for a different feature.


Are you mentally defining "business logic" as the 20 or so lines of code in a program that are completely independent of all input, output, OS libraries, and frameworks?

Pretty much every app, driver, and framework I've worked on across the web, desktop, and mobile has been mostly glue holding different components together. Choosing a different language would require a nearly-complete rewrite in every case except for a tiny section of independent interesting logic (and of course, if you arbitrarily picked some meme language for your "business logic" you'd have to write more glue code to attach it to the rest of your software..)


It sounds like the kind of code we're writing is very different. I'm definitely not just gluing things together, and 90% of the stuff I am gluing uses some kind of language-agnostic interface like JSON over HTTP.


Most of my code is probably glue code. My largest consideration is making things scale, which makes the glue code larger and more complex.

Anywhere you're e.g. dealing with an eventually consistent database... that is not business logic, that is all glue code that exploded due to scaling concerns.


Rust does not "forbid shared mutable state". Instead it offers the usual shared references along with a new kind of unique reference, but restricts shared references so that the type being pointed to must opt into aliased mutation. Making sharing and aliased mutation explicit in the type system allows both the programmer and the compiler to reason about which variables can change behind your back when calling functions or mutating through other pointers, and which ones are referentially transparent. (Though this property does come with sacrifices to programmer convenience, namely the restrictions on Cell, the runtime overhead and panic-proneness of RefCell, and the unsafe blocks associated with UnsafeCell. Hopefully GhostCell can do better.)
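
A tiny illustration of that opt-in, using nothing beyond std's Cell:

  use std::cell::Cell;

  fn main() {
      // A unique (&mut) reference always permits mutation.
      let mut x = 1;
      let unique = &mut x;
      *unique += 1;
      assert_eq!(x, 2);

      // A shared (&) reference does not...
      let y = 1;
      let shared = &y;
      // *shared += 1; // compile error: cannot assign through a `&` reference
      assert_eq!(*shared, 1);

      // ...unless the pointee opts into aliased mutation, e.g. with Cell.
      let z = Cell::new(1);
      let (alias1, alias2) = (&z, &z);
      alias1.set(alias1.get() + 1); // fine: Cell makes the aliased mutation explicit
      assert_eq!(alias2.get(), 2);
  }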


Also, the article explicitly says that the rust compiler is usually right, with - IMO - the implication that haskell is likely usually right for its purposes too.

But people have to get mad about the one throwaway comment at the top without even reading the next paragraph, let alone the rest of the article :(


Probably good feedback for the author. People don't have time to read every article they come across fully, so they're evaluating quality as they go. Throwaway comments like this will have the effect of losing the audience, so it's best to leave them out.


I don't have anything in particular to contribute, but it struck me as interesting that the term private understanding is used here. It reminds me of the subtitle of one of the more well-known books on formal methods, The B-Book: Assigning Programs to Meanings. [0]

Of course, when using a formal methodology like B, ceaseless, flawless vigilance is mandatory.

[0] https://www.cambridge.org/gb/academic/subjects/computer-scie...


> Common Lisp is a deeply opinionated language

How so? You can do whatever paradigm you want in CL. There is no policing involved, not in the sense of Haskell or Rust at least. CL will let you modify globals or share data willy-nilly just fine.

Mixing functions and values in lists is not really a restriction either, it opens up for all kinds of shenanigans.


The deepest and most radical opinions of Lisp are:

- You should be able to implement Lisp-in-Lisp within a few pages of code. This is a profound and remarkable design constraint. (See http://languagelog.ldc.upenn.edu/myl/llog/jmc.pdf (PDF) for an introduction.)

- Syntax would only hide the fact that code is a simple data structure that can be manipulated like any other data structure.

Just like Rust's strong opinions buy you stress-free threading and bit-banging, Lisp's opinions ensure that it's an almost ideal environment for inventing domain-specific language extensions.

But Rust is poorly suited to object graphs where everything mutates everything else, and Common Lisp is poorly suited to proving that a large program obeys certain strict rules.

I love opinionated tools, but I try to choose ones with the "right" opinions for the task at hand.


> Common Lisp is poorly suited to proving that a large program obeys certain strict rules

As I'm sure you know, a Lisper would probably design a domain specific language that was well suited for such proofs. One of the projects I'd like to get around to one day is a Lisp hosted general purpose DSL[1] based on predicate transformer semantics[2] that combines writing specification and program text into a single interactive experience. Think something along the lines of smartparens[3] strict mode, where the editor doesn't allow you to make an edit that will violate the count("(") == count(")") invariant and you modify the tree structure with operations like slurp and barf, but a bit more general.

[1] Haha, I know.

[2] https://en.wikipedia.org/wiki/Predicate_transformer_semantic...

[3] https://ebzzry.com/en/emacs-pairs/


Write a DSL … and now you have two problems.


The thing is, the sea-of-objects-where-everything-mutates-everything-else programming model is fundamentally bad. It's too hard to reason about. Programs and data need structure. There certainly are structures that do not easily fit into Rust's constraints (e.g. when you're modelling an arbitrary graph), but when you choose a sea-of-objects design when not absolutely required by the problem domain (and it almost never is), you choose poorly.

Even if you do need to model an arbitrary graph (for example), though a traditional programming language would make it a lot easier than in Rust, I still wouldn't necessarily let that dictate the choice of programming language. Most likely your program will end up doing a lot more than just banging on the graph and those parts may benefit from a more disciplined programming language.


You can structure mutation just fine by organizing those objects into hierarchies of abstraction, such that a sea of objects on one level becomes a single black box on the next. This keeps the "seas" small enough to reason about.


The object graph problem really hurts Rust imo. In C# I can just mutate and forget about ownership. In Haskell I can solve this using an immutable graph and a state monad. In Rust it gets pretty awkward.


Yea, but in those languages you have a garbage collector working in the background to clean up after you. You don’t always want to pay that cost. From time to time I work on an open source application that spends 60–70% of its time in garbage collection, at least on jobs at the extreme end of the size we can handle. I’m starting to think about ridiculous hacks like combining millions of moderately-sized objects into a giant arena. With no pointers into the arena (just indices), I can eliminate a huge chunk of the time wasted scanning the heap over and over again. But at that point, why am I writing this in Go? My Rust prototype is very incomplete, but for the parts that are implemented it uses way less memory and completes a lot more quickly. And I haven’t done _any_ performance optimization or ridiculous hacks there yet.
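
Roughly, the indices-instead-of-pointers idea looks like this in Rust (a toy sketch, not the actual codebase):

  // Toy index-based arena: nodes refer to each other by index, so there are no
  // pointers for a collector to chase (and no ownership headaches in Rust either).
  #[derive(Clone, Copy)]
  struct NodeId(u32);

  struct Node {
      value: u64,
      next: Option<NodeId>, // a "pointer" into the arena, stored as an index
  }

  #[derive(Default)]
  struct Arena {
      nodes: Vec<Node>,
  }

  impl Arena {
      fn push(&mut self, value: u64, next: Option<NodeId>) -> NodeId {
          let id = NodeId(self.nodes.len() as u32);
          self.nodes.push(Node { value, next });
          id
      }

      fn get(&self, id: NodeId) -> &Node {
          &self.nodes[id.0 as usize]
      }
  }

  fn main() {
      let mut arena = Arena::default();
      let first = arena.push(1, None);
      let second = arena.push(2, Some(first));
      let link = arena.get(second).next.unwrap();
      assert_eq!(arena.get(link).value, 1);
  }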


And in Rust you can always drop down to reference counted heap allocated stuff as well.
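
E.g. a value with more than one owner, plus runtime-checked mutation on top of it (a minimal sketch):

  use std::cell::RefCell;
  use std::rc::Rc;

  fn main() {
      // Two owners of the same heap value; RefCell moves the aliasing checks to runtime.
      let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
      let other = Rc::clone(&shared);

      other.borrow_mut().push(4);
      assert_eq!(shared.borrow().len(), 4);
      assert_eq!(Rc::strong_count(&shared), 2); // freed once the last Rc is dropped
  }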


Reference counting has completely different properties than other forms of automatic garbage collection. In particular, reference counting still requires some amount of ownership tracking, whereas this concept just doesn't exist (for memory resources) with tracing garbage collection.

This is particularly relevant for certain algorithms, such as atomic initialization, that get significantly more complex with in-line ownership tracking.


On the other hand, Rc turned out to be just what I needed for a particularly complicated data structure that we use.

http://db48x.net/reposurgeon/rust-port-docs/reposurgeon/path...

Also, because of Rust’s wonderful type system it will be fairly straightforward to replace the string keys in this code with interned strings instead. (The number of strings we use for paths in a large repository is pretty astounding.) That might now be almost possible in Go, if I learn the new generics.
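
For reference, the interning idea is just a table that maps each distinct path to a small key (a toy sketch, not the actual reposurgeon code):

  use std::collections::HashMap;

  // Toy string interner: each distinct path is stored once and handed out as a
  // small integer key, so comparisons and hashing become cheap.
  #[derive(Default)]
  struct Interner {
      ids: HashMap<String, u32>,
      strings: Vec<String>,
  }

  impl Interner {
      fn intern(&mut self, s: &str) -> u32 {
          if let Some(&id) = self.ids.get(s) {
              return id;
          }
          let id = self.strings.len() as u32;
          self.strings.push(s.to_string());
          self.ids.insert(s.to_string(), id);
          id
      }

      fn resolve(&self, id: u32) -> &str {
          &self.strings[id as usize]
      }
  }

  fn main() {
      let mut paths = Interner::default();
      let a = paths.intern("src/module/widget.c");
      let b = paths.intern("src/module/widget.c"); // same path, same key, stored once
      assert_eq!(a, b);
      assert_eq!(paths.resolve(a), "src/module/widget.c");
  }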


That’s true; the prototype uses Rc/Arc for things where I don’t yet know for sure how the ownership should really work out, and doubtless a fair amount of that will stay.


In those languages you also have the features to do C++-like coding if feeling so inclined.


Rust has been my favourite language since 2013. I constantly miss its ownership model when working in other languages (which most commonly means JavaScript for me).

Certainly not everything is a good fit for Rust’s ownership model, but honestly most code benefits from it at least a bit, and steadily more and more is being figured out about how to mesh other models into it without too much pain. It’s very liberating, protecting from various mistakes that are too easy in other languages (and that’s specifically where I miss it).

Ownership modelling can require more effort initially if the problem doesn’t match Rust’s capabilities very well (or put another way, if the problem doesn’t match how hardware works without garbage collection), but subsequently it’s liberating to not need to worry about various problems that are ubiquitous in other languages.

I reckon it similar to the debate of static versus dynamic typing, though more subtle. For toys, dynamic typing is adequate and static typing a burden, but as you scale a system up, dynamic typing makes life harder and requires more discipline to avoid ossification or drowning from technical debt. It’s taken time, but there has been a significant resurgence in static typing as people slowly come to realise its value through bitter experience in dynamic typing. Like static typing, a strict ownership model requires more effort initially, but makes life easier down the path, and I expect that at least the basic concepts of it will be picked up in more languages new and old over time (this has started already, actually, with Swift and D springing to mind), though it’s probably a harder concept to beneficially retrofit to a language than static typing.

You may want to mutate and forget about ownership, but Rust doesn’t let you do this, and it’s for your own good ;-). In time you get more of a feeling of why it is the way it is and learn to appreciate it.


There is a fairly high bedrock of abstraction, mostly because of the compiler being available at runtime, the blurring of the distinction between compile time and run time, and image-based development. Common Lisp programs can be as fast as Java, but the executables are huge because they have to bundle the entire image inside. And tree-shaking is less effective because the language is so dynamic it's hard to guarantee some code won't ever be called.

And if the abstraction bedrock is high, then problem domains below that bedrock can't be approached with the language.


Contrary to popular belief, image support is common, but actually not required by the Common Lisp standard.

Various Common Lisp implementations like ABCL (Common Lisp on the JVM), ECL, CLASP, mocl, ... don't support saving and starting images.

> And tree-shaking is less effective because the language is so dynamic it's hard to guarantee some code won't ever be called.

That depends on the delivery system. For example in LispWorks I can manually remove a lot of functionality:

http://www.lispworks.com/documentation/lw71/DV/html/delivery...


Yes, LispWorks has a powerful tree shaker and I think SBCL now has one as well. Arguably images could get even smaller by distributing the runtime as a shared library. I admit this is an implementation detail.


Years ago it was already a problem for some companies or organisations wanting to deploy Common Lisp software on tiny (embedded) machines or without GC. IS Robotics (later called iRobot) developed L, a small Common Lisp, for robots with small embedded computers - I think it has also been used on the Roomba devices.


Is this a stripped down Common Lisp, or something like Naughty Dog's GOAL[0], basically an embedded assembly DSL?

[0]: https://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp


It's Common Lisp. See "L - A Common Lisp for Embedded Systems" https://www.researchgate.net/publication/2949173_L_--_A_Comm...

Another one was CLICC : https://github.com/hoelzl/Clicc which has a language definition for CL0 : https://github.com/hoelzl/Clicc/tree/master/doc

There are or were a bunch of such small delivery oriented implementations: Oracle bought one years ago, Gensym has one, mocl is derived from CLICC, ...


Hello, former minor L contributor here! (I last worked on it in the late 90s, and I'm only going to talk about stuff that iRobot has mentioned publicly.)

L was actually a subset of Common Lisp. It would run pretty nicely on a machine with 1 MB total RAM and a 16 MHz CPU. All the basics were there: lambdas and macros and symbols and modules, and they all worked how they did in Common Lisp. (Or they were close enough that you could write and test in Common Lisp and then cross-compile.) There was a REPL that could run most code, and an ahead-of-time compiler.

It was a fantastically nice environment for embedded systems too small to run something like Alpine Linux. You could compile a program, load it onto actual hardware, and then tweak things interactively using the REPL. And you could invent all kinds of macro DSLs.

Of course, all this becomes moot once your system is large enough to support Alpine Linux and an SSH server. If you can run Linux, then you can use Lisp or Python or PLT Scheme or Rust or anything else you want, and you have half a million open source libraries available.

Still, L is proof that Lisp is a great embedded language for anything with a MB of RAM. You can have a pleasant development environment and tons of flexibility, with an excellent REPL.


"CLiCC is a Common Lisp to C Compiler. [..] CLiCC supports a subset of Common Lisp + CLOS." Isn't it then made obsolete by ECL (a decade or so later)?


ECL is based on earlier compilers, which go back to KCL from 1984.

The idea of CLICC was to compile programs to small static C programs. Really small and with little or no dynamic features of Lisp (code loading, runtime compilation, eval, redefinition, ...). It did not have all the bells and whistles of ECL (virtual machine, runtime code loading, debugging, full dynamic Common Lisp, ...).


The thing is, if you start with Common Lisp, it's pretty easy to write a DSL that adds the constraints and provides the guarantees that you need. If you start with Rust or Haskell, it is impossible to remove the constraints those languages impose short of invoking Greenspun's Tenth Rule and re-implementing big parts of Common Lisp.


If you start with Common Lisp it is not "pretty easy" to write a DSL that provides the constraints of Rust. You'll need to reimplement type checking and borrow checking at least, and as others pointed out, it's a ton of work to match the ergonomics of the Rust compiler. It's even more work to implement IDE support comparable to say rust-analyzer (which takes advantage of Rust type inference). (And of course this all assumes you're happy to jettison Rust performance features like absence of GC etc.)

But say you do all that work. Congratulations, you now have an ecosystem of one person. To do anything in this language you need to reimplement a standard library plus third-party libraries, or at least write Rust-type-safe glue code to existing Common Lisp libraries. If someone wants to join your project, they'll need to learn your DSL first.

When you write a bespoke DSL for a problem already solved by a programming language with a sizable community you're giving up a lot of network effects. That is something the OP completely failed to account for.


> If you start with Common Lisp it is not "pretty easy" to write a DSL that provides the constraints of Rust.

It's a lot easier than writing Rust.

> you now have an ecosystem of one person

Do you think it would have even been possible for one person to write Rust? Imagine if all the effort that went into writing Rust -- and Haskell and Swift and C-sharp and Python and Java and Javascript all the other myriad balkanized languages that have come along since CL was standardized -- had gone instead into improving CL. Do you really think we would be behind where we are today?

Inventing a new and different syntax for every new language you design is a choice, not a technical necessity. The fact that languages have communities is survivorship bias. It's a bug, not a feature. In Common Lisp, it is possible for a single person to write something truly useful. In other languages, it is not. This is not to say that Lisp's lone-wolf culture is desirable; it's not. But that is NOT a reflection of any technical problem with Lisp. It's a political and cultural problem. And the first step to solving that problem is getting more people to recognize it for what it is.


> Do you think it would have even been possible for one person to write Rust?

No, but I don't think it would be possible for someone to implement an S-expression Rust dialect on top of CL either.

> Do you really think we would be behind where we are today?

Hard to say. The counterfactual world where everyone decided to build on top of CL is so weird it's hard to say anything about it. Also, "improving CL" is an unclear term. For example if you want to replicate all the benefits of Rust by "improving CL" then that means, among other things, you need an implementation of "CL" that can compile your Rust-equivalent subset/superset of CL to bare metal with no runtime (including no GC). That implementation will take advantage of Rust ownership and other properties thus will not compile regular Lisp code. If you have a separate implementation for a special dialect of CL with distinct static and dynamic semantics from normal Lisp, it is only very loosely still "CL".

> In Common Lisp, it is possible for a single person to write something truly useful. In other languages, it is not.

This is an absurd overgeneralization that makes you sound like a nut.


> This is an absurd overgeneralization

You're right. Allow me to rephrase: in CL it is easy to write a compiler for a new language because you don't have to write a parser or a back-end. It is so easy that it is a plausible task for one person to write such a compiler and make it truly useful, and indeed there are several extant examples. In other languages it is much harder.


> The thing is, if you start with Common Lisp, it's pretty easy to write a DSL that adds the constraints and provides the guarantees that you need. If you start with Rust or Haskell, it is impossible to remove the constraints those languages impose short of invoking Greenspun's Tenth Rule and re-implementing big parts of Common Lisp.

This surely cuts both ways: if you write a DSL that adds the constraints and provides the guarantees that you need, you've re-implemented big parts of Rust or Haskell.

Or maybe you only need a tiny fraction of the constraints and guarantees that they provide. But it's likely that you'll need more over time, and then you wind up with not just a re-implementation but a gradually accreted, rather than coherently implemented, one, and that's unlikely actually to provide the guarantees it's meant to provide.


> you've re-implemented big parts of Rust or Haskell

Maybe. Or maybe all I had to do to turn CL into Haskell is implement the Hindley-Milner algorithm. I didn't have to write a parser or a compiler because I can just re-use those from CL.


I used to use CL quite a bit, but have since abandoned it for Haskell, so I'm a bit biased.

There's a number of issues with that:

- I'd be missing all the optimizations that can be performed due to purity.

- There's more to Haskell's type system than just vanilla Hindley-Milner, and the implementation of it isn't particularly trivial. https://github.com/stylewarning/coalton is the closest thing and it's still missing a large amount of the type system.

- Doing the implementation would be a significant amount of work to get it to integrate well with the language, and it would be a layer tightly glued on top instead of integrated with the language. I've seen many good DSLs embedded in lisp, but a type system is hard to embed in any language because it changes fundamental semantics of the language. Typed Racket is a massive project and it's lacking things like ADTs.

- A major part of Haskell is the standard library, a good chunk of the semantics of Haskell people use on a day to day basis, like monads and etc, are a part of the standard library.


> I'd be missing all the optimizations that can be performed due to purity.

What I'm advocating here is not retrofitting Lisp but embedding DSLs within Lisp. In a DSL you have complete control over the semantics, and so you can do all the optimizations that Haskell does.

> coalton is the closest thing and it's still missing a large amount of the type system.

Note that this was written by one person. You have to distinguish between what has actually been done given the current state of the world and what would be possible if people made different choices. If a tenth of the effort that has gone into implementing Haskell had instead gone into implementing a Haskell-equivalent embedded in CL, that effort would plausibly be competitive with actual Haskell if not superior.


well, a Haskell had been implemented in CL

https://github.com/haskell-lisp/yale-haskell


Yeah! I worked with Robert Smith on porting it to modern CL implementations.

Not a fun codebase.


Rust didn't have to do those things either; parser generators and LLVM meant that much of the work was done for them. So they spent their time on borrow checking.

Now they are reimplementing some of those things for performance reasons, but you're going to have a hard time convincing anyone that a lisp-based Rust you wrote is going to be more performant than the one that exists today, so they'd need to make those optimizations anyway.


Rust does not use (and I’m pretty sure “has never used”) a parser generator.


And even if it does, that misses the point. You still have to put in effort to design and write the grammar. If you embed a DSL with Rust's semantics within Lisp you don't have to do that. You just use S-expressions and use the Lisp reader to parse them.


You're assuming that creating a fully featured dsl that is ergonomic enough will be vastly simpler. "Just" is doing a lot of work, and hiding complexity, as it often does.


I'm not assuming it, I have actual first hand experience, and a lot of data points from other people's efforts.

You are absolutely right that I'm squeezing a lot of content into the word "just". But I'm not doing it in a vacuum or out of ignorance.


> If a tenth of the effort that has gone into implementing Haskell had instead gone into implementing a Haskell-equivalent embedded in CL, that effort would plausibly be competitive with actual Haskell if not superior.

It's such a wildly ridiculous claim though. It implies either

1. that the syntax of Haskell is the majority of the work, such that using a dsl would remove 90% of the effort.

2. Somehow a dsl for Haskell embedded in a lisp would be more efficient to work in, so much so that development would be an order of magnitude faster.

1 is absolute nonsense. 2 might be true in a sense. I doubt it, but even if it were, such a language would no longer be Haskell, so you'd lose out on the value you gain from its syntax. The same is true for Rust.

S-expressions are nice, but they aren't always the best way of structuring things, and if you're going to forego them to better embed another language, why constrain yourself to the lisp runtime?

The more I look at this, the less sense it makes.


So, where are these languages?

You can create a DSL for a team working on a specific problem. Lisp is good at that. What I have not seen is a DSL that anyone else wants to use.

Someone builds a DSL, a small team uses it. It just has to meet the needs of that team. It doesn't have to be bulletproof or elegant. It can have all kinds of rough corners. It can take undocumented tribal knowledge to know how to use it.

The comparison to Rust is therefore almost completely misleading. It's like comparing Linux in 1992 to a commercial Linux distribution today.


Yes, that is a totally fair point. But Linux is where it is today because some people in 1992 looked at Linux in the state it was in at the time and decided that it had potential. If that hadn't happened, no one outside of a small group of hackers would ever have heard of Linux. The idea that embedding languages in CL is a bad idea is likewise a self-fulfilling prophecy.


I'm not sure that it would be "pretty easy" to write a DSL that adds the constraints and provides the guarantees that Rust does. I'm not sure that it's that easy to get it consistent and bulletproof.

Yes, you can't remove the constraints of Rust or Haskell (other than using unsafe, I suppose). But if those languages have the constraints that you want, then trying to write that as a DSL instead is... probably hubris. Also wasteful.


> I'm not sure that it would be "pretty easy" to write a DSL that adds the constraints and provides the guarantees that Rust does.

It would be a lot easier than inventing Rust, for two reasons:

1. You don't need to write a parser.

2. The target language for your DSL compiler can be Common Lisp rather than machine language without sacrificing performance.

Those two things make it absolutely certain that however much work it is to re-invent Rust in CL it will be strictly less work than inventing Rust from scratch.

There are other benefits to this approach as well. For example: if, after implementing your DSL you discover that you've made a sub-optimal design decision somewhere along the line, it will be a lot easier to tweak your DSL than it is to tweak Rust.


> 1. You don't need to write a parser.

One of the things people really like about Rust is its fantastic error messages. They're very helpful.

But, having decided you "don't need to write a parser" you're in a tough place for your DSL, since there are problems where Lisp's parser will just tell your DSL programmers "something went wrong here" and leave them clueless as to what exactly is wrong in their program. Rust wouldn't have done that, but of course its compiler owns the parser too, so it gets to handle all the corner cases correctly.


You can add descriptive error messages when an error happens (like multiple mutable borrows) during macro expansion. Macros are just regular Lisp code, so all runtime features like printing functions are available.

You could even write an FPS using OpenGL where shooting functions, or pieces of functions, removes them from the current compile job (or even deletes them from the source code entirely). In a macro.


I mean, that's nice, Rust's proc macros get to emit descriptive error messages† too, but for that to happen the macro expansion occurred and that means we didn't have a parse error. So you're dodging the problem.

† Or of course, to deliberately not emit descriptive error messages, e.g. whichever-compiles!


> It would be a lot easier than inventing Rust

And yet rust has been invented, and it seems the CL version is hypothetical.


"Easier than inventing Rust" is very much not the same as "pretty easy", though...

And in a world where Rust already exists, in most cases it's easier to just use Rust than trying to write "that kind of thing" as a DSL in Lisp.


Or maybe using Rust is an instance of the sunk-cost fallacy. It's possible that the net costs of living with Rust's constraints are larger than the cost of re-implementing it in CL. This has undoubtedly been the case for C and C++. The cost of living with C's constraints (or lack thereof) is incalculable, surely billions of dollars, possibly trillions if you add up the costs of all the buffer overflow vulnerabilities that have cropped up through the ages. You could probably write one hell of a CL compiler for a billion dollars.


This is why Apple, Google and Microsoft have finally started to (slowly) move away from them (or deeply push mitigation strategies): with the increase of networked devices, fixing those CVEs is finally making a visible dent in what would otherwise be profits.


I don't think Lisp is as clearly superior to Rust, as Rust is to C.


> The target language for your DSL compiler can be Common Lisp rather than machine language without sacrificing performance.

Uh, how exactly does that work? Which CL impls can e.g. vectorize as well as rustc does?


And yet there's been absolutely no such DSL from CL that's successful. Maybe people just want distinctly different tools for different purposes.


Can you add a proper strong type system to Lisp? Restrictions are not bad in itself.


Common Lisp does have a "proper strong type system". What it doesn't have OOTB is static type checking at compile time. Those two are orthogonal issues. Although with CLOS (object system) you get pretty close. Some compilers like SBCL have type inference, and you can declare types of variables and function arguments. There are also libraries that take this much further to make it work more like a statically typed language. There are even libraries that add things like dependent types and such crazy things.

Note that static type checking can be difficult to work with when using the prominent interactive programming style, since the types aren't always well-defined, or are in flux, while writing and adding new bits and pieces of code, redefining types, adding struct/object members at runtime, etc.

But it's possible, yes.


And it is certainly possible to put static type inference into a DSL embedded within Lisp.


My data is often in a class, structure, or hash table, all of which are vanishingly rare in my code...


I’m curious, but my reading of your comment essentially boils down to ‘the actual data in my code is vanishingly rare’. Am I reading that right? Is the comment actually saying that in any given program most of the data is in some structure, but compared to the rest of the code it is almost insignificant? I’m not judging or contradicting, but if my reading is right, that’s the first time I’ve heard anyone make that point in such a straightforward manner.


I was replying to this:

> In my experience, Common Lisp is a deeply opinionated language, and its most dramatic opinion is that "your code and your data should have identical representations and structures." And for the right problem, that restriction is extremely powerful.

Code in common lisp is almost entirely in a tree structure via nested linked lists. However, when programming in Common Lisp, I don't use this specific data structure very often for things that aren't code. Furthermore many other Common Lisp programmers write software this way as well, which is a bit of a counterexample to the assertion I was replying to.

[edit]

I think an opinion common lisp does have is that code should be easily manipulable as data. As much as scheme programmers like to call CL macros "unhygienic" it's far easier to write a correct general-purpose CL macro than a correct macro in other languages.


Lisp has a lot of functions for working with linked lists, which do double duty as functions for working with parsed code. But optimized Lisp tends to use structs and arrays and hashtables and avoids creating a lot of lists at runtime.


>Languages which impose weird restrictions tend to do so because it allows them to offer (nearly) ironclad guarantees about something else.

Yes, but oftentimes this is more like:

"If we tie you to this bed, you will never have a car accident! You'll also never get lost! This also makes it easier to stay healthy as we'll be feeding you the best food!"

Sure, but I'll also miss all kind of activities I could be doing outside on my own. I'll also get bed sores. And I often want a steak or ice cream, even if it's not ideal, but you insist bringing me this organic gourmet vegan crap.


I think that point of view assumes bad faith.

No one sets out to create programming languages that tie people to their beds. If people add restrictions, it's not to make users miserable.

It's fine if you like being unrestricted, I like that too.

However, when you see people going out of their way to add barriers to their own work, you should assume they reap a benefit that you're not fully appreciating, not that they hate fun and want to outlaw ice cream.


>No one sets out to create programming languages that tie people to their beds.

People can still end up designing such languages, even if they didn't set out with that specific goal.

(Kind of like how you can bring war and division, even if your goal is to bring some religious peace on Earth or "the revolution").

>If people add restrictions, it's not to make users miserable.

That's already explicit in my comment though. They didn't tie the person to his bed to make him miserable but to spare him from car accidents, to help him never get lost, and other such niceties!

The misery is a by-product, not a design goal.


There's a reason they were called "bondage and discipline languages". They created barriers for what they assumed to be your own good. But as Edward Abbey said, "When I see someone coming to do me good, I reach for my revolver." That is, when someone wants to make decisions for my good, they're deciding for me; that is, they're taking the decision away from me, because they don't think I can be trusted to choose the "right" thing. And then I'm supposed to feel grateful for the help instead of resentful for being treated like an idiot or a child!

They mean well. Fine. I still don't have to like it, and I don't have to accept it.


And frankly, in case of a complex program, you can’t be trusted to choose the right thing, as the human brain simply doesn’t scale with program complexity. We need help from the computer to avoid bugs for anything non-trivial.

For example, as has been shown countless times, C programmers are unable to properly free allocated memory, so solutions were made (GC, or Rust's ownership). It is egotistical to think otherwise.


But they can't be trusted to choose the right thing either. Wirth, sitting in his office at a university in Switzerland, has no idea whatsoever what problems I'm facing trying to use un-extended Pascal in an embedded system with memory-mapped I/O. (No, it wasn't my choice to use Pascal without extensions in that environment.)

The point is, you don't have enough information to choose for me. You can't do it. Even if you can absolutely, 100% correctly decide what the "right thing" is in any given situation, you don't know all the situations.

(And if you'd say "the right thing is to not use Pascal in that situation", I'd agree with you. But as a language designer, your language will get used in places where even you may think it's not a great fit.)


I don’t think it takes bad faith to come to GP’s conclusion. Further, it implies a great deal of superiority for you to assume that someone who does not like/enjoy the restrictions of a language does not ‘fully [appreciate]’ the supposed benefits of those restrictions. I, for one, do not enjoy or value the restrictions (or underlying semantic concept) of ownership in Rust. It just means that I don’t enjoy it, I understand the benefits which are claimed to exist for adopting the Rust-centric view of programming. I just don’t think they are worthwhile for me. I’m not really disagreeing with the point you make about language designers intention, I doubt they set out to inflict misery on users, but your assumption about not understanding some alleged benefit is a bridge too far.


The point is not whether they are worthwhile to you, but to the problem space the program solves. Further that includes the context of the organisation/developer that will own it going forward.


I'd probably liken it to traffic controls, like lanes w/lines, stop lights, stop signs and other formalized rules of driving.

Yeah, it might suck that you can get a ticket for treating a red light like a stop sign at 6am in the middle of nowhere. Though this is me speaking from a "likes rust" perspective.


Could you please describe a case where this is actually happening in the languages people tend to imply this is happening in?

Rust has unsafe, Haskell has unsafePerformIO. You might not like how verbose some of the stuff is but all these languages give you a foot gun if you really want it.


That is the usual cowboy programming sentiment that fueled the same criticism against Modula-2 and Pascal.

A well known article, written in bad faith, is "Why Pascal is Not My Favorite Programming Language".

http://www.cs.virginia.edu/~evans/cs655/readings/bwk-on-pasc...

Why bad faith? Most of the criticism he makes against Pascal was sorted out by Pascal dialects.

One could argue that doesn't count, as dialects aren't the main language; yet by 1981, C outside UNIX was itself mostly available as K&R dialects, like Small C.

And besides those Pascal dialects, Modula-2 standard was released in 1978, while Mesa was powering XDE and Bravo at Xerox.


> Why bad faith? Most of the criticism he makes against Pascal was sorted out by Pascal dialects.

That reminds me of Yes, We Have Noticed The Skulls [0].

[0] https://slatestarcodex.com/2017/04/07/yes-we-have-noticed-th...


Great read, thanks for sharing.


However, the author assumes it follows that if you've Noticed The Skulls you would choose to do things differently next time. That's not necessarily so.

Arthur Laffer is an economist. Laffer has helped provide advice to numerous Republicans and Republican projects which of course didn't achieve their stated goals (e.g. Trump's massive tax cuts for the wealthy which Laffer claimed would grow the US economy by 6%).

I'm confident Laffer has Noticed The Skulls, but why should he care? Advice that destroys the US economy but further enriches those who already have more than enough worked out great for Laffer.


Not particularly. The title would be pretty awkward if it were Yes, We Have Noticed The Skulls, Condemned Them, And Promised Not To Do Them Again, but the body of the article describes exactly that. Has Laffer said "mea culpa" and made significant and visible efforts to prevent the skulls from happening again? If not, then that's not what Scott's describing.


Programming is for all kinds of people.

If you felt despair when you read the metaphor of totalitarianism and just want to get away from languages that want to grind an axe on you, and Lisp is not your cup of tea, then Raku and Perl welcome you.


This is a purely emotive metaphor with no technical content?


Yes, and as such perfectly isomorphic to the resistance the programmer feels when working e.g. to satisfy the borrow checker or strict type systems.

This is not about some "technical" impossibility - languages still have an escape hatch like unsafe and "non-idiomatic" ways of coding.

But to focus on the technical and ignore programmer experience ("feelz") in that area, is like equating Idris and Forth because "they're all turing complete and it all ends up as binary code anyway".


It is more like: if you wear a helmet, a seatbelt, and metal gloves, you will survive to tell the story.


Not really, as a helmet, seatbelt, and gloves are passive objects you just wear and continue as previously, engaging their protection only in case of accident.

Whereas working with the borrow checker, or a Haskell-like type system, changes the experience and the method of programming itself, forces certain patterns, and so on.


If they were passive objects, many wouldn't avoid using them unless forced by law.

Using them also changes the experience, and there are common patterns to fake using them.


It’s almost like this is just a shitty analogy that doesn’t fit the original context properly.



