I think formalization and abstraction are two different things, and you conflate them in your comment.
Formalization, in my view, is about "knowing what you are talking about". So you have a perfectly defined set of basic operations on the concepts you use, and you know what you can do with them and what happens if you do; you cannot cheat by making stuff up as you go along.
On the other hand, abstraction deals with "how general I can make this thing" (AKA reusability).
What I mean by "formalization has won in mathematics" is that when you write a paper, all concepts you are using have to be formalized, or at least it must be obvious how they can be, typically through a shared context of standard theories (so you don't have to use formalism everywhere, although it would certainly be desirable and eventually I think it will happen).
But that doesn't mean you always have to use the maximal level of abstraction. It's perfectly valid mathematics, for instance, to prove a theorem just for Euclidean spaces even though nothing stops you from proving it for general topological spaces. (And I think the Bourbaki group learned about the cost of abstraction the hard way.)
The same is fine with code - if you think abstraction hurts it, absolutely go for a lower level of abstraction. But it will hurt the reusability of said code, no doubt.
But if you want to talk about some reusable concept, you should be able to show the code at the correct level of abstraction (or at least be able to show that you could).
So that's my beef with software design patterns - they are abstractions, but they aren't formally defined. It's like wanting to have your cake and eat it, too. They are at best transient thoughts ("funny, this thing here looks awfully similar to that thing there - how so?") before somebody abstracts them formally. And I cringe when people think that they help communication, because in mathematics, we already went through that.
I was using abstraction in the sense of "lambda abstraction", like a formally specified function with parameters.
Well, I had to remind myself what the actual design patterns "movement" is, and I get the impression that it's a dead movement with no development since the classic book and very limited influence.
It seems that maybe the bigger problem is that there is no "theory of software" aside from a few interesting attempts like "algebra of programming" etc. That's why I can't really blame people for using software concepts that aren't formally specified.
What's your favorite alternative to design patterns - or the path you think is more fruitful?
I think having very strong foundations is very comforting, and the separation of side effects (and introducing them into the type system) has important consequences for composability. But it is still hard and a work in progress.
One of the subtler criticisms I have of the classic design patterns is that they often rely on interfaces as descriptions of objects (in the general sense). But an interface is a very weak description, and indeed, the most useful type classes (which are comparable to interfaces) in Haskell also have laws (not expressible in the type system, for better or worse) that describe behavior. This was a point made by Erik Meijer: what you really want is to describe the objects (that your abstractions can work with) by the laws governing them, not by interfaces.
So yes, I think the type classes with laws that are commonly used in Haskell (which is pretty much a direct application of mathematical concepts) are the way forward, because they are both better defined and more fruitful (thanks to being explicit about laws) than design patterns.
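To make the "type class plus laws" point concrete, here's a minimal sketch using the standard Monoid class. The laws live only in comments and documentation (they are not checkable by the type system), but they are exactly what makes the abstraction usable; the checks below just spot-check them on one instance:

```haskell
-- Monoid's laws, stated informally (not expressible in the type system):
--   mempty <> x == x                     (left identity)
--   x <> mempty == x                     (right identity)
--   (x <> y) <> z == x <> (y <> z)      (associativity)

-- Spot-checking the laws on the list instance (lists under concatenation):
checkLeftIdentity, checkRightIdentity, checkAssociativity :: Bool
checkLeftIdentity  = mempty <> [1, 2, 3] == [1, 2, 3 :: Int]
checkRightIdentity = [1, 2, 3] <> mempty == [1, 2, 3 :: Int]
checkAssociativity = ([1] <> [2]) <> [3] == [1] <> ([2] <> [3 :: Int])

main :: IO ()
main = print (checkLeftIdentity && checkRightIdentity && checkAssociativity)
```

In real code these law checks would typically be QuickCheck properties over arbitrary values rather than three fixed examples, but the shape is the same: the interface gives you the names and types, the laws give you the meaning.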
In particular, I suspect the whole field of group theory is still waiting to be discovered by programmers; I had this idea of looking at error handling and recovery, or transactional processing, as group operations within a certain subgroup (of valid system states). So a no-op or an empty transaction is the neutral element, and recovery from an action or a transaction rollback is an inverse element in some group. I am not yet clear on whether this is helpful or trivial, and I am not really sure how to proceed with it. But Monoid abstractions are already widely used in Haskell, so why not Group ones?
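A rough sketch of what that could look like. The `Group` class, `Delta` type, and the balance example below are all my own hypothetical names (although a `Group` class of essentially this shape does exist on Hackage in the `groups` package); a transaction is a net change, and rollback is the group inverse:

```haskell
-- Hypothetical: a Group extends Monoid with an inverse element.
-- Group laws (on top of the monoid laws):
--   invert x <> x == mempty
--   x <> invert x == mempty
class Monoid a => Group a where
  invert :: a -> a

-- Toy model: a transaction on an account is a net change to the balance.
newtype Delta = Delta Int deriving (Eq, Show)

instance Semigroup Delta where
  Delta a <> Delta b = Delta (a + b)

instance Monoid Delta where
  mempty = Delta 0                       -- the empty transaction is neutral

instance Group Delta where
  invert (Delta a) = Delta (negate a)    -- rollback undoes the change

main :: IO ()
main = do
  let deposit = Delta 100
  -- applying a transaction and then its rollback is a no-op:
  print (deposit <> invert deposit == mempty)
```

Whether this buys anything beyond the monoid structure in practice is exactly the open question above, but the laws at least pin down what "rollback" has to mean.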
I'm super interested in all that stuff, but I also feel that it's not for everybody. Like just yesterday I saw someone on the Rust subreddit who mentioned that they think Haskell is cool but it tends towards too high abstraction levels for their taste, which I totally understand.
> You can do the same things as in Lisp, with type checking
Can't easily do even simple things like variable argument functions, or polymorphic lists (required for representing source code for macro programming).
Lisp has loops, variable assignment, stateful objects of various types and so on. Also, strict evaluation for straightforward, deterministic sequencing.
I only touched on that point, but as I said - I used to think the same, but then I actually studied it and changed my mind. You can do all that in Haskell too, including interpreting homoiconic code and building DSLs (which was my main point), and it's all built on more solid foundations with integrated static type checking. It just works quite differently.
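For instance, the "polymorphic list" Lisp uses to represent source code becomes an ordinary recursive sum type in Haskell. A minimal sketch (the `SExpr` type and `eval` function are my own illustrative names):

```haskell
-- A Lisp-style source tree as a recursive sum type, not a heterogeneous list.
data SExpr
  = Atom String
  | Num  Int
  | List [SExpr]
  deriving (Eq, Show)

-- A tiny interpreter over that representation, as one would write for a DSL.
-- Unknown forms evaluate to Nothing instead of crashing.
eval :: SExpr -> Maybe Int
eval (Num n)                  = Just n
eval (List (Atom "+" : args)) = sum     <$> traverse eval args
eval (List (Atom "*" : args)) = product <$> traverse eval args
eval _                        = Nothing

main :: IO ()
main = print (eval (List [Atom "+", Num 1, List [Atom "*", Num 2, Num 3]]))
-- evaluates (+ 1 (* 2 3)), i.e. Just 7
```

The difference from Lisp is that the set of node shapes is closed and checked: the compiler knows every `SExpr` is one of these three constructors, so `eval` can't be handed something it doesn't understand at runtime.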
It's not about that. Haskell has tools to do all that, and they are better. You can write imperatively in Haskell, you can write strictly or lazily in Haskell, you can have dynamic types in Haskell. There is actually more choice.
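To back up the claim about imperative style: mutable variables and strictly sequenced loops are ordinary library code in Haskell, via `IORef` in the `IO` monad. A minimal sketch:

```haskell
import Data.IORef (newIORef, readIORef, modifyIORef')

-- Imperative-looking Haskell: a mutable counter with strict, sequenced updates.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)                             -- mutable variable
  mapM_ (\_ -> modifyIORef' counter (+ 1)) [1 .. 10 :: Int]  -- a "loop" of assignments
  total <- readIORef counter
  print total
```

The point is the one above: this is a choice, not the default - the same program can mix this style with pure, lazy code, and the type system keeps the two apart.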
However, one (unfortunately) needs to study it to see it. I like Lisp, but curiously enough, it has its limits too.