Why Patterns Failed and Why You Should Care (uni.edu)
166 points by matt_d on Oct 6, 2018 | 95 comments



Disappointingly, the article misrepresents the most serious criticism of the GoF patterns:

> we often heard folks say that software patterns existed only because people used horrible languages like C++ and Java

Maybe folks say that. But (at least when they think about it a bit more) what they actually mean is that the patterns from the GoF book, and similar constructs, only exist because people use “horrible” languages. Nobody is seriously disputing the existence and usefulness of functional design patterns. But they are on a higher level, and arguably more useful, than the basic building blocks from the GoF book.

Peter Norvig famously found that most GoF patterns are implicit and not worth talking about in dynamic, functional languages [1]. But he isn’t saying that higher-level patterns are useless or only exist in “horrible” languages. Contrary to what the article says, the common criticism of GoF patterns isn’t an “error”, it’s spot-on. It just criticises something different from what the author thinks: few people seriously criticise the usefulness of Alexander’s pattern language.

[1] http://www.norvig.com/design-patterns/


Design patterns without language support are all instances of the human compiler. Some human has to do all the work.

In a language with no functions or objects, like assembler, functions and objects are design patterns. You use the subroutine pattern and the abstract data type pattern. Yes, I was taught about abstract data types in C, instead of using a language with objects.

This doesn't mean you should forget what functions and objects are if you use a language that supports them. What it means is that the compiler will perform additional checks to ensure that functions and objects are well defined.

So a design pattern is something you have to care about and check for yourself in a simpler language, while in a more complex language it is something you still have to care about, but that the language also cares about and checks for you.


I like the way you expressed this. It takes longer to say, but is less contentious than Norvig’s pithy quote “Design patterns are bug reports against your programming language.”


GoF-style software patterns and the software pattern movement are a misunderstanding of Alexander's work and his pattern language. Software patterns are recipe collections. They may be useful as recipe collections and bags of tricks, but that's all they are.

The only exception is, of course, Richard Gabriel, who studied Alexander's work and read his other books.

Christopher Alexander's foreword and the first part of the book Patterns of Software are a good intro to Alexander's thinking and how it may inspire software.

https://www.dreamsongs.com/Files/PatternsOfSoftware.pdf

ps. Richard Gabriel is the author of Worse is Better: https://www.dreamsongs.com/WorseIsBetter.html


Good grief.

GoF was wildly successful.

Design Patterns was of a time. We were just learning how to talk about design, the design of design. Now that methodologies, heuristics, frameworks, architecture and so forth have become "democratic", it's hard to appreciate the original efforts out of context.

Concretely:

To Norvig's point, GoF style Design Patterns are just a rung in the programming language maturity ladder. Something like: patterns > idioms > libraries > syntax.

Cynically:

Design pattern implementations just add another layer of indirection. On the notion that you can, may, or should defer an architectural decision. (Opinions differ. I lean towards YAGNI.)

Metaphysically:

Mastery is like becoming a jazz musician. First you have to learn the rules before you know how to break them.

--

I started a design pattern study group in the 90s, still going strong today.

It always kinda goes like this:

First reading, mind blown.

Second reading, aha, now I see how this could be useful.

Third reading, meh, it's obvious, why we still talking about this?

Happily, progress marches forward, and there's always something new (or new again) to argue about. Rinse, lather, repeat.

--

The sole legitimate criticism is the exploitation of the design pattern zeitgeist. Here's a relevant documentary on that phenomenon. "Exit Through the Gift Shop" https://www.imdb.com/title/tt1587707

Weaponization, misuse, and misappropriation of an idea happen everywhere. I worked for a guy who called everything a Decorator; he was one of the architects for the Kuali Foundation student project. Took me way too long to figure out he didn't know what a Decorator was. Those conversations were so confusing.


> Took me way too long to figure out he didn't know what a Decorator was. Those conversations were so confusing.

Some of the GoF folks used to say most design patterns were just State or Strategy in disguise.


Ya. Just another layer of indirection. In most cases, it's about intent.

The "architect" in this story conflated Chain of Command with Decorator. Fire brigade vs call thru. And how that impacts exception handling.


It's important to note the difference between the technical mechanism and the motivation for a design. Yes, lots of languages can pass functions around so perhaps you think the Strategy pattern doesn't apply to you. And yet I see huge numbers of APIs that still take enormous option maps and don't offer configuration with some sort of callback, however simple the host language might make it. Patterns capture a design _choice_ as much as a technical implementation.
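To make the contrast concrete, here is a minimal TypeScript sketch (all names hypothetical) of the same capability exposed as an option map versus as a strategy callback:

    // Option-map style: the API has to enumerate every knob in advance.
    interface RetryOptions {
      maxAttempts: number;
      backoffMs: number;
      exponential: boolean;
    }

    // Strategy style: the caller supplies the policy itself as a function.
    type BackoffStrategy = (attempt: number) => number; // milliseconds to wait

    async function withRetry<T>(
      fn: () => Promise<T>,
      backoff: BackoffStrategy,
      maxAttempts = 3
    ): Promise<T> {
      for (let attempt = 1; ; attempt++) {
        try {
          return await fn();
        } catch (err) {
          if (attempt >= maxAttempts) throw err;
          await new Promise((resolve) => setTimeout(resolve, backoff(attempt)));
        }
      }
    }

    // Callers can now express policies the option map never anticipated.
    const cappedExponential: BackoffStrategy = (n) => Math.min(100 * 2 ** n, 5000);

Both are trivially implementable in any language with closures; choosing the second is the design decision the pattern names, independent of the mechanism.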


It is generally true that I usually find myself only using patterns to "fill in" functionality that isn't as easily achievable (or is lacking completely) in a particular language. But being able to do this is incredibly important, and I can think of at least one time in my life when having this capability put myself and my team at a significant tactical advantage. I think you could find a lot of parallels with, for example, AvE on YouTube, who builds the tools he needs when he doesn't have them available. I never heard any of the hype surrounding patterns, so I can't really say whether or not they lived up to it. But I recommend that the young programmers I mentor learn about patterns, and it's been my experience that they are universally better programmers for it, even if they only use them in edge cases.


I don’t know if you watched the video this blog post is responding to, but I really would because I think it’s a far more important contribution to this conversation than the response.

His fundamental point was that design patterns should be about describing knowledge of how different constructs react to each other in combination, and that design patterns as written were mostly about how to build these individual constructs out of just inheritance and methods. Hence the ultimate failure to create a lasting conversation.


I started reading Richard Gabriel's book Patterns of Software just recently, enjoying it immensely. He has it available at https://www.dreamsongs.com/Files/PatternsOfSoftware.pdf

It's very much not a GoF "patterns" book but about more important (though harder to nail down) concepts like "code habitability". It's even got an introduction by Christopher Alexander himself that I appreciated. If "Patterns" have failed, it's at least in part because we've been reading the wrong material.


Interesting book, thanks for the link.

Very philosophical. Interesting read about 'habitability'.


Patterns are just named best practices; you can't argue that they, as a concept, failed. That's ridiculous.

Some patterns solving problems in a C++/Java context don't translate to, e.g., Haskell/OCaml - and that's fine. I don't think it bothers anybody, just as patterns in Ethereum/Solidity smart contracts don't make sense in JavaScript - the context is completely different.

Patterns (named best practices) and anti-patterns (named arrangements to be avoided) are a sign of any maturing thing (programming languages, frameworks, databases, UI, etc.).


> Patterns are just named best practices

Agreed, and this is one of the most important parts of patterns - being able to quickly communicate with someone else. In OO land, explaining to someone that this is a factory or a decorator lets them immediately understand the purpose.

Patterns also haven't gone anywhere. They are simply different for different languages and contexts. If someone sits down at a JS UI project and they are told it uses a one way data flow pattern vs. a two-way binding pattern they will immediately have a clear high level idea of what's going on.

I think people are too quick to get lost in the GoF patterns (and then, mixing in OO hate, miss the forest for the trees). They forget that the GoF said these are common patterns that they saw while creating software. At no point did they say those were the only patterns, or even the best patterns for evermore. For me, the real win of going over the GoF book was a reminder to think at the next higher level of abstraction. To use the example above, Redux is one way to implement the one-way data flow pattern. Decoupling the implementation from the abstract idea allows a higher level of understanding that is also transferable across technologies.
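As an illustration, a minimal sketch of the one-way data flow idea itself, decoupled from Redux (all names hypothetical):

    // State flows one way: action -> reducer -> new state -> view.
    type Action = { type: 'increment' } | { type: 'add'; amount: number };

    interface State { count: number }

    function reducer(state: State, action: Action): State {
      switch (action.type) {
        case 'increment': return { count: state.count + 1 };
        case 'add': return { count: state.count + action.amount };
      }
    }

    let state: State = { count: 0 };

    function dispatch(action: Action): void {
      state = reducer(state, action); // the only place state ever changes
      render(state);                  // the view is recomputed from the state
    }

    function render(s: State): void {
      console.log(`count = ${s.count}`);
    }

    dispatch({ type: 'add', amount: 2 }); // count = 2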


Named patterns rarely have anything to do with best practices, even for the hypothetically optimal context the pattern was supposedly invented to handle.

In fact, seeing code that uses a named design pattern is usually a severe code smell: a sign that people were very confused about how to model the domain problem in an efficient way and fell back to a pattern as a lazy way out.

Using named patterns is the local development equivalent of “nobody got fired for hiring IBM.”


I think this comment has little to do with the article. You say patterns are X and X is useful therefore patterns didn’t fail, whereas the article says that patterns are meant to be Y and X is not as useful as Y so patterns failed. I think what you call a pattern is what the article would call a building block and the article does not talk much about whether or not building blocks are useful.


> Patterns are just named best practices

More specifically, they are named documents of best practices for particular contexts, comprising problems and constraints.


I think that’s not true. You can always apply best practices, whereas with design patterns you have to weigh advantages and disadvantages for each situation. Most of the time, where you could use a design pattern, you shouldn’t (singletons are just the most trivial example).


> Patterns are just named best practices

Not best practices: just common practice; few would argue that the singleton pattern is a best practice. But yes, the naming truly helps communication.


For me, the epiphany about design patterns happened some fifteen years ago, when I was trying to understand and learn the abstract factory pattern, in PHP back then. After constructing all the concrete factory classes and the whole setup, I suddenly realised that, in PHP, it all boils down to a simple:

    $className = 'FooBar';
    $newInstance = new $className();
And other dynamic languages, like JavaScript and Python, have their own mechanisms to dynamically define the class you want to instantiate. This is not about dynamic vs static languages -- it's simply that I then realised that the GoF patterns are not some generic programming constructs; instead they are simply recipes for dealing with certain specific types of constraints arising from a specific set of programming tools. Dynamic languages (or any other tool) have their own constraints, and there are "design patterns" around them.
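For instance, a JavaScript/TypeScript sketch of the same trick (the class name is hypothetical); an explicit registry is used because module-scoped classes aren't reachable by string name:

    class FooBar {}

    // Map names to constructors; `new` works on whatever the lookup returns.
    const registry: Record<string, new () => object> = { FooBar };

    const className = 'FooBar';
    const newInstance = new registry[className]();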


Nnnh, only in the really straightforward case. It's often not the case that you can just `new` what you want with no parameters, and _that_'s what the factory pattern covers.

To take the archetypical example, let's consider our database. We just want something that implements the `Database` interface, but we can't just `new MySqlDb()` because it needs a whole bunch of data to connect - so we end up needing something more complicated which can build and configure something implementing the interface we want without requiring us to know anything about the implementation.

What people mean when they say better languages have this behaviour as just something obvious within the language semantics is: when you simplify it down you don't get `new $className()`, you get a function of `() => Database`. Any language with higher-order functions that can properly represent function types can, instead of building all this cruft to construct factories, just accept that the concept of "something you don't need to give any data to but which returns you a configured X" _is_ a function from nothing to X - and so all the behaviour is "built in".

The code which deals with databases just needs `() => Database` as a parameter, and then your composition root simply provides the right function. Exactly the same expressive power as an abstract factory, but it's so straightforward it doesn't need a fancy name.
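A sketch of that point in TypeScript, with Database and MySqlDb as hypothetical stand-ins:

    interface Database {
      query(sql: string): Promise<unknown[]>;
    }

    class MySqlDb implements Database {
      constructor(private config: { host: string; user: string }) {}
      async query(sql: string): Promise<unknown[]> {
        return []; // imagine a real MySQL round-trip here
      }
    }

    // The whole "abstract factory" is just this function type.
    type DatabaseFactory = () => Database;

    // Consumers depend only on `() => Database`, never on a concrete class.
    async function countUsers(makeDb: DatabaseFactory): Promise<number> {
      const rows = await makeDb().query('SELECT * FROM users');
      return rows.length;
    }

    // The composition root closes over the configuration and hands out the function.
    const makeDb: DatabaseFactory = () => new MySqlDb({ host: 'db.local', user: 'app' });
    countUsers(makeDb);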


You could still do the following in Java:

    Class<? extends FooInterface> c = FooBar.class;
    FooInterface newInstance = c.newInstance();
But people usually choose not to, because it still leaves the construction details up to the caller (and forces the class to have a no-arg constructor). The point of a factory is to create a fully "configured" object, e.g. maybe the factory has a set of specific constructor arguments it uses for all the objects it creates, that the caller then doesn't have to know about, etc.

It's true however that people would simply use higher order functions for this in JavaScript, e.g.

    function createButtonFactory(color, background) {
        return function(text) {
            return { type: 'button', color, background, text }
        }
    }

    const buttonMaker = createButtonFactory('grey', 'blue')
    
and then you can pass `buttonMaker` to somewhere that wants to create buttons by just specifying the text (i.e. some function that will be calling `buttonMaker('hello')`)

This has nothing to do with the language being "dynamic" though, it's about having this specific feature. You could do the same thing in C++ now that it has lambdas.

The contract of the thing that uses the factory is "I need to be able to create buttons by just specifying their text", i.e.

    type ButtonMaker = (text: string) => Button
    function loadGui(buttonMaker: ButtonMaker)
so the factory has to take care of all the other details (if any) to make that possible.


Yes, that may work for that case, but an abstract factory can do things like choose the appropriate underlying class that you don't know of, returning the interface that you do know. In a duck-typed language like Javascript or PHP, it's maybe not obvious unless you test on type for some reason.

Some languages, like Smalltalk, consider every instance of Class to be an abstract factory. If you change #new you can have it return anything, although you should leave #basicNew alone.

Some languages, like Objective C, have a bizarre idiom where you ask specifically for one thing (NSArray?) and get another that implements NSArray's protocol. OK...

Where the usefulness of design patterns really comes into play is in naming the thing. My undergrad brain spent way too long trying to understand what COM's IClassFactory was doing when it would have been nicely named IAbstractFactory or IFactory or simply IClass. By abusing the conventional term, it made it unnecessarily hard for me. It didn't help that COM called the objects that implement IClassFactory "Classes" or "CoClasses". While the designers of COM did a fine job overall, you kind of wonder if they were new to OOP when they were first writing it, and couldn't agree on terms.


When you know in advance which object to use and the object does not require configuration, instantiate that object directly, without a factory or dynamic construct.

The factory is useful only if you are (a) about to wire the object into something complex, (b) creating it based on complicated external rules, or (c) have decided that some group of objects is better initialized all in the same place for code readability (rare). Even in Java, there is no other use.

As in, your PHP code does not replace the factory, and using a factory where your PHP code works is unnecessary overkill.


Agreed; generally, class factories make little sense in languages where a class reference lives at run-time in some form or other. Even some static languages have this, e.g. ObjC.

I would even go with a broader generalization and say that existence of patterns (i.e. things that require a lot of typing) in a given language means the language is underdesigned and/or is behind the times. Which is certainly true for Java for example.


That's pretty cool. But I would prefer code that checks the existence of the class at compile time.


The implementation might be trivial, the design choice is not.


Patterns failed because they aren't patterns. But if the GoF had published a Glossary of Software Things We Find We Use, few would have bought it.

In one sense DP wasn't a failure, because it made its authors boatloads of money. That's usually considered enough, if your expectations are not unreasonable. It didn't revolutionize software development, but software development already was and still is dead in the middle of a continuous revolution, with no end in sight.

That revolution is driven by programming languages getting better at enabling programmers to capture ideas in libraries. Progress mostly happens by existing languages adopting good ideas. Sometimes the ideas have been tried out first in a new language, but as often not. If you have a good language idea, you stand an overwhelmingly better chance of people using it if you get it into a widely used language. As a consequence, C++ has, over the past 20 years, turned into a language that is enormously more fun than its first standardized version. C++11 is more fun than C++98, C++14 is more fun than 11, 17 is more fun than 14, and 20 will be more fun than 17.

The bleeding edge is discovering how to express good ways to control the non-von Neumann hardware whose computational capacity completely overwhelms the pile of cores we are handed. It's not clear that libraries will be able to help very much with that.


Selling bureaucracy is a lovely business.

DPs are actually useful in some situations, but their importance is oversold.


> Selling bureaucracy is a lovely business.

Hence, the profitability of most business software.


Also a bunch of letters that are not enough to ensure any certificate holder can turn on a computer, but hey, it sounds fancy, like CMMI or ITIL.


> The bleeding edge is discovering how to express good ways to control the non-von Neumann hardware whose computational capacity completely overwhelms the pile of cores we are handed

Namely? What non-von-Neumann hardware are you thinking of here? Certainly not the very-much-von-Neumann innards ubiquitous today in consumer or workstation machines, mobiles, embedded/wearable/IoT gadgets, and game consoles, I reckon. So what are you thinking of: custom-designed ASICs, FPGAs, SoCs, or ...?


GPUs have enormously more computational capacity than CPUs, and are very weird. They can be programmed in C or C++, but it is not a natural fit. Near-future chips will have FPGAs on board. Programming FPGAs is much stranger. You can shoehorn them into doing familiar things, but it's wasteful.


> GPUs have enormously more computational capacity than CPUs, and are very weird. They can be programmed in C or C++, but it is not a natural fit.

Neither is C or C++ for most "ordinary" CPUs.


I think ncmncm means “people”. The cutting edge of computational systems organization is finding ways to effectively work with the complex multiagent systems running on wetware.


This is my reading of that comment too. Having a great language is no use if you can’t get the wetware to express ideas in that language.


I find it odd you mentioned libraries and C++ in the same paragraph, when C++ can only deliver libraries in source form and can barely do that.

For instance, all C++ classes are fragile, so their callers have to know their exact memory layout.

And your private ivars and methods, which should be secret, all have to be declared in the header…?


Libraries in source form are libraries. The most useful and used libraries for C++ are all, or almost all, source, and there's nothing wrong with that.

A pattern, in the DP sense, is something you use over and over but can't be put in a library. As the language gets better, the patterns become ordinary library components. Then people start to notice patterns in use of the language plus library, and either capture them in another component, extend the language to enable capturing them, or just rewrite again every time.

So a catalog of patterns is really a list of language weaknesses. For an unfixably weak language like C, a list of patterns might be actually useful over the long term. But ambitious people will have moved on to a more powerful language, leaving the incurious, who will not be interested.


Lack of experience with C++?

I have successfully used plenty of binary libraries throughout the years.


Did patterns really fail? Or did we just out grow them?

For me personally, patterns have always been valuable as a means of communication. Before patterns, one team would talk about "device for when at rest" and another team would talk about "timber and fabric object for relax position". Now we just call it "chair" and it's common. Just like singleton, command, proxy, whatever.

Maybe design patterns simply have done their job.


Yes, creating common terminology is important, and the fact that we're still doing it for basic things simply shows that this field is still in its infancy.

As we all know, naming things is hard, and in many cases it was less than fortunate. E.g. I'm really excited by the concepts of "event sourcing" and "CQRS", but think that those names are quite unfortunate choices. On the other hand we have "singleton", which might not be to everyone's liking as a concept, but I think we can all agree that the name is quite clear and descriptive (that said, I've always been sorry that we didn't go for the alternative choice, "highlander").


Yes, several people I have spoken to said the main thing they got from the GoF book was giving a name to things they were already doing.


The other way to use GoF and related texts is as an inspiration. It's been a while since the last difficult software design problem that I had to solve. But I reread GoF to get a new perspective on what I was trying to solve.

Also, I found that some people tend to take the descriptions of design patterns as a kind of gospel and any deviation from the Right Path outlined therein is heresy. But that's how the application of design patterns leads to failure. These are raw building materials for your design. You always have to shape them to fit your exact needs. Extend them, reduce them, cut them in half and glue the halves back together with their backsides - do what you must to get a clever design that leads to simple code.


> I found that some people tend to take the descriptions of design patterns as a kind of gospel and any deviation from the Right Path outlined therein is heresy.

I think the key aspect which distinguishes "dogmatism" from "reasonable" is whether the choice they're concerned with would cause a communication-error or leaky-abstraction.


I always felt that this was of larger value than the implementation details. The software industry was, and still is, in need of more standardization and common vocabulary.


The main thing I learned by actually reading the Alexander book is exactly how far GoF missed the mark in translating "patterns" into the software domain. They're hierarchical, and span a vast range of scales. If one end of the scale is "Iterator", to match Alexander the other end of the scale would need to be set at "Ecosystem" or "International Community". Less than a quarter of the book would have anything to do with data structures or bytes in a file.

I think there's room for a proper Software Development Pattern Language book, but GoF is definitely not it.


I have found it interesting when I learn of a new trend or convention, which I think is a superset of patterns, only to realize part of the way into learning it that I’ve already used it in places before.

Intuitively I managed to figure out the patterns, but didn’t realize it at the time I was doing it. Once I found that it was considered a pattern, or a best practice, it became concrete.

I think this speaks to patterns as being a mechanism for validating student code, but I don’t think patterns should be the method for conveying problem solving to students.

Instead, much of what I learned from coding patterns early on came in the form of IntelliSense hints from ReSharper. I would write nested loops and it would suggest the alternative LINQ query. Then Visual Studio started doing the same thing and automatically refactored my code. This meant I deconstructed the problem first, and understood the problem, and then could simply figure out the translation.

Perhaps a better way to instill patterns for new programmers is to build more robust code convention and anti-pattern recognition into the IDE, with code rewrite capabilities.

For example, it should be relatively trivial to recognize when local variables are clustered around certain regions of a method. When the IDE's IntelliSense compiler recognizes this, it starts to suggest breaking out a new private method, or at least an internal function. Perhaps include a more robust hint with details about unit testing and why small methods are easier to unit test.

Patterns are a broad concept though so I’m not sure how well this idea would scale.


Wow, I did not know IDEs were making suggestions like this.


Another component of this: _mathematically sound_* patterns do seem to be useful. You can see this in most languages in the shape of list comprehensions, LINQ &c, and in Haskell in Monoids, Functors and so on.

I think the big difference is: you can reason about these constructs at a higher and composable level, but patterns are just tricks of the trade that don’t compose into large patterns.

Seen from this perspective, patterns are folk remedies, re-usable data structures are medicine.

*or at least, close enough that they’re useful.
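To make that concrete, here is a small TypeScript sketch of a monoid; the laws (associativity, identity) are assumed by convention rather than enforced by the compiler:

    // A monoid: an associative combine operation plus an identity element.
    interface Monoid<T> {
      empty: T;
      concat(a: T, b: T): T;
    }

    const sum: Monoid<number> = { empty: 0, concat: (a, b) => a + b };
    const joined: Monoid<string> = { empty: '', concat: (a, b) => a + b };

    // Because the structure is lawful, one generic fold works for every instance.
    function fold<T>(m: Monoid<T>, xs: T[]): T {
      return xs.reduce(m.concat, m.empty);
    }

    fold(sum, [1, 2, 3]);     // 6
    fold(joined, ['a', 'b']); // 'ab'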


List comprehension functions are just that: functions. Monoids, Functors, etc. are type classes, a semantic element of a language. List comprehension syntax is a DSL with some specific capabilities.

None of that is a pre-made structure one would impose on code. Those are all functionality that the language makes available for you to use when desired.


I’m not sure how you think a typeclass differs from a pre-made structure you impose on code. The average typeclass has parts that are enforced by the compiler and “laws” which aren’t.

The functionality you describe is provided only when you conform to the pattern required by the library/compiler. I don’t think the division is as cut and dried as you set out.


You can't not use patterns, period. You can only be intentional or accidental in how you apply them, and careful or careless in your intentional application as well.

The problem with patterns is, as with most things in tech, relying on them like they are magic and using them to show off rather than simply being pragmatic. The other big problem is viewing the GoF book as the bible and the end of the discussion about patterns rather than just the start of a conversation. The GoF list of patterns includes a bunch of workarounds for limitations in Java which are simply not applicable in other languages. It also missed plenty of patterns that are useful in other kinds of programs. And it can give the impression that any decent system needs to cram in every conceivable pattern, when the reality is that most of the time you're going to be heavily leaning on a tiny handful of patterns.

Edit: some other issues at play here are the failure to identify and document existing patterns, especially bad or less good ones (like the big ball of mud), and the failure to develop a culture of using pattern language to describe existing codebases.

Also, it's a bit rich to talk about patterns "failing" when every modern language leans heavily on iterators, decorators, commands, delegates, facades, and factories. The heavy lifting blue collar patterns are out there doing the work without getting the recognition.


> bunch of workarounds for limitations in Java

While I agree with you, I just want to point out that the GoF book was written before the release of Java (and its rise to popularity), and that the book uses C++ and shows a bunch of workarounds for limitations in C++, not in Java.


I'll go further: it seems as though, when Java was designed, there was express consideration that design patterns would find clear expression and use in Java. That is, it is not the case that design patterns suit Java because of uninspired design, but because Java was designed to support programs making heavy use of design patterns.


Because it's much easier to collect a baseball card than to play professional baseball. It should have been about understanding the collective experience of programmers, not about showing dominance through having a collection.

The collector mentality seems to have gripped algorithms. You shouldn't read algorithms to be able to regurgitate them. If you can apply an algorithm, that's good. But if you can learn from the techniques used to construct and analyze them, that is the underlying knowledge.

Collecting baseball cards is a fine hobby. The point of the analogy isn't that collecting things or facts is bad. What's bad is the mistake of thinking that collecting by itself gives you the skills of a practitioner.

Feynman's knowing anecdote:

https://blogs.ubc.ca/edutara/feynmans-knowing-anecdote/


Patterns (in the GoF sense) failed due to the over-creation of new “patterns” that were not general solutions to a general class of problems.

They failed because many developers decided that the correct way to design software was to choose patterns and then build their software around them.

They failed because they were given excessively complex definitions that meant they were woefully misused (the “Singleton” pattern is a particularly terrible example).

They failed because people tried to use them as something other than a general vocabulary to discuss software.


Patterns didn't "fail"; they just became part of the background, like everything else in this industry eventually does. We're all so busy chasing the latest shiny new fad that once some "thing" is no longer "the new hotness", people quit talking about it, writing blog posts about it, writing books about it, etc. But if it was useful to begin with (as patterns were), people just quietly continue using $whatever like it was always there.


In my opinion, patterns have failed because they were not formalized enough. You need to understand something exactly before you can successfully approximate it.

I personally find type classes (in the sense of e.g. https://wiki.haskell.org/Typeclassopedia) to be a much better replacement.

Related to that, being able to describe things to the computer is very useful for compilers etc. Patterns are a language intended to aid computer programming, but usable only for human consumption - inevitably they had to be a failure.


Patterns failed because almost nobody does any actual software design worthy of sharing. There are very few architectural decisions to make when you're working with a modern web framework, for example. That's how 99% of us live, day to day: most programming involves basically no design at all.


I am not sure if I am in the 1% or if you are wrong in a foundational sense of “you have the latitude if you'll use it.”

Like, in the past week I had to add a feature to a front end that is heavily based on jQuery and its ecosystem, so lots of variables that are module-local but otherwise global, and every modification to that globalish state needs to update all of it consistently. I introduced maybe 80 lines of code and comments to define a Model as an immutable value with a list of subscribers to notify when that value changes, a Set method to change the value and update the subscribers, methods to subscribe and unsubscribe easily, and another function which multiplexes a bunch of models into one model of the tuples of values.
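Roughly this shape, as a TypeScript sketch of what those ~80 lines could look like (all names hypothetical):

    type Listener<T> = (value: T) => void;

    // An immutable value plus a list of subscribers notified on every change.
    class Model<T> {
      private listeners: Listener<T>[] = [];
      constructor(private value: T) {}

      get(): T { return this.value; }

      set(next: T): void {
        this.value = next;
        this.listeners.forEach((listen) => listen(next));
      }

      subscribe(listen: Listener<T>): () => void {
        this.listeners.push(listen);
        return () => { this.listeners = this.listeners.filter((l) => l !== listen); };
      }
    }

    // Multiplex two models into one model of the tuple of their values.
    function combine<A, B>(a: Model<A>, b: Model<B>): Model<[A, B]> {
      const both = new Model<[A, B]>([a.get(), b.get()]);
      a.subscribe((va) => both.set([va, b.get()]));
      b.subscribe((vb) => both.set([a.get(), vb]));
      return both;
    }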

The result plays nice with that jQuery globalish code but it's terser and more organized, “define the state, define how updates must covary in one place.” But I can also see that it is not quite structured enough: it lacks functional dependencies which would structure the state more, “you select a ClientCompany in this drop-down and that wants to update the ProductList because each ClientCompany owns its own ProductList,” not because there happens to be a subscriber which has that responsibility. Also means that there is a sort of eventual consistency in the UI which was always there but now I may have an approach to remove it.

So I think that I have a good deal of latitude to try new high-level structures for my code, but it's possible that I just happen to be in a lucky place where I have that freedom.


I mean, that sounds very much like react + redux, which is where I’d recommend you start if you were building the thing you just described from scratch.


Right, I wouldn't dispute that. If I wanted to rewrite the 15k lines of code in this application (which is what, 500 pages printed? two books?) I would probably use react+redux and could maybe even eliminate half of the code when I was rewriting it.

The problem is that that still comes out to ~250 printed pages, so one book, so that's an investment of two months to create no obvious business value, and I think if I could take that, I would actually be part of that 1%. But the point of my post was just to give an example that we can make smaller architectural decisions all the time to clean out crap and make our lives easier, and nobody is going to look at the ~2 hours you spend cleaning as wasted time, since it causes them to get a more-correct product sooner.

Another example: I remember at IntegriShield we had an API written in PHP, and one of my favorite little things I had written was a data model. ORMs are not hard to find in PHP but because the data model we were using was JSON we could express inside of that data model a declarative security model for the data and it would get written into the SQL queries: you say "Give me all of the groups!" and it rewrites that to, "I will give you all of the groups that you can see." The logic for the group-editor does not need to explicitly handle the checks for "can this person really edit that group?" because the data model will check it for them, "UPDATE groups SET values WHERE id = (the group you are editing) AND (user can edit the group)."
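A sketch of the idea in TypeScript (table and column names hypothetical; the data layer appends the permission predicate so callers never write authorization checks themselves):

    interface User { id: number }

    // Build an UPDATE whose WHERE clause carries the security model: if the
    // user cannot edit the group, the statement silently matches zero rows.
    // Assumes the keys of `values` come from the trusted data model, not user input.
    function updateGroup(
      user: User,
      groupId: number,
      values: Record<string, unknown>
    ): [string, unknown[]] {
      const assignments = Object.keys(values).map((col) => `${col} = ?`).join(', ');
      const sql =
        `UPDATE groups SET ${assignments} ` +
        `WHERE id = ? AND EXISTS (` +
        `SELECT 1 FROM group_editors ge ` +
        `WHERE ge.group_id = groups.id AND ge.user_id = ?)`;
      return [sql, [...Object.values(values), groupId, user.id]];
    }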

Adding the first security type was maybe half a day's work threading stuff through the SQL generator? Adding subsequent new checks took more time but was incremental so each of them might have delayed their projects 1-2 hours. But the net result must have saved a tremendous amount of programming. I have always had that latitude to create structure, if I want it.

That said, I have been pretty lucky with the places I've been privileged to work, so maybe I'm already part of the 1% and this is not representative.


Yes, and 'reactive programming' is the exact sort of architectural choice that is good to capture with a name and a clear context and motivation. The system works, folks.


Re-read “Object Oriented Programming Is An Expensive Disaster Which Must End” and you’ll see it is making a similar point, but with more of the history of how the idea of Patterns developed and then faded:

http://www.smashcompany.com/technology/object-oriented-progr...


Is that a parody?


GOF is one of the most useful books I’ve read in my software career (now spanning over 20 years), and I’d recommend it to any beginning developer.

Design patterns did not “fail”; they’re ubiquitous in software. At worst, you can say that the terminology has changed because our industry has the attention span of a caffeinated ferret.


GoF changed the terminology, too. To quote Peter Norvig: "Before the Gang of Four got all academic on us, ``singleton'' (without the formal name) was just a simple idea that deserved a simple line of code, not a whole religion."

I recommend any new developers to familiarize themselves with dynamic languages (hopefully beyond JS) and to consider the idea that design patterns are just missing programming language features, and different languages will have (or not have) different patterns based on their feature set. (Norvig, again, for some intro material: http://norvig.com/design-patterns/)


But even then, I would not use a dynamic language for a large project expected to live long, nor for the kind of project that is currently done in the "enterprise".

So even in cases where a pattern is a workaround for a language not being dynamic, changing the language just for that might not be what you want.


Of course there are always tradeoffs to make when choosing a language, especially if you expect "enterprise" scale. Others will choose (and have chosen) a dynamic language, even if you don't; many dynamic languages can scale to mega sized systems just fine. (Here's an interesting talk from a while ago from a startup thinking about building for a scale beyond what startups normally deal with, and how Clojure was a nice fit for them: https://www.youtube.com/watch?v=BThkk5zv0DE)

Changing languages also has a bunch of tradeoffs, but in mega-sized software you ought to have already solved the modularity problem that lets you get that big in the first place. Being well modularized, you should be able to introduce a new language without too much impact on the rest of the systems. I used to be pretty pessimistic that more interesting languages could ever be introduced to BigCo and survive, but seeing the evolution of JavaScript and Java play out, plus what's going on in mobile with Objective C + Swift and Java + Kotlin, and BigCo developers adapting to all this new syntax (both JS's and Java's latest versions are very much completely new languages compared to what the codebases started in), I'm more optimistic that developers can in fact be taught something new. The real difficulties are political, and only one minor component of that is being able to reassure people that learning is fairly easy and possible.


None of what you wrote makes a dynamic language better for that particular task, or easier to use for such a task. It just makes it possible if you put in additional effort.

Of course big companies use all kinds of languages; what they don't do is use them for enterprise business projects. JavaScript or Kotlin being used for something else in the same company is irrelevant.

The issue is not even syntax; that is why it is no problem to update Java. The tooling around it still works the same; the ecosystem works the same. The issue is any programmer's ability to figure out someone else's code - meaning the importance of errors being visible at compilation time, a trustworthy "find all callers", etc.

Of course developers can learn something new. That does not mean that the new thing is suitable for task they are working on.

Also, framing everyone who disagrees with you as incompetent is an awfully dishonest tactic. Suitable for toxic corporations (and toxic startups), for sure, but not something that should work in any kind of rational workplace.


I'm not making the argument that dynamic languages are better for any particular task; I assume it, but that's a separate issue. Your comment says you would not use a dynamic language for a "large project expected to live long nor the kind of projects that is currently done in 'enterprise'" on the basis that by switching to dynamic languages you save on some pattern cruft. My comment is agreeing: there are other tradeoffs to consider, and I wouldn't necessarily expect anyone to switch languages just on syntax. The second half of my comment was considering another tradeoff, which is "do developers need to learn something new? can they? will they?" and, more importantly for management, "do we need to allocate more time for onboarding?" and the related "does this shrink our hiring pool, and by how much?" Even if weighing those favors choosing (or moving to) a particular dynamic language, of course that might not be enough to change! Tooling is a factor, as you point out, but there are even more tradeoffs to consider.

Big companies do in fact use dynamic languages for big enterprise business projects. I work for one, we (among many, many other big companies -- though I'm not sure what your cutoff point for big company is, we're not one of the two Trillion Dollar Behemoths if revenue is a factor) use JavaScript extensively, if not exclusively. It's not "for something else", the business depends on JS. Take away the mobile apps, ok we can live, take away the JS, um, what's going to talk to our equally important server and database code? (Not to mention do all the important things that are only implemented in JS rather than the backend?)

> Also, framing everyone who disagree with you as incompetent is awfully dishonest tactic. Suitable for toxic corporations (and toxic startups) for sure, but not something that should work in any kind of rational workplace.

Ok? I don't disagree and don't think I've ever framed people that way unless it was a joking reference to Linus Torvalds' git introduction 10 years ago for a Google presentation where he defined those that disagree with him for the duration of the talk as stupid and ugly. I'm not sure if you're on a tangent or something in my comment (or the video I linked?) gave that impression.


Then why pick "developers have to learn something new" as the tradeoff to discuss? That is an odd choice, as I don't recall unwillingness to learn a language ever being an issue. Nor was it a big issue to find programmers for a language that is not obscure.

I did not mean a project by a big company. I mean a large project of the kind people call enterprise, regardless of whether it is done by a big company or by a small contracting company with the freedom to choose its tech (pretty common here).

Yes, mobile and a quick responsive frontend in JS are necessary, but they are also more expensive and time-consuming for producing a larger project. It is harder to maintain them and harder to implement more complicated business processes in them. Refactoring messy JavaScript is much harder than refactoring messy static code.

Hence the popularity of, and all that hope towards, TypeScript and Flow and whatnot.


I picked it because it's relevant to my own experience and what I see in broader tech, maybe your part of the ecosystem is better? (If memory serves the "stack overflow crowd", the people who answer the surveys each year, are better at self-learning, but they're also much younger so there's a forcing element, and I still think represent a minority anyway.) I see that people have little interest in learning new things (whether it's languages, tools, concepts, or history) unless there's a strong incentive to do so, and even then there's resistance to change. Additionally the issue of finding programmers always seems to come up where I look -- I don't think Clojure is any more obscure than Go, yet in the video I linked they still had to onboard 50% of new hires in the language. For some companies that alone might be considered too much cost for the benefit.

If a big company example isn't needed, you might be interested to know that https://www.ptc.com/en/products/cad/elements-direct/modeling exists -- I read somewhere it's made of several million lines of Common Lisp and has been developed over decades.

Agreed that JS refactoring is often harder, though I don't think it's bad enough to say it's "much harder" than e.g. Java, and I'd prefer refactoring a large JS project to trying to refactor a large C++ project. Still, the bar for dynamic languages that JS sets is pretty darn low. Python, Clojure, and especially Common Lisp all do much better on the refactorability metric along with other metrics that people who prefer static typing usually care a lot about (e.g. warnings/errors about trivial misuse before runtime).


Caffeinated ferret is my spirit animal...


I still find patterns useful. I am glad I learned them, and I am super glad that they are now named consistently thanks to those books. They also make it easier to think about structures. When you need undo and redo, knowing about the command pattern makes it easy. When I see the word decorator or iterator in code, I know exactly what it is.

It was useful, and it is still useful, if you work in an object-oriented language on problems that match it.
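For reference, a minimal TypeScript sketch of the Command pattern with an undo/redo history (all names hypothetical):

    interface Command {
      execute(): void;
      undo(): void;
    }

    class InsertText implements Command {
      constructor(private doc: { text: string }, private chunk: string) {}
      execute(): void { this.doc.text += this.chunk; }
      undo(): void {
        this.doc.text = this.doc.text.slice(0, this.doc.text.length - this.chunk.length);
      }
    }

    class History {
      private done: Command[] = [];
      private undone: Command[] = [];

      run(cmd: Command): void { cmd.execute(); this.done.push(cmd); this.undone = []; }
      undo(): void { const c = this.done.pop(); if (c) { c.undo(); this.undone.push(c); } }
      redo(): void { const c = this.undone.pop(); if (c) { c.execute(); this.done.push(c); } }
    }

    const doc = { text: '' };
    const history = new History();
    history.run(new InsertText(doc, 'hello'));
    history.undo(); // doc.text === ''
    history.redo(); // doc.text === 'hello'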


> When you need undo and redo, knowing about command pattern made it easy.

This is an example of a pattern that's only needed in C++-like languages. In a language which supports messages (first-class method calls), undo stacks come naturally.

Note C++ has first-class function calls (function pointers), but not methods. NSInvocation does it all.


Just create a new command pattern implementation that implements the pattern interface. Deploy the factory method pattern to return the correct concrete implementation based on programming language.

Solved.


NSInvocation is an example of the Command pattern.


> When you need undo and redo, knowing about command pattern made it easy

Undo and redo are easy even if you don't know about "command pattern"s. Every computing problem can be solved using a virtualization layer, and that's all the command pattern is. It's not even a pattern, really... it's just the way computing works.


You couldn't get more hand-wavy and abstract than that.

Except maybe if you said "just write some code to take input and produce output, that's all you need".

Yeah, it's a "virtualization layer" (though the usual terminology is "abstraction layer"). But that doesn't explain the specifics of how to implement a layer that does undo/redo. Caching is also a layer, but it's not undo/redo. So it's not like "any layer will do".

The command pattern does explain one way of achieving it.


I'm talking about actual virtualization. Reifying objects in memory. That's completely different from abstraction, and nothing like caching. It's a process that requires interpretation or translation, and not nearly as broad as "abstraction" is.


You keep using this word "virtualization". I don't think it means what you think it means.

In any case, reifying objects in memory doesn't magically solve the undo/redo problem.


The command pattern is waaay simpler. Much much more simple. And takes less memory. And works.


Of course it is a pattern, and of course patterns are largely the way computing works. It is even one of the original patterns in that book. If it seems so ordinary to you, then that is more of a success of patterns than any kind of failure. Nowadays, you learn these ideas just by reading code here and there; they are so ubiquitous and simple and standard. Back then, people complained they were complicated.

Everything in computing is just a way computing works. Functional programming is also just the way computing works, and it still makes sense to study its structures. And eventually someone will come up with names that everyone will "just know".

If everything is a virtualization layer, then I don't find the term particularly useful. It is just the same as saying "a thing".


I didn't say "everything is a virtualization layer". I said every problem could be solved with one, because it is a fundamental capability of computing. It's the same difference between implementing something in hardware vs software.


That is a kind of academic theorising. It is like saying that every problem is solvable on a Turing machine - yeah, true, but practically I won't do it and I need a different solution.


Looking on my bookshelf I see that Design Patterns for Smalltalk is only about 40% of the length of Design Patterns for Java. Design patterns are not much required for reasonable dynamic languages. Interestingly, they don't seem to be needed for functional languages like Haskell either.

All that said, when I did a lot of Java programming over about a 15 year period, I found the Java Design Patterns to be useful.


I skimmed both books several times back in the day, but didn't really have enough experience to know what to make of them. I once used the bridge pattern to write a multi-backend GUI framework, looked it up in the book and all.

32 years into the game my perspective is that patterns is exactly what I don't want in my code...


The GoF book eventually did more harm than good. The outcome of GoF is that patterns are (ab)used to enforce a common code structure, with a reference to the origin (the book) as the legitimation for it. Code samples are good for newcomers to learn and improve from, but more or less "rigid" patterns make it harder to think out of the box. The form of the solution is not relevant, just the solution. If we focus too much on the form we end up in a cargo cult. And there we are.

One of my favorite real-world patterns is PRG (Post/Redirect/Get). While it is an HTTP pattern, it can be described in a language-agnostic way, which is sufficient for a proficient programmer to solve the problem. It also clarifies HTTP and its shortcomings to a degree. It solves a real problem, the context is clear, and it is useful as a pattern to teach.
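As a sketch, PRG with Express-style handlers in TypeScript (route names and helpers are hypothetical):

    import express from 'express';

    const app = express();
    app.use(express.urlencoded({ extended: false }));

    // POST performs the side effect exactly once, then redirects...
    app.post('/order', (req, res) => {
      const id = saveOrder(req.body);
      res.redirect(303, `/order/${id}`); // 303 See Other: the browser follows up with GET
    });

    // ...and GET renders the result. Reloading or bookmarking this page
    // can never resubmit the form.
    app.get('/order/:id', (req, res) => {
      res.send(renderOrder(req.params.id));
    });

    app.listen(3000);

    // Hypothetical helpers assumed by this sketch.
    function saveOrder(body: unknown): string { return 'o-1'; /* persist, return id */ }
    function renderOrder(id: string): string { return `Order ${id} received.`; }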


Are Alexander's patterns influential in the architecture community? I know they get mentioned a lot, but I've never seen anything that was specifically designed by him or students following him.


Design patterns are derived from practice and experience. The linked talk described them as being harvested.

It’s easy to note reappearing patterns. It’s difficult to consider a task, identify a worthy pattern, and then “apply” it as if it were a recipe and not a general postulate that seems to work well in a class of situations.


The best and most useful role that I have found for design patterns is to use them to systematically synthesize the code directly from the design as part of a very well defined methodology.

This is very well described in the MIT OCW 6.005 Elements of Software Construction course, 2008 version [1]: "Most SE courses teach design patterns as a big catalog. Instead, we’re going to learn the patterns that are relevant to moving from behavioral design to code for each of the paradigms.".

The point here is that you don't just pluck a design pattern out of thin air in an ad-hoc fashion, where every GoF pattern is a potential candidate, but rather you first do behavior driven design following well defined steps in one of 3 paradigms (state machine, functional or OO, you can mix and match of course), build a model, and only when you are ready to start coding do you map each key design element to code by choosing from a reduced set of suitable design patterns. The point of the patterns is to make much of the coding almost automatic by directly translating the design to code.

In particular, [2] covers designs in the state machine paradigm (machine as class (Singleton), machine as object, state as object (State), state as enumeration), [3,4] the functional paradigm (Composite, Interpreter and Visitor, variant as class, Facade), and [5] the OO paradigm (relation as field, relation as map, subset as boolean field).

[1] https://ocw.mit.edu/courses/electrical-engineering-and-compu...

[2] https://ocw.mit.edu/courses/electrical-engineering-and-compu...

[3] https://ocw.mit.edu/courses/electrical-engineering-and-compu...

[4] https://ocw.mit.edu/courses/electrical-engineering-and-compu...

[5] https://ocw.mit.edu/courses/electrical-engineering-and-compu...


Writing a book on practical analysis and being both an OO and FP programmer (and architect, whatever that means), I've spent some time thinking about patterns.

I think the key phrase is this: Marick's provocative claim that, as an idea, software patterns failed is various degrees of true and false depending on how you define 'patterns' and 'failed'.

Yes. What we run into again, over and over again, is the difference between human language and understanding and formal languages and understanding. Human languages are mostly spoken, extremely loose, improvisational, and change while we're using them. Mathematical languages are all written, tight and consistent, and stay the same over decades or centuries.

One of the things I learned from the linguists was that written human language, which we mostly think of as language, is in fact a very recent thing -- and once a language gets written, all sorts of other things happen as a result. People start viewing the symbols on paper as having some kind of power that a few grunts and a turn of phrase do not. Somehow they seem more important, more real... but just the opposite is true. Instead, they give the illusion of being just like formal mathematics without actually being so.

(There's a wonderful scene in "The Wire" where two detectives view a recent murder scene and have a conversation using only the word "fuck". Masterful example of the difference between spoken and written language in action.)

The way this plays into patterns failing is that yes, there are recurring situations where the same types of problems come up. At some point, you can mathematically generalize these kinds of problems into a formal pattern of constructs and the formal pattern is less of a hassle than simply continuing to analyze and code, but that's a different concept entirely from saying that these problems are an example of Pattern X. It doesn't work like that. Our brains work like that, but solving problems doesn't.

This also explains the author's observation that students find patterns most useful. It gives them a formal construct, in the computer language they already know, that appears to give them traction on the problem. It explains why people new to programming, architecture, and patterns tend to overuse them. Neither of these groups has any larger context for knowing how to solve the problem, what the language can or cannot do, how patterns fail, and so forth, yet a template for a solution looks to be right in front of them. Why not use it? After all, it's good enough! That's the way we think. We are naturally attracted to the purity of math and are inveterate over-generalizers. We have to be. Otherwise we couldn't get out of bed in the morning.

For those interested in learning more about some of the concepts, here's a Wiki page I wrote up on the book: http://wiki.info-ops.org/?ref=hn


I believe Christopher Alexander's architectural approach could also be considered a failure in terms of mainstream adoption and impact on urban architecture.

The vision of a "timeless way of building" based on participatory design and traditional yet evolving harmonious patterns at every level of scale is beautiful but kind of steamrolled by technical capitalism, division of labor, economies of scale, CAD, etc.

It's still something to admire and advocate, I think.

I feel the same way about computers. In fact computers could be considered an aspect of the general Alexandrian project of harmonious life. They exist in our pockets and homes just like wallets and kitchen sinks. And computing environments are themselves architectural.

Patterns of user interfaces, of social network design, of data representation, generally patterns of how the digital world is constructed—this seems like a more authentically Alexandrian field than patterns of low-level software engineering.

Alexander writes about lived environments, worlds for humans to inhabit, social processes of inhabitation, and how to make sure our worlds are humane, human-scaled, and beautiful. He's not a theorist of technical construction.

That's why he asks, in the foreword he was asked to write for Richard P. Gabriel's Patterns of Software, about programs written using "design patterns":

> Do people actually feel more alive when using them? Is what is accomplished by these programs, and by the people who run these programs and by the people who are affected by them, better, more elevated, more insightful, better by ordinary spiritual standards?

Describing the change of perspective that comes from his way of thinking:

> Two things emanate from this changed standard. First, the work becomes more fun. It is deeper, it never gets tiresome or boring, because one can never really attain this standard. One’s work becomes a lifelong work, and one keeps trying and trying. So it becomes very fulfilling, to live in the light of a goal like this.

> But secondly, it does change what people are trying to do. It takes away from them the everyday, lower-level aspiration that is purely technical in nature, (and which we have come to accept) and replaces it with something deep, which will make a real difference to all of us that inhabit the earth.

Yet:

> But at once I run into a problem. For a programmer, what is a comparable goal? What is the Chartres of programming? What task is at a high enough level to inspire people writing programs, to reach for the stars? Can you write a computer program on the same level as Fermat's last theorem? Can you write a program which has the enabling power of Dr. Johnson’s dictionary? Can you write a program which has the productive power of Watt’s steam engine? Can you write a program which overcomes the gulf between the technical culture of our civilization, and which inserts itself into our human life as deeply as Eliot’s poems of the wasteland or Virginia Woolf’s The Waves?

Just contrast this with the movement that thinks abstract factory is an example of a design pattern.

I believe you could summarize Christopher Alexander's philosophy as opposition to abstract factories.


> The vision of a "timeless way of building" based on participatory design and traditional yet evolving harmonious patterns at every level of scale is beautiful but kind of steamrolled by technical capitalism, division of labor, economies of scale, CAD, etc.

I think the largest issue with making people care about building things is that in the West we already have all the things. Everyone has a real "end of history" outlook that grocery stores and suburbs already exist and someone else is taking care of it, or did take care of it in 1950.



