Static pages with placeholders date back to Perl and the early days of PHP. The original CMS model was to separate back end and front end, e.g. Vignette (and TOGAF guidelines, for that matter). Database-driven CMSs used to generate static pages for display.
These all turned out to be lousy approaches when it came to UGC, personalisation, migration and URL paths, authenticated content, non-technical business users, team authoring and non-trivial workflows (e.g. picture desks), content reuse and contextualisation, and evolving functionality (a split front/back system can require two dev streams kept in sync, like trying to ride two horses at once).
Integrated back and front end systems were the start of solving these issues, and have evolved into SaaS for non-complex use cases. If you look at WordPress and Drupal, they have been adapting over many years to answer these kinds of questions, and you'll see that they are now starting to address multiple front end channels, PWAs and so on.
Old versions of Drupal, for example, allowed pages to be rendered to file for faster delivery. However, it turned out that the number of pages grew towards infinity due to dynamic variants of pages (making the server and file system the bottleneck), and that CDNs simply became a much better way to do this. Likewise, work was started on a materialised views layer to optimise DB calls, until it was discovered that PHP object caching had become very effective and made it redundant.
A static generator could be a good app for a developer's blog, but this is a niche, and it is about the worst choice possible for almost all organisation-based CMS users.
Thanks for that, very helpful. I do wonder if you may be seeing the new architecture as too rigidly identical to the old one, specifically in that the templating/page personalization can now easily be moved to the front end rather than needing to be handled on the backend as in days of old. In this front-end rendering approach, the browser pulls down one JSON blob of possibly dynamically extracted user-customizing data (first name, last name, items in their character loadout, whatever happens to be relevant) and one or more JSON files of static template content, folds the JSON blobs together exactly as one would have done on the server, and presents a fully customized page to the user. That accomplishes the goals you articulated (I think) while keeping the templates static and their delivery serverless, fast, and cheap. Are there things you see that I'm missing in that architecture? (I'm admittedly new to these aspects of the CMS world but trying to understand the issues.)
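To be concrete, the "fold" I have in mind is roughly this (a sketch only; the endpoint shapes and field names are invented for illustration):

```typescript
// Static template JSON, served from a CDN (same file for every user).
type Template = { title: string; body: string };

// Small dynamic blob fetched from a lightweight API per user.
type UserData = { firstName: string; loadout: string[] };

// Fold the two blobs together in the browser, exactly as a
// server-side template engine would have done.
function renderPage(tpl: Template, user: UserData): string {
  return tpl.body
    .replace("{{firstName}}", user.firstName)
    .replace("{{loadout}}", user.loadout.join(", "));
}

// In a real page these would be two fetch() calls; inlined here.
const tpl: Template = {
  title: "Home",
  body: "Welcome back, {{firstName}}! Loadout: {{loadout}}.",
};
const user: UserData = { firstName: "Ada", loadout: ["sword", "shield"] };

console.log(renderPage(tpl, user));
// Welcome back, Ada! Loadout: sword, shield.
```

The point being that only the tiny user blob is dynamic; the template JSON stays static and cacheable.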
I just don't think it's a new approach at an architectural level and I don't see any gains (unlike say, AWS Lambda).
For example, I consulted for a large national broadcaster about 6 or 7 years ago that had created a legacy CMS which rendered XML/XSLT to reduce bandwidth, with the user's browser compiling it to HTML in much the same way you describe. It would have been interesting as a tech demo when it was first created, about 10 years ago, and would have made a good HN blog post. But as a production CMS it was completely unmanageable, prevented them from doing the things they wanted to do at a business level, and grew into a tentacled legacy blob which was extremely expensive to migrate off. It was an embarrassment to the tech management, and they were open to switching tech even if that meant recycling their large in-house dev team. In the end, the cost of their tech experiment, which had gone out of control, ran into many millions.
Developers building CMSs is an established anti-pattern, with a problem space that is quite hard to understand unless you have been working with users at large orgs for a while. Go and look at how many modules there are for WP or Drupal: tens of thousands, each of which will have had months to years of dev time. Each one represents real use cases that organisations had which were not solved by what developers imagined managing and publishing content to entail.
Admittedly naive analysis here, but it seems like there are two distinct classes of issue you are raising: the first is plugin/module availability, the second is data model/schema/information architecture. I may be wrong, but I'm guessing the flaws you saw in the national broadcaster's system were more at the schema/information-architecture level than the plugin-ecosystem level, particularly given their willingness to throw devs at it (a bad schema deeply baked into a system is much harder to patch than a lack of extension modules).

Now that we've reached an era with at least three CMS backends with heavily battle-tested information architectures (I'm thinking WP, Drupal, and Joomla, and there are probably others), it seems like a venture-funded startup like Netlify should be able to propose a viable information architecture (or at least encourage/show customers how to implement one if their system is too flexible to enforce a good IA). Once there's a good IA in place, my (admittedly naive) assumption is that the library of npm packages becomes the front-end equivalent of the WP store, and on the backend you always still have the opportunity to procedurally generate any content or templates you want (potentially using WP or Drupal or whatever as your JSON constructor kit).

I totally hear your concern that random devs building CMSs is a bad idea. That said, a well-funded competitor going after WP with a "more modern" and "more secure" architecture (and well-designed information architecture) doesn't seem like a dumb idea to me, given how much money WP makes and how many objections many people have to its legacy constraints.
The business argument isn't there. Drupal, for example, has over a decade's lead and a developer community and pace of innovation that no private company of any size can match, let alone a startup. They are now competing with the last man standing in enterprise, Adobe. Even IBM and MS aren't really in the space any more. At the lower business end, you cannot compete with WordPress on price and developer availability. For personal use, you would have a very tough job even catching up with the mindshare of Wix and Squarespace, let alone exceeding their products sufficiently to take their market, which is super low value unless you can bag most of it. You mention security a few times. Do you think any privately funded CMS has a dedicated security team covering both core and community code as Drupal does? I will bet you a bridge that a private company's development resources get diverted to adding features at the expense of security, to sell the next upgrade.
As for IA/schema, one of the major reasons to use a dynamic CMS is that you configure your own, because there is no such thing as a generic 'good IA' for any non-trivial content (recipes are probably the closest, but even there most schemas are too limited for professional use). If you are working off a fixed IA, you may as well just use a low end SaaS. Drupal is particularly strong in IA, offering nested object modelling, workflow authoring, and content versioning (now including staged sets of content, so you can version sections or hub landing pages).
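To make the "no generic good IA" point concrete, here is a toy sketch (field names invented, and this is only a fragment) of the nested modelling even a recipe schema ends up needing for professional use. Notice how quickly it goes beyond what a fixed, generic schema would offer:

```typescript
// Hypothetical fragment of a professional recipe IA. Every field here
// corresponds to a real editorial need: substitutions, media variants,
// workflow states, per-market contextualisation, versioning pointers.
interface Ingredient {
  name: string;
  quantity: number;
  unit: string;                  // "g", "ml", "to taste", ...
  substitutions?: Ingredient[];  // nested: test kitchens track alternates
}

interface RecipeStep {
  text: string;
  timerSeconds?: number;
  media?: { assetId: string; crop: string }[]; // picture-desk crop variants
}

interface Recipe {
  title: string;
  locale: string;                // per-market contextualisation
  ingredients: Ingredient[];
  steps: RecipeStep[];
  workflowState: "draft" | "subedited" | "published"; // team authoring
  revisionOf?: string;           // versioning: pointer to prior revision
}

// A minimal valid instance, to show the shape in use.
const example: Recipe = {
  title: "Toast",
  locale: "en-GB",
  ingredients: [{ name: "bread", quantity: 1, unit: "slice" }],
  steps: [{ text: "Toast until golden.", timerSeconds: 120 }],
  workflowState: "draft",
};
```

And that is before access control, scheduling, translation workflow, and the rest. A generic IA either omits most of this or forces everyone through fields they don't need, which is why configuring your own model is the point of a dynamic CMS.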
You then need to start adding considerations like BigPipe, or offline-first access with eventual consistency for edits, and how all of these interact with access levels for granular pieces of content which may sit within pages depending on who's viewing, and likewise contextual content. These are just a few things off the top of my head which CMSs start needing to provide.
If I were creating something in this space, I might look at small serverless modular apps performing simple use cases in a very polished way, which could be used alone or very easily integrated with the major CMSs (including building, maintaining, and promoting those integrations). But it's possibly better to look at SaaS layers which sit on top, address digital experience, and orchestrate content from multiple backends across multiple front end channels, e.g. gamification, recommendations etc. There's a lot of money there and few open source solutions. (Even something as simple as Buffer, which just publishes from its backend to the major social media channels, has very little competition and is quite expensive. I'd imagine something which could simply manage a flow of content to and from social media, sites, and apps could do well, even before you started adding any intelligence around that content and your users.)