MVC frameworks aren't dinosaurs but sharks (david-dahan.com)
318 points by j4mie on May 9, 2022 | 248 comments



> In recent years, using an MVC framework was not an option as soon as you wanted a dynamic front-end. Having to reload a full page for every user action leads to a bad user experience and is, indeed, not acceptable in 2022.

When it's needed, fine.

But there are so many SPAs that just don't need to be. Rendering HTML server-side is just nice. It's simple and easy to work with. Weird states are unusual. The user gets a really normalized, predictable experience. And done well/quickly, there's little actual difference.

Maybe I'm just showing my age. This falls really close to my "I miss the old web" type sentiments. The last thing I built, and the next thing I will build, are purposely doing full page renders for any actions (no jQuery or React/SPA stuff). I don't think I'm going to have any javascript at all. It's really fun and lean. I don't know, it's just got me going lately--reminds me of when I was starting out.

My last project is 300 lines of python, 270 lines of php, html/css of course, postgresql, and cron. Some scrapers run, ingest data, and the php sorts and displays it.


Rendering server-side HTML is fine, as long as your application is stateless. Then it's really easy. However, when your application becomes more interactive and thus state-driven, is it really easier to do it server-side? Remember, the primary function of modern frameworks is to give you a declarative way of creating your UI by writing it as a function of state. Things get messy really quickly once you start to build something more complex.

Also, is it really easier to do everything on the server? I don't think it is by definition. You could argue that a stateless REST API with client-side state management is less complex than doing everything server-side. It's a much more scalable solution, and creating interactive user interfaces is very easy.

Of course, it could be that interactive user interfaces are not something you are aiming for. That's fine. But do understand that most applications actually end up being very interactive.


Yes, it is. That's what databases and POST data are for, and it's all incredibly quick.

Almost all applications that are written in the world today are glorified forms.

That's reality because 95% of apps are written for the business world. Most don't really do much apart from take some form data, apply business logic to it and show the results/send some messages.

Even most consumer apps like Facebook/Twitter/TikTok/Pinterest/etc. are basically glorified forms.

You can save/load state in nanoseconds, and any competent programmer can easily aim for 50ms page loads, well under human detection.

This is incredibly basic stuff and how it was done for decades before SPAs.


You have rose-colored glasses for the past. Multiple-megabyte state cookies were very common in business applications written in .NET WebForms, because even simple CRUD business applications have a lot of state to carry around that nowadays is simply managed in-memory by JavaScript. 50ms load times were out of the question; you were lucky if you could do it in 500ms. No "competent programmer" could do anything about this because WebForms didn't let them. As soon as you needed to filter one dropdown based on another dropdown, you needed to write custom JavaScript anyway.

Writing a stateful front-end that just asks for what it needs from a well-built API when it needs it is enormously simpler than trying to store all state in some server session and having to work around your own framework whenever you need a dynamic dropdown list.

We all seem to have collectively forgotten how awful web frameworks used to be, and how little interactivity the user actually expected. Things are way better now.


WebForms is an unrepresentative example, because it carried all the view state between the front end and the backend and was enormously complicated under the hood to make the programming model easier. Nothing else works like that: normally the state is stored in session storage in memory on the server, and the client just sends a cookie containing the key for its session.


To add to this: Microsoft recognized that third-party control developers were dumping too much info into the ViewState bag, and this was largely fixed in later versions.

It's also possible to store ViewState "on the server side" instead of on the client side. So instead of sending megabytes up and down with every request, the client just gets a unique identifier. The downside is increased memory usage, but that's often better, especially for internal enterprise apps.


Yeah, ironically the way people try to do stateless backends and heavy frontends with JWT is closer to the ViewState mess than normal MVC apps.


The parent's point was that you don't need Javascript for that; some of the state (i.e., the state of the parent dropdown) could be stored in the URL, or in a form variable...which is how things were done in the days before Javascript.

Yes, you would need multiple pages. But that is how most form-based SPAs work today anyway, and avoiding a page reload isn't always preferable to just reloading, if the SPA version requires heavy amounts of JS and the good-old-static version loads in the blink of an eye.
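For illustration, a minimal sketch of that multi-page approach (routes and values are made up): the first page submits the parent selection as a plain GET form, and the server renders the next page with the dependent dropdown, so all the state lives in the URL and form fields, no JS involved.

    <!-- Page 1: submits the manufacturer as a query parameter -->
    <form action="/models" method="get">
      <select name="manufacturer">
        <option>Lenovo</option>
        <option>Dell</option>
      </select>
      <button type="submit">Next</button>
    </form>

    <!-- Page 2 (rendered server-side for ?manufacturer=Lenovo):
         the parent choice is carried along in a hidden field -->
    <form action="/results" method="get">
      <input type="hidden" name="manufacturer" value="Lenovo">
      <select name="model">
        <option>ThinkPad X1</option>
        <option>IdeaPad 5</option>
      </select>
      <button type="submit">Search</button>
    </form>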


> Almost all applications that are written in the world today are glorified forms.

Yeah, but our users like interactive forms. When a widget is changed, they like feedback instantly.

Just about any shop that reloads the entire page when you change a single filter gets annoying pretty fast.

Imagine an online computer store: tick the box for "16GB RAM", wait for page to reload, tick "32GB RAM", wait for page reload, tick "512GB SSD", wait for page reload, tick "1TB", wait for page reload.

Now you may argue you still have to wait if it is an SPA, but then you're only waiting for the results. You can quickly tick all four boxes in an SPA and each one may cause a background request, but that doesn't matter because you don't have to wait for each request to complete to make your next selection.


> Imagine an online computer store: tick the box for "16GB RAM", wait for page to reload, tick "32GB RAM", wait for page reload, tick "512GB SSD", wait for page reload, tick "1TB", wait for page reload.

Tick all the boxes and then click "Apply filter".


> > Imagine an online computer store: tick the box for "16GB RAM", wait for page to reload, tick "32GB RAM", wait for page reload, tick "512GB SSD", wait for page reload, tick "1TB", wait for page reload.

>

> Tick all the boxes and then click "Apply filter".

Many places do that, but it's not as user-friendly as not having to click "apply". If there's a large number of filters (buying a computer on Amazon, for example), the "apply" button is off-screen most of the time; you have to scroll to either the last filter or the first one to even know that there is an "apply" button.

Also, some fields do not work too well with a separate "apply" step. You open the manufacturer drop-down, choose "Lenovo", click apply, and now there is a new drop-down for "Model".

The better experience is one where both those drop-downs exist, and the second one is populated only after a choice is made on the first one. In all the pages I've seen that have drop-down #2 dependent on drop-down #1, those that got a JSON list via AJAX were much more pleasant to use than those that reloaded the entire screen just to add 5 items to drop-down #2.

For the creator of the software, it's more important to have happy users than an easier maintenance burden.
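For contrast, a minimal sketch of the AJAX version of those dependent drop-downs (the element ids and endpoint are hypothetical): fetch the JSON list when drop-down #1 changes and rebuild drop-down #2 in place.

    const makers = document.querySelector("#manufacturer");
    const models = document.querySelector("#model");

    makers.addEventListener("change", async () => {
      // hypothetical endpoint returning e.g. ["ThinkPad X1", "IdeaPad 5"]
      const res = await fetch(
        `/api/models?manufacturer=${encodeURIComponent(makers.value)}`
      );
      const names = await res.json();
      models.innerHTML = names.map((n) => `<option>${n}</option>`).join("");
    });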


You don't need an SPA for that: you can still filter the DOM with JS (vanilla or a reactive framework), save the state in a cookie or the URL, and have the server handle the state.


> You don't need an SPA for that: you can still filter the DOM with JS (vanilla or a reactive framework), save the state in a cookie or the URL, and have the server handle the state.

Doesn't that still reload the entire page when a checkbox is clicked?


Not necessarily: you can update the URL with the History or Location API without a reload. In that case you download the whole list and filter it on the client. The assets (images, documents, icons) can be lazy-loaded, so the browser will fetch them only when visible.
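A minimal sketch of that pattern (element ids and data attributes are made up): toggle visibility of the already-downloaded list on the client, and mirror the filter in the URL with the History API so reload and back/forward still work.

    const checkbox = document.querySelector("#ram-16gb");

    checkbox.addEventListener("change", () => {
      // reflect the filter in the URL without reloading the page
      const params = new URLSearchParams(location.search);
      if (checkbox.checked) params.set("ram", "16");
      else params.delete("ram");
      history.pushState(null, "", `${location.pathname}?${params}`);

      // filter the list that was already rendered by the server
      document.querySelectorAll(".product").forEach((el) => {
        el.hidden = checkbox.checked && el.dataset.ram !== "16";
      });
    });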


I'm all for reducing the usage of frontend-heavy frameworks, but I also agree that as soon as you start with the "sprinkles", you're opening Pandora's box in the long term if you're not using state-based UI rendering (what React, Vue, etc. give you).

Example:

I have an application where a form input expects a URL. As soon as any URL is entered, I need to:

1) Provide feedback on whether that's a valid URL or not, before the user submits the form

2) Show a suggestion for a page title, which is usually the <head> title extracted from that URL. If the user clicks on the suggestion, that should become the value of the next input in the form.

Other than that, this is a pretty basic form. But how do you give a decent experience without JavaScript?

Right, a bit of jQuery would do it in this case... but then more and more similar small tweaks become necessary, and before you realise it you're swimming in spaghetti.
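For what it's worth, that first "sprinkle" could be plain JS rather than jQuery; a minimal sketch (the /page-title endpoint is hypothetical, and some server-side helper is needed anyway, since the browser can't read an arbitrary page's <title> cross-origin):

    const urlInput = document.querySelector("#url");
    const suggestion = document.querySelector("#title-suggestion");
    const titleInput = document.querySelector("#title");

    urlInput.addEventListener("input", async () => {
      let url;
      try {
        url = new URL(urlInput.value); // throws on invalid URLs
      } catch {
        urlInput.setCustomValidity("Not a valid URL");
        return;
      }
      urlInput.setCustomValidity("");

      // hypothetical endpoint that fetches the page and returns its <head> title
      const res = await fetch(`/page-title?url=${encodeURIComponent(url)}`);
      suggestion.textContent = await res.text();
    });

    suggestion.addEventListener("click", () => {
      titleInput.value = suggestion.textContent;
    });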


The idea that we should use React to do a form input suggestion... To avoid spaghetti code?

I can't make that logical jump. If you can't keep that tiny bit of code organized, why do you want to pile on the entirety of React, and why would that be more organized?

Maybe you're saying it's a snowball of these things. I've seen that happen, but it didn't have to.


I agree using React just for that is overkill. And yes, I'm talking about how it snowballs when you later get more requirements: now we need this dropdown to change accordingly, now we want to show validations as you type, now we need to check for email uniqueness before submitting, etc., etc...

But the worst part, in my experience, is that without one of these "opinionated" frameworks such as React or Vue, each new developer that joins the team (after others leave) has a totally different idea of how to organize the code, or they have a hard time understanding the "custom" organization and ideology behind the structure you've had to invent to keep it maintainable.

Again, I'm not a fan at all of frontend-heavy frameworks, but they do solve a real problem, especially in large apps and large teams, or teams with a lot of turnover.

I'd say it's the exact same situation in the backend regarding Flask vs Django, Sinatra vs Rails, etc.: having a set of imposed opinions and structure has long-term benefits as soon as you're not the only one working on the codebase.


50ms is under human detection?

Maybe 50ms is under the "conscious detection" threshold (debatable, IMO...), but it will definitely feel "laggy".


Bit of an off-topic hot take, so sorry in advance, but:

In my experience with hiring, the only reason it has become easier to move a lot of logic and state to the frontend is because frontend developers are plentiful and easier to hire. I'm not dissing them: it's very easy to find great frontend devs that know their tools inside-out and also know the fundamentals of software engineering.

On the other hand, good backend developers are becoming rarer and rarer, and most of the new breed struggles to do virtually anything beyond a cookie-cutter REST API. Very few are knowledgeable about databases, for example. Very few know about deployment. Some of them aren't even familiar with HTTP as a protocol. Any backend dev I interview who is as knowledgeable about their niche as the frontend devs I interview often gets two or three other offers, and we have to raise our offers very often.

I'm pretty sure there are lots of great backend engineers who could do SSR like we did back in 1990-2010 with one hand behind their backs. But there just aren't enough of them. Most of the ones you see will have no idea how to deal with it.


On the other hand, maybe hiring one good backend dev and paying them 2x market rate is still cheaper than having a frontend team to (poorly) reinvent the wheel?


Definitely!

On the other hand... maybe not, for more conservative industries? You don't need a full-blown frontend team to be very productive. A great frontend engineer (easy to find) and a cookie-cutter backend code-monkey (also easy to find) working together and getting paid 1x each are probably as productive as a great backend engineer you're thinking of paying 2x.

But the thing is that the 2x engineer will bring some extra experience, fewer bugs... but that's not what companies are after.

So yep you have a point.


In my experience, server-side rendering becomes really messy once you hit complex state management: shuffling shit between pages, persistent vs transient state, handling reload/resubmit logic, validation. It works for simple stuff, but I think that once SPA tech matures more (ironically, even after all this time, transpilers/bundlers/package managers in JS still have a long way to go to get out of the way, and frameworks are still far from optimal), anything that's not a plain web page should handle UI logic on the front-end.


All code, across all paradigms (frontend, backend, low level, high level, whatever), becomes really messy when you have complex state management.

The solution in each of these cases is the same, though. To simplify the state.

In general, I think a lot of the pain in web development comes from attempting to have shared mutable state between backend and frontend. Shared mutable states are almost always messy. You can get around that by pushing it all to the frontend, or pushing it all to the backend. It's when it's straddling both you're in for a headache.


I agree - but you can't simplify when your tools complicate, in the sense that they intertwine different concerns. As soon as you're not dealing with trivial apps, "pushing it all to the backend" isn't really an option if you want a good user experience, and then you'll have to have logic on the frontend as well. And that's where the pain comes from.


WHAT??? Okay, I am going to bite: state is managed by the browser, and SPAs have been trying to hack around state management ever since they came along. I remember creating SPAs in 2004, when it was called AJAX and there was the debate about HTML or JSON over the wire. The biggest issue was the back and forward buttons; even today that issue is not solved very well by SPAs.

The issue I have is that we had a great idea, refreshing parts of a page with AJAX, and then we went overboard and tried to move the entire application into complex frameworks which add more problems than they solve. We became so scared of screen rendering that we avoided it at all costs. This is bad UX, as users actually get good confirmation that something has happened when there is a screen refresh.


> Rendering HTML server-side is just nice. It's simple and easy to work with. Weird states are unusual. The user gets a really normalized, predictable experience. And done well/quickly, there's little actual difference.

It has also been forgotten that people used to complain about 100ms of latency. Since then, modern users (especially mobile) have been trained to accept horrible latencies that would have gotten you excoriated back in 2000.

100ms of latency is MUCH more difficult to deal with server side than 2-3 seconds of latency. 2-3 seconds is an eternity with modern clouds and modern network connections.


You only need what's appropriate. I doubt missile launch systems are using React or whatever fancy frontend engineers want to use. SPAs are responsive, which is nice, but some projects could better spend their time ensuring the project actually functions.


Sadly, that might not be a great bet. The world of government software procurement is very, very ugly.


Among the people I know who hold this view, there are many software developers and zero other people. Even other very technical people, like data scientists, prefer an interactive interface to a series of pages.


Of the people I know, ranging from programmers to HR people, most of them prefer the series of pages over the interactive interface for business processes. The series of pages makes it obvious that something has happened in response to input.

OTOH, for stuff like Facebook they absolutely prefer the interactive interface.


The business people I know would mostly rather just do their business processes in Excel rather than a web browser :)


Almost everyone I know likes to have an expected response when pressing a button.

In fact, I don't know anyone that doesn't.


This seems like an orthogonal question? I agree that whatever technology one chooses, buttons should be responsive when pressed. A page load is not the only kind of button press response that is possible.


>Rendering HTML server-side is just nice

No, rendering HTML server-side is not nice and never was. Why on earth would you send the markup template for every list item over and over again, instead of sending just the data and a single code template (to render the data on the client)? Suppose you need to show the user a list of books on your web page. Server-side rendering means you need to send the user markup consisting of HTML elements for every book:

    <div>
      <div class="book">
        <div class="book-title">Book 1</div>
        <div class="book-author">Author 1</div>
        <div class="book-description">..description..</div>
      </div>
      <div class="book">
        <div class="book-title">Book 2</div>
        <div class="book-author">Author 1</div>
        <div class="book-description">..description..</div>
      </div>
      <div class="book">
        <div class="book-title">Book 3</div>
        <div class="book-author">Author 2</div>
        <div class="book-description">..description..</div>
      </div>
    </div>
    ...

Don't you see something wrong with this? Why do you need to duplicate the HTML template for every list item over and over again, significantly increasing the size of the page to download, when you can send a single template component like this:

    <div>
      <div class="book">
        <div class="book-title">{book.title}</div>
        <div class="book-author">{book.author}</div>
        <div class="book-description">{book.description}</div>
      </div>
    </div>

and the data in a more compact way:

    [{title: "Book 1", author: "Author 1", description: "..."}, {title: "Book 2", author: "Author 2", description: "..."}, ...]

or more compactly, like this:

    [["Book 1", "Author 1", "..."], ["Book 2", "Author 2", "..."], ...]

or even in a more compact binary-encoded format, resulting in more than an order of magnitude size reduction compared to the duplicated-HTML approach. More size means more traffic for users to download over the network, and users on data roaming will "thank" you for your hundreds of kilobytes instead of dozens.

This is why the server-side rendering approach was flawed from the beginning (not because of page reloading, but because of data duplication and traffic consumption).


This is wrong because you’re not considering the whole picture:

- to turn the JSON into HTML, you not only need the template you mentioned, you also need code to execute that template (potentially lots of it); preact is 10KB ungzipped (other frameworks, like React/Angular/Ember, are way larger), so your HTML solution is already 10,000 characters ahead (by comparison, the markup you demoed is ~90 bytes per book, so you’re ahead until at least ~100 books on the page, without considering the framework, your code, and the following points)

- HTML compresses amazingly well because it’s repetitive so the overhead is less than you think

- HTML stream renders, so you can start rendering the first book immediately; JSON streaming is technically possible, but not built-in, and quite difficult to implement (and it requires even more JS)

- the same KB of JS is way more expensive than it would be as HTML (or images) because parsing, compiling, and executing is slow

- latency is often a larger problem than bandwidth, especially on mobile, so saving bandwidth is less important than (a) streaming and (b) client-side blocking time

- for typical websites, a large number of users visit one page and bounce, so the promised savings once the SPA is live don't materialize

- the naive way to implement a SPA is _full_ of footguns; first of all, you’d naively load the template for all the pages, making the overhead problem even worse, naively you wouldn’t render anything until the JS has arrived, delaying the TTFMP enormously, naively you’d mess up routing, etc.

- it’s relatively trivial to preload/precache HTML pages, including in a service worker to support offline, making performance on par or better than a SPA

I happily work on a huge SPA daily, for context.
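(Re the last point in the list: a minimal sketch of precaching HTML in a service worker, with made-up paths and cache name.)

    // sw.js: precache a few HTML pages at install time, serve cache-first
    self.addEventListener("install", (event) => {
      event.waitUntil(
        caches.open("pages-v1").then((cache) =>
          cache.addAll(["/", "/books", "/about"])
        )
      );
    });

    self.addEventListener("fetch", (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });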


- about HTTP compression (gzip, brotli): it works at the byte/text level and doesn't know about your markup template repetition (where the structure is mostly the same but with small changes everywhere), so it will not be as efficient as manual template-data separation

- about SPAs: in my comment I said not a word about SPAs; that's a more complex layer on top. I am only arguing for the template-data separation approach to decrease data duplication and traffic consumption. And no, you don't need to rewrite your app as an SPA, and no react/preact/javascript frameworks are required. You can keep your server-side rendering and just change the format of what you send to the user. Instead of sending index.html with repetitive HTML markup for every list item, you can basically send a script tag where you loop over the data and build the HTML markup directly on the client:

    <!DOCTYPE html>
    <html>
      <body>
        ....
        <div id="books"></div>
        <script>
          const data = [{title: "...", author: "...", description: ""}, {...}]
          // build the list markup on the client from the data
          document.querySelector("#books").innerHTML = data.map(item => `
            <div class="book">
              <div class="book-title">${item.title}</div>
              <div class="book-author">${item.author}</div>
              <div class="book-description">${item.description}</div>
            </div>
          `).join("");
        </script>
      </body>
    </html>

Yes, it lacks HTTP streaming, but: less repetitive HTML markup -> smaller size -> less time to download -> HTTP streaming becomes less useful.

And again, you can use your favorite HTTP compression (gzip, brotli, etc.) on top of this template-data separation approach to compress even more.


I’ve just tested what you suggest with a page that contains 20 books (no HTML or JS minification, and probably some typos because I didn’t run it).

https://pastebin.com/8CM1eUKQ - the HTML version

https://pastebin.com/ujrai58n - the templated version

Even I was surprised by the result. The original sizes are what you'd expect: 5.3K for the full HTML version, 2.4K for the JS template version. Once Brotlied (default compression), however, the situation changes completely: 127 bytes (!) for the pure HTML version and 197 bytes for the template version. The same holds for Gzip: 213 vs 284 bytes.

I don’t know if Brotli (and Gzip) are somehow optimized for HTML, but yeah it really doesn’t make sense to use templates here.
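For anyone who wants to reproduce this, the whole comparison is a few lines of Node against the two saved pages (the file names here are made up):

    // compare raw, gzip, and brotli sizes of the two variants
    const fs = require("fs");
    const zlib = require("zlib");

    for (const file of ["books-html.html", "books-template.html"]) {
      const buf = fs.readFileSync(file);
      console.log(
        file,
        "raw:", buf.length,
        "gzip:", zlib.gzipSync(buf).length,
        "brotli:", zlib.brotliCompressSync(buf).length
      );
    }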


> Why on earth would you send the markup template for every list item over and over again, instead of sending just the data and a single code template (to render the data on the client)?

I see what you're saying, but I think server-side rendering is simpler, and the concern about repeated HTML tags in the content is likely much easier to address by using gzip over the wire.


Not necessarily, because you have to consider the entire lifecycle.

When doing frontend rendering, you usually work against a REST(ful) API to fetch data, and the idea is to reuse endpoints for multiple frontend components. Because of this generalization, you usually end up sending more data than necessary, not only between the database and the backend, but also between the backend and the frontend.

And not only that, you start doing JOINs over HTTP and JavaScript.

With server side rendering I can write an exact SQL query to render that component.


Ah... calling server-side rendering "flawed from the beginning" is a bit of an overstatement. The use case is a bit convoluted. HTML itself is not what slows down the modern web experience, is it?

Also, in your example, don't forget the code required to render that data, which is often a second fetch and is, you guessed it, UTF-8/ASCII/Unicode. Now we've doubled startup costs or forced HTTP/2.

There's some context to these opinions. I concur that rendering server-side is often much, much simpler, but acknowledge that might be because it came first; and I acknowledge that some of the hate against client-side rendering is because the modern web often feels bloated and JS takes the blame.


> Why on earth would you send the markup template for every list item over and over again, instead of sending just the data and a single code template (to render the data on the client)?

You have a point, but not everything needs to be black and white. We had this nifty technology called AJAX even before SPAs were a thing. I understand that SPAs are built on AJAX calls too, but it is also possible to render (most of) the HTML on the server side and reach for AJAX, via off-the-shelf components such as DataTables, only when you absolutely need that functionality.


Two words: HTTP compression


HTTP compression (gzip, brotli) works at the byte/text level and doesn't know about your markup template repetition (where the structure is mostly the same but with small changes everywhere), so it will not be as efficient as manual template-data separation. Moreover, you can also use gzip/brotli on your template and data, so HTTP compression doesn't change the overall picture.


It changes the overall picture because HTML compresses better than JSON, which itself compresses better than your deduped arrays (compressors exploit exactly the repetition you're trying to remove by hand). So after compression, the HTML would be much closer in byte size to the most compact version you've shown than it appears.


^^^ this. Plus you're probably going to have to render the JSON on the server and then re-render it as HTML on the client, plus send the code and template to do that too.

Also, I know there's the promise of sharing code between the front end and backend, but the reality is that you're more likely to reimplement one or the other and then have to keep them in sync.

The overhead for doing it just the once on the backend and sending compressed HTML over the wire is pretty low for a small team in comparison.


a third word: DIVitis.

but yeah, gzip (and more recently brotli) has been taking care of those things for 25 years....

so HTML server-side has always been nice.

Especially semantic markup cough cough


Do you even gzip transfer-encoding?


The big MVC frameworks are in a great position to take advantage of the move back to hypermedia as a major network model for web development. We are seeing a swing back to this model with modern hypermedia-oriented libraries like Unpoly, Hotwire, and my own htmx, where JavaScript is used to augment the hypermedia model rather than replace it with a client-server RPC-style network model, as most SPAs do.

These older frameworks have been honed to a razor's edge for producing hypermedia (HTML) server-side and delivering it to clients efficiently. As people begin to realize just how much you can achieve with this approach (and the simplicity it brings back to web development), I expect interest in these frameworks to soar.
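To make that concrete, a minimal htmx sketch (the endpoint and id are made up): attributes on a plain HTML element issue the request, and the server's HTML fragment is swapped into the page, with no client-side state model involved.

    <!-- clicking issues GET /contacts/page/2 and swaps the returned
         HTML fragment in place of the element with id="contacts" -->
    <button hx-get="/contacts/page/2"
            hx-target="#contacts"
            hx-swap="outerHTML">
      Next page
    </button>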

Bullish on 2005 tech books!


To be frank, I don't really see this swing back. Maybe it's hip right now on HN to scoff at SPAs and fetishize server-rendered webpages.

Having used NoScript with temporary allowances, it's glaringly obvious how much of the web doesn't show anything without JS enabled. If there's a trend to not make everything render on the client, I'm not noticing it.

Even if there is, we may ultimately end back in the same place we were in, say, 2011 (or ~2014, depending on how you look at it). We could very well end up back in dumping grounds of abstraction spaghetti and bad object-orientation.

The weakness of backend MVC frameworks is that it's way too easy to munge business logic with the view. Yeah, Model-View-Controller suggests a separation of those concerns, but now it's up to the human to follow that convention and... yeah...

SPAs can be just as bad in a lot of ways, but at least they are more suited to dealing with view logic and not so much of the data transaction stuff behind the scenes.

Personally, I often find frontend JavaScript projects easier to figure out, and I think that may be due to MVC being a flawed concept. It might be okay for hammering out a prototype, but in the long term so many things need to happen in order to keep it intact; Rails introduced concepts like "concerns" exactly for this purpose, because an MVC structure in a pure OO language like Ruby is very rigid.

Server frameworks should do themselves a favor, divorce themselves from ideas like MVC, and find ways to create structure that aren't merely meant to scratch the itch of design-pattern enthusiasts. I find it dubious that a concept of a "model" is even necessary as a thing to always think about; it's a weird implementation detail that's treated as if it's a design pattern. And when you're left with just the View-Controller part, it's dubious why they even need a formalized concept in the first place.


While MVC is certainly not perfect, I've had the opposite experience to the one you describe: the days of spaghetti abstraction really began with the JavaScript-centric approach. The emphasis changed from opinionated defaults to completely DIY: everything's configurable, every application is a snowflake.

This doesn't seem any better to me than the Perl+CGI / PHP world that Rails replaced. And we've traded the risk of munging view and model, or breaking conventions, for no guardrails, no patterns, and no conventions beyond the most popular library to show up in the last 6 months.

I've personally seen an old, terribly uncool dinosaur MVC decomposition turn into 50+ microservices, with 10+ front-end apps, each with their own toolchains, libraries, peculiarities and hidden dependencies. That doesn't seem to be a win either. It feels like we are all collectively missing a more integrated approach to development, and eventually the pendulum will swing back in the favor of more comprehensive frameworks.


But the same 'munging' of business logic and view logic happens in SPAs and related. It's just a fundamental problem with MVC that the lines are difficult to demarcate well no matter how you define your layers.

That and "model" is tricky to define well when we're talking about data backed by a relational database, which can (and should) be sliced in any number of ways. I've yet to see a good relational->UI mapping that works without badly fossilizing everything into a static and undynamic OO "model" in between... In fact, this is made even worse in SPA-type systems by having an ORM (or similar) and a REST/GraphQL-type layer in between the view and the DB. Layers and layers of transformation, each stripping out the richness of the relational model.


It certainly can. In my experience, I've seen this happen to a much lesser extent, though I've not worked on as many SPAs as others of course.

> It's just a fundamental problem with MVC that the lines are difficult to demarcate well no matter how you define your layers.

Yes.

> That and "model" is tricky to define well when we're talking about data backed by a relational database

And that's why I find it nearly useless to even make it a "thing", other than perhaps for n00b coders who've yet to touch SQL or write many of their own classes/modules, or who just don't have enough experience working with data. Thinking in terms of models for data can, and I think almost always does, pigeonhole data into moving and existing in ways it doesn't need to. In many cases a formal model makes no sense for the data at hand, and if developers only think in terms of models then everything inherits a ton of complexity that may be of no benefit. It's premature optimization hidden as a convention or pattern.

> I've yet to see a good relational->UI mapping that works without badly fossilizing everything into a static and undynamic OO "model" in between.

At which point there's no reason to use strictly OO or a model paradigm. This is what happens when those highly accustomed to OO realize that their conception of OO is unsafe. Keeping data frozen, static, immutable, or whatever is the right thing in more cases than is appreciated, but even when devs acutely realize it after the fact, they often resist giving up models. If the concept of a model is more of a drawback than a benefit, just get rid of it. I don't care what framework or ORM people are using. Any developer can learn to work with data in a functional way that doesn't involve making data act as an ooey-gooey object that can morph (and get messed up) and take on a taxonomy that wouldn't otherwise exist in the real world.


  > In fact, this is made even worse in SPA-type systems by having an ORM (or similar) and a REST/GraphQL-type layer in between the view and the DB. Layers and layers of transformation, each stripping out the richness of the relational model.
What about GraphQL that is quite literally just your database structure? I.e., verbatim columns and foreign-key relationships.

I have this issue too -- you can make an infinite number of onion-ey layers of abstraction over your data. But at the end of the day, there's only one canonical representation of it -- in your data layer.

This is why I am against marshalling naming conventions across programming languages. If your database has snake_case columns, keep the values snake_case when working with them from API responses in your code. If you change the casing, you now have something that doesn't represent what's in your database anymore.

Want data that doesn't exist in a table? Make a view or call a function.


To be fair, I have zero experience with GraphQL, so I probably should not have mentioned it. I spent the last 10 years in pseudo-embedded land, systems programming, and some purely backend high-throughput systems (ad tech), not doing much "web"-related work. I'd like to play with GraphQL.

That said, GraphQL is not a relational query language; it works in the world of hierarchical/graph databases. I think Date & Codd did a good job of critiquing that model and showing its faults the first time around (in the 1970s).


> I have this issue too -- you can make an infinite number of onion-ey layers of abstraction over your data. But at the end of the day, there's only one canonical representation of it -- in your data layer.

Right, and the amount of effort spent by devs working within layers of abstraction can outweigh the effort they'd otherwise spend working with pure data, even if other issues arise; said issues would be faster to solve without lots of code in the way.

People forget that an abstraction doesn't necessarily make things easier to understand, but this is how many new devs are taught to think about abstraction. Abstractions are only simpler so long as you never need to worry about what the abstraction is doing. As soon as an abstraction misbehaves mysteriously, you inevitably have to open the hood and see what's up with the engine.

It doesn't even matter if the abstraction was made by really smart people with distinguished titles. At every Rails job I had, there was at least one problem with an abstraction in either ActiveRecord or other things like Rails Engines that was totally inexplicable; it was only solved by hours upon hours of mid-level devs digging through framework code until they came up with a monkeypatch.

Part of this also comes from what I believe is a very misguided desire to leave the door open to switching databases. During my first two Rails jobs, senior developers resisted using any raw SQL queries anywhere because "what if we switch to Postgres?". Never did that pan out, and in the meantime devs were forced to do weird things with ActiveRecord to make things work, often inefficiently.

I remember when I was momentarily the most senior dev at one of these jobs, once the lead had left, and I decided to just use SQL where it made sense. Not only did this cut down on lines of code and method calls, but those queries ended up being significantly faster, reducing page load time.

Sure, I didn't go through the trouble of forcing that raw data into an ActiveRecord model, but so what? All we wanted to do was display the data to the user. Why introduce a bunch of ORM gobbledygook for read-only data? Because we might want users to edit articles in some hypothetical future where we just let any rando write stories? lol

That codebase was never great, but after some of those kinds of adjustments we somehow didn't get many, if any, emergency calls after hours anymore. Hmmm...

Abstractions can be great, but I've come to think they rarely have value with persistent data. In a game engine, it can make sense to have model-like objects for things that are ephemeral in gameplay but need to know about one another, and this is usually easier to implement in a custom way because games usually aren't written like web apps.

> Want data that doesn't exist in a table? Make a view or call a function.

Yes.


> I remember when I was momentarily the most senior dev at one of these jobs, once the lead had left, and I decided to just use SQL where it made sense. Not only did this cut down on lines of code and method calls, but those queries ended up being significantly faster, reducing page load time.

That's so familiar. I've been through this more times than I'd like to admit. Often five or six files of ActiveRecord code that could be reduced to a database view.

However, I don't think it's just the desire to change databases; there's also a lot of resistance against using different languages. SQL is bad, it's ugly, it's old. I see the same sentiment against Bash in younger developers too. They don't want to take the time to learn, so they just reject it.


I mean... Bash is actually bad and ugly.

And ... I'm old and that's how I learned that :-)


Bash is bad and ugly… but I just avoided a day's worth of development of a little command-line utility by adding two alias lines to .profile.


> I find it dubious that a concept of a "model" is even necessary as a thing to always think about; it's a weird implementation detail that's treated as if it's a design pattern.

Are you saying you find it dubious that what you're displaying should be predicated on a well-defined collection of data? Because this is what a model is.

If that's what you're saying, I'm not seeing it. Care to expand on it?


For many, the well-defined collection of data is realized in a relational database. In that case, the challenge is mapping this data onto the exact requirements of specific data-entry screens or reports.

With complex business rules that cannot easily be implemented via database constraints or triggers, or where the same rule appears on more than one data-entry screen, there is a justified need for an intermediate "model" layer for business logic. However, many applications out there just don't have this complexity. In those cases, having an intermediate model accessible via a JSON API may be over-engineering: you have to double your mappings, and a new feature requires touching more code in different parts of the application. For applications with simple business logic and screens that closely map to the database structure, this additional layer may not be worth it.


I guess the question is: What's the alternative to defining models? Just passing around DB cursors directly or maybe pulling the results out of a cursor and storing them in a dynamic collection?

I don't disagree that models are pointless if they're not doing anything other than aping the DB. Why waste your time creating another layer of required translation if it doesn't buy you anything?

With that said, I do think there are some benefits to using statically typed models in languages like C# with larger codebases. It seems like it makes it easier to refactor the application code.


As somebody who loves databases, I think the real issue that folks are meaning to critique isn’t actually using a “model.”

It’s using an ActiveRecord-style ORM (or any ORM) without grokking what lies beneath.

A database layer IS a model. It’s just not a class or an object.

ActiveRecord is a really nice trick when it works … but it can create some really performance-killing side effects.

Ruby's DataMapper ORM and its siblings in other languages require understanding both sides (the object system and the RDBMS), but they can let your class/object semantics play nicely with your database.

And just passing around database connections and arrays of hashes can get you awfully far.

But, if you want to not think about the database layer, ActiveRecord-style ORMs are a real win for developer ergonomics.

And that’s part of the win of Rails/Django/etc. You can live in a single mental model (classes/objects with references to each other) and ignore the database layer.

Except when you can’t.

One reason (not a criticism) that NoSQL can be such a win is that the semantics are closer to class/object semantics. So you’re not trying to manipulate data with an abstraction that doesn’t quite fit.

But most of our projects aren't Twitter or Facebook or Google or anything else functioning at galactic scale.


> Having used NoScript with temporary allowances, it's glaringly obvious how much of the web doesn't show anything without JS enabled.

Counterpoint: I generally browse with JS disabled, and there's a strong association between sites for which this is existentially problematic and sites whose content or utility is garbage.

> If there's a trend to not make everything render on the client, I'm not noticing it.

This trend is not so strong as suggested. Far from server-rendered websites being "fetishized", this remains the standard.

I recommend SPA-first development to all my competitors. Crystallising architecture around the front-end is one of those anti-patterns my grandmother warned me about. "You'll struggle to pivot," she said. "Spikes, prototypes and reimplementations are much easier to develop in proximity to business logic and persistence schema." What a wise lady. See also: nosql and microservices.


   "Having used NoScript with temporary allowances, it's glaringly obvious how much of the web doesn't show anything without JS enabled. If there's a trend to not make everything render on the client, I'm not noticing it."
You're not noticing it because JS SPA frameworks have been the new hotness™ for ~10 years.

   "We could very well end up back in dumping grounds of abstraction spaghetti bad object-orientation.[...]it's way too easy to munge business logic with the view. Yeah, Model-View-Controller suggests a separation of those concerns, but now it's up to the human to follow that convention"
This is a problem with modeling and coupling that happens on SPA's as well, unless you've somehow solved that pesky human problem :D The fact that there's an API layer helps....sometimes...but many times your munging of business logic just ends up being in two different layers instead of one.

   "...MVC being a flawed concept. It might be okay for hammering out a prototype, but in the long term so many things need to happen in order to keep it intact; Rails introduced concepts like "concerns" exactly for this purpose, because an MVC structure in a pure OO language like Ruby is very rigid."  
Agree. IMO one of the main issues with MVC was that the layers were somewhat ambiguous, which caused bloated controllers or munged-up models. In the past I found that "splitting" MVC into something more like MVVM, with dedicated backend and view-model layers, resulted in a much cleaner separation of concerns without the need to go full SPA just for the side benefit of having an API layer.

TBH, having worked on multiple high-traffic, large-ish MVC apps and high-traffic, large-ish React SPAs, I'm a fan of "the old way" of building apps. It felt faster, easier, and much cleaner. I can't wait until things like Blazor, Hotwire, and htmx make server-side templating a thing again. IMO it would help clean up the state of the web dev industry immensely. Maybe I'm looking at the past with rose-colored glasses?


Since I need to actually get off HN and get something done for a change, I can't reply 100% to everything you wrote here, but I appreciate your critical response.

> TBH, having worked on multiple high-traffic, large-ish MVC apps and high-traffic, large-ish React SPAs, I'm a fan of "the old way" of building apps. [...] Maybe I'm looking at the past with rose-colored glasses?

That bias exists in all of us, but I wouldn't guess you're looking back with rose colored glasses.

I'm actually more of a fan of "the old way" than I might be letting on. My work for the last 4.5 years has been almost entirely on SPAs, so I'm part of the problem, but I think I've seen the weakness in it as well.

All in all, I'd like for us all to try and avoid complexity from the get-go. That means starting with sites that are server-rendered and to try and not immediately jump to fancy SPAs and other tools. Maybe not everything must be containerized. Maybe monoliths and relational databases are A-OK for most purposes, and scaling up in response to financial success is something that can be handled when the time comes.

At the same time, if we're going through a sanity adjustment as an industry, let's make sure we reexamine the old assumptions that we ran away from in the first place. Otherwise, it's just a pendulum swing.

My main concern about the new/coming generation of tools is whether we'll also see a brawl like the one we witnessed in the Cambrian era of frontend frameworks. A lot of things that seemed like innovations turned out to be headaches, some even universally reviled. I can just picture the blog headlines of the next 5 years: "Why we moved away from Phoenix LiveView".


I have found that all the SPA apps I have inherited (the current one particularly) have way too much logic in the frontend. People complain about Django's template language being limited, but that encourages you to keep that layer for display only.


"Way too much logic in the frontend" is entirely objective, and honestly doesn't hold much water, unless you're talking security where the logic is not mirrored in the backend. Any time performance and responsiveness matter, across the engineering board, client-side logic wins. The times that integrity matters, you still apply the logic client side and also on the server side. See the amazing Source Multiplayer Networking article for more info on where I'm coming from with respect to ultra high performance but validated architecture.

https://developer.valvesoftware.com/wiki/Source_Multiplayer_...


Nah, all the SPA apps I have worked on have had terrible performance. Way too many calls between backend and frontend. Way too much complexity. I am sure you could write them better, but server-side rendering is nowhere near the bottleneck that SPA people claim it is.


Counterpoint: Microsoft is leading the charge with Blazor, bringing a full single-language stack to the front-end for the first time for non-JavaScript developers.

For applications, it's hard to see why this model wouldn't be preferable. Most .NET applications are monolithic in nature (though you can enforce really good module separation with relatively minimal effort, IME). Look at Rust too, which is developing some Blazor-like frameworks. I could imagine, for example, the next Gmail being written like this, or other deeply interactive applications.

I think people want to work in a single language/framework more than anything else. One of the reasons JavaScript still prevails today is that it was the first language community to largely achieve this in an accessible manner. Others are now coming forward.

I think pre-rendering is still the best approach for truly static content, though, using web components to progressively enhance features of a page for light interactivity; this is one area where they really shine.

I could be very wrong though!


What I feel is ignored in the community is that actual backend development never reached a point of wide adoption with JavaScript/Node. Certainly Node and the like are used to deliver front-end code, as a front-end API or an API layer in between. But the vast majority of backend development remained with established tech. The (natural) attempt of JS tech to push into backends led to some accidental constructs, a mix of JS and other technologies which may act more like walls than bridges. Tech-stack responsibility is blurred because of JS's omnipresence, which leads to several problems. I think some people are starting to realize that JS is a useful front-end platform but must not be the default tool for a backend platform.


I seriously hope MSFT is committed to pushing Blazor. It has the potential to stand above the other "unified backend/frontend" technologies.

Unfortunately, Blazor hype has really died down over the last couple of years, and I'm starting to get Silverlight vibes from it, which doesn't bode well...


I think MAUI is partly built on the back of Blazor. They might converge, but I don't think its underlying model is going anywhere.

I don't work at MSFT though, so YMMV. I don't think it's going to end up like Silverlight. They seem pretty committed to not backpedaling like that anymore; they know it hurt them badly. At least, that's the message I've been getting from MSFT developer relations: an admission that they really borked their DX by going through such rapid turnover of UI frameworks.


No, it is not. MAUI has the option to support Blazor via WebWidgets as a kind of Electron competitor, which probably only the Blazor team themselves see as an advantage.

MAUI by itself is Xamarin ported to run on top of .NET Core (now .NET 5+), with WinUI as the backend on Windows.

As far as UI frameworks coming out of Redmond are concerned, it looks like a pretty messy civil war right now.


> Blazor-like

Isn't that just "GWT-like"?


Blazor relies on dynamically generated JS bridges and WASM, whereas GWT just generated JavaScript from Java bindings. It seems similar on the surface; the difference is that Blazor tries to leverage direct compilation of C# constructs to WASM whenever possible, supplementing that with some JavaScript interop, as opposed to transpiling things directly to JS, as GWT did.


AIUI Blazor is 2 models today

    1. As you described, compile C# to WASM, download a blob and run client side
    2. Run the C# server side, send the client a thin page and a SignalR pipe back up to the server
With 2, it's more like React SSR (as in the mode where React components run on the server, not SSG, where they run on the server and just emit static HTML). When I looked through the docs, 2 was the primary mode for now.

The claimed benefits are that you don't need a modern browser (I think the thin shell + SignalR combo works gracefully back as far as IE9 or something silly), and that you don't need much processing power on the client, because the SignalR pipe is just a conduit for pre-rendered HTML generated by a Blazor component running server-side.

The downside is that there's a websocket (or long-polling connection) to the server for every client.


Isn't option 1 more like compiling a .NET interpreter to WASM and shipping it with your DLLs, resulting in a huge download and terrible runtime performance?


There are a lot of ways to trim the size. All in, it's about 2 MB, I think, for the entire .NET runtime. However, there are a few mitigating steps that make a dramatic difference in how big the WASM payload is.

If you set up the compiler with trimming enabled[0], it gets significantly smaller. You can also lazy-load assemblies by route[1] to further reduce the upfront cost.

Of course, this is not acceptable for the average web page by any means. This is really intended for behind-the-login applications, where you load the initial runtime once and it's cached heavily for the rest of the application's lifecycle. It is really targeted at true applications-in-the-browser situations.

Blazor server-side works too, though everything then has to run over a SignalR connection, which can be a bit flaky at scale.

As for runtime performance, I actually don't find it to be a bottleneck. The apps I've built with Blazor are pretty fast. I haven't worked with it in 9 months though.

[0]: https://docs.microsoft.com/en-us/aspnet/core/blazor/host-and...

[1]: https://docs.microsoft.com/en-us/aspnet/core/blazor/webassem...


Hm, that sounds a little like Meteor. Thanks for the explanation.


That makes sense. But on the face of it, that doesn't seem a very significant difference. A more modern implementation of the same idea?


On the highest level, yeah, it really is just more modern.

In practice, though, it's also about flexibility. GWT shoehorned you into doing certain things a certain way and only that way, whereas Blazor only limits you based on which calls and modules can safely be converted to run against the WASM-driven runtime. That means you aren't limited to a specific list of ways to solve a problem; it's more (and increasingly) flexible than that. For instance, GWT has specific widgets[0] that you should use to represent the user interface; Blazor doesn't limit you to Blazor-compatible widgets (there are some, because at some point the abstraction runs out of juice). You can use regular conventions and classes too, like the normal .NET HTTP client stack, regular data classes, etc. You can also reuse Razor components, most of the time.

To be clear, though, Blazor isn't a panacea; it has its own caveats and downsides. However, I think it's really innovative in both concept and execution. For anything behind a login, it's a pretty sensible choice IMO. I wouldn't go building your marketing/info/purely static pages with it though; server-side rendering or pre-rendering is a much better choice there.

Also worth mentioning, on top of all that: Blazor can cross-compile to native applications too, like iOS and Android, with (largely) the same codebase.

[0]: https://www.gwtproject.org/doc/latest/DevGuideUiWidgets.html


For those who have integrated something like htmx/server-side-rendered HTML into existing JSON-oriented APIs: what's the cleanest way you've found to have both formats exist together? Not every existing API call would need to include an HTML output, but for many projects I could see many endpoints that would also make sense to output as HTML and wouldn't deserve an entirely different API.

My first thought was to add a format/output parameter to specify the format. In the case of HTML output, you could take the JSON that would normally be rendered and insert it into a jinja2 template, for example.
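One hedged sketch of that format parameter, written in Express to stay in this thread's language (the route, helper, and template names are all made up; a Flask/jinja2 version would be analogous):

    const express = require("express");
    const app = express();
    app.set("view engine", "ejs"); // any server-side template engine works

    app.get("/books", async (req, res) => {
      const books = await listBooks(); // hypothetical data-access helper
      if (req.query.format === "html") {
        // same data, rendered through a server-side template partial
        res.render("books-list", { books });
      } else {
        res.json(books); // the existing JSON behavior stays the default
      }
    });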



Did you make htmx? I love that thing.


yep, glad you are finding it useful :)


I see you just about everywhere on the internet. It's nuts. Prove that you aren't a pile of ants wearing a trenchcoat.


Beats half a million ants and half a collapsing star.


htmx looks really cool; a performance comparison vs. the fastest SSR SPAs would be helpful


>But as a general rule, I think we must not discard a technology just because it's old. Doing so because it's too new would make more sense, if you want to build stuff for businesses.

Agree with the first statement, but not necessarily with the second.

People dismissed React for being "too new" or "the latest stupid shiny new framework" for a long time because they didn't understand it. But when I read about it for the first time, I understood the problem it was solving with the VDOM. It made perfect sense to me to start using it immediately, because it quickly starts saving a lot of time formerly spent on writing DOM setting/resetting code and making sure it never breaks. It was something I was already thinking about: "how could I avoid writing all this tedious DOM manipulation code?" And there it was, and I knew it would become big. It took a while for the rest of the world to realize it, but eventually it caught on.

Sometimes tools pop up that make perfect sense. So you shouldn't dismiss any technology for its age, whether it's old or new.


The problem with new tech isn't just that it might not make sense. It's also that if it doesn't gain enough traction to reach critical mass, it might not be a good technical foundation for your business. As a business owner the question isn't just "can I solve the problems I face today using this?" it's also "will I be able to hire developers that know this 10 years from now?" and "will there continue to be an ecosystem of useful middleware for this?" For React, the answer is yes to all of those, and has been for a couple years now, but 5 years ago, the latter questions weren't really decided yet.


If you only use React as a library for the VDOM, there's barely anything to learn, though. Even if nobody had heard of it previously, people would get up to speed in no time. That's how I started using it: keep my application as it was, but use React as the DOM diffing engine and delete tons of DOM manipulation code from the repo.

The "React as a framework" thing came later, which I personally think is a misstep. I'm not into frameworks in general.


Yup.

And that’s why the author wrote "But as a general rule" instead of presenting it as an immutable rule.


It's funny how most languages have this whole separation between microservices and monoliths. Having worked with Elixir for the last 5 years, one of its biggest benefits is the ability to have a microservice deployment while having a monolithic codebase.

It comes with a reasonably good internal pub/sub system and VM linking right out of the box. And the built-in application supervisor makes it trivial to spawn thousands of processes on one machine, each with their own independently allocated heap, that can be mounted under a supervision tree.

Want to create a microservice? Just make a file, inherit from GenServer, and add it to your application tree in application.ex. Add a library like Horde and you can create singletons in your codebase that are microservices: they run as a process somewhere in your cluster. Just send one a message by name and the VM will know where to deliver it.

The end result is the overhead of creating and maintaining a microservice in elixir is about the same as the overhead for adding a new controller.


Elixir (and Phoenix) are truly amazing. I only wish there was a type system / spec system better (stricter) than Dialyzer. Or perhaps I am doing something wrong, but I couldn't get it to be as strict - if there is a way, I'd love for `credo` to force me to adhere to it.


It's unfortunate that Elixir's type system isn't very strict. That said, if you want something stricter, there is Gleam (https://gleam.run/), which has interop with Elixir. I haven't tried it myself yet, but you could probably embed a service in Gleam for specific cases where you need the stricter type checking.


Yep, I've seen Gleam and I absolutely love the idea, although it seems like it's still at a super early stage.

Should play with it a bit more.


Guaranteeing a singleton process is a very difficult problem, which Horde definitely does not solve perfectly, mind you.

All is well and good until it isn't. We opted for leveraging different mix release permutations deployed on k8s as needed instead.

Definitely takes more effort than what you describe, and has its own issues... but IMO it's definitely better understood than fighting weird state with Horde.


I write UI apps, using Apple's UIKit. I can generally write a fully functional app, in a day (or less). I do it all the time, for test harnesses. I spend more time on apps that I'll actually be shipping (mostly doing stuff like aligning UI elements and applying accessibility and localization, which can take quite some time. Lots of iteration).

I'm putting the finishing touches on my second app in about a month and a half. It's a "total rewrite" app; just like the previous one (which is already out there, and has had over a thousand downloads already).

I did these apps alone. After this one is out the door, I'll return to the app I've been working on for a year and a half. These were just "side trips," because I was getting burned out.

UIKit was designed as an MVC framework. If you use a different pattern, then "you're holding it wrong." You are using the framework in a manner for which it was not designed.

That is not always bad. I can't actually think of any examples, right now, but I'm sure that some of the new methodologies are more effective.

I strongly suspect that some of the new development patterns (I won't name them, because holy wars) were developed specifically to break up projects that are really best done by one or two skilled engineers, into ones done by a fairly large team of relatively unskilled engineers.

Might work out. I don't know. That's not how I work. YMMV.


really best done by one or two skilled engineers, into ones done by a fairly large team of relatively unskilled engineers.

I think this is mostly a way to flatter ourselves about things we don't like. It's perfectly fine to not like things, but as an argument it is pretty poor. It's also at the core of PG's 'blub language' thing, a mistake at the time that's aged even less well.


I'm sorry. It must be my age, but I don't really understand the comment. Was I supposed to be insulted? It may have fallen wide of the mark, if so.

I wasn't railing against anything that "I don't like." I was simply stating that I use MVC, on a regular, daily basis, and it gives me the results that I require.

And, I know, for a fact, that some of these patterns are used for exactly the reason that I stated. I know this, because I have talked to the managers that decided to use them, and that was the motivation. I don't even have an opinion on whether or not that is bad. Many of these teams do great work.

Maybe things are better, done in ways beyond my limited, saurian, comprehension.

All I can say, is that I'm able to churn out a lot of stuff, of extremely high Quality, in a remarkably short time, using these prehistoric patterns. I know that Apple developed the patterns they use, in order to allow very small teams to create high-Quality, high-performance apps, in very short time (again, because I've talked to some of the folks involved in writing UIKit). People like me, working the way I do, were what they had in mind, as they developed their frameworks.

SwiftUI looks pretty cool. I haven't used it much [yet], because I have yet to be convinced that it is suitable for ambitious, shippable projects. I'm waiting for it to develop a bit of momentum. At first glance, it doesn’t seem to be designed for MVC (but it may work great. I don’t know enough about it, yet, to be sure). I’m happy to learn up on whatever methodology works best for it. I learn quickly, and adapt extremely well. Been doing exactly that, for quite some time.


Was I supposed to be insulted?

Nope, I just think you're wrong, sorry it came across as something more than that! I don't think this has much to do with the specific qualities of MVC (there are a lot of good things about MVC) or the problems of newer approaches (declarative UI doesn't fit everything, implementations are newer and buggier, etc). The 'made for chumps, not artistes' mindset/explanation ends up being statistically wrong, over the medium-ish+ term, just about all the time (60% of the time!) - a pretty great track record of wrongness which is interesting and useful in itself.


I'm sorry. I must be thick. I still don't really understand. It appears as if I am being told that I'm an "arrogant arteest."

That seems pretty insulting, to me. It might help, if you reached out, personally, instead of deciding my personality, based on a single post on an internet forum. I’m actually a pretty decent chap, and I’m not particularly up for online catfights. BTDT. I’m an old troll, and feel that I have some atonement in store.


> And, I know, for a fact, that some of these patterns are used for exactly the reason that I stated.

You've gone from "I strongly suspect" in previous comment, to "I know, for a fact"...

New methodologies are not necessarily designed for large teams of "unskilled engineers", just like old methodologies were not necessarily designed for "one or two skilled engineers"...


Actually, "I strongly suspect" is a rhetorical device that I use. I deliberately use ambiguous language, because exact language is often taken as confrontational.

As it turns out, I needn't have bothered. This was declared a p****ng match, anyway (I didn't mean it that way).

I guess the difference is who gets upset.

I wasn't talking about how they were designed (I apologize for unclear language in my initial posit that indicated that). I was talking about how they are used.


> I strongly suspect that some of the new development patterns (I won't name them, because holy wars) were developed specifically to break up projects that are really best done by one or two skilled engineers, into ones done by a fairly large team of relatively unskilled engineers.

I've always thought of microservices (or services) as a way to make your organization rather than your code scale. Past a certain size, you won't have an uniform group of developers. According to Conway's law, this will impact your codebase. Better embrace it in your organization.


Absolutely.

I write fairly humble apps, though. If things got a lot bigger, I'd hire someone to do bigger work.

In any case, it's really easy to toss out a codebase that took one guy, two days to write (I do exactly that, all the time). The advantage that I have, is that I write really good code, in those two days (feel free to check out my work).

The big app that I'm writing, is a native Swift app. The server is written in PHP. This is not because there's an inherent value in that, but it is because that is what I do, and we can't get anyone else to do the work for free.

You use the tools at hand.


A lot of these architectures you're referring to are intended to scale a team and solve collaboration issues, some intend to allow for easier automated testing or configuration etc. I don't think it's necessarily a bad thing either if they're designed in a way that larger teams of varying skill levels can contribute.

If you're busting out an iOS app in a day alone then of course MVC is going to be fine for your needs.


Web MVC and mobile MVC are completely different IMO. I've done both; it's incomprehensible industry lingo that they wound up with the same name.


Thank you! My team recently implemented a section of the app with SwiftUI in 'MVVM' and it is an unmaintainable tangled mess. We should have used something more like MVC.


I know it’s probably obvious to some but in an MVC framework: the API can be the view.

That’s the whole point of separating the code into layers: so you can have multiple implementations existing concurrently at the same layer.

For example, you can have models talking to Postgres and models talking to Elasticsearch or Cassandra. Or, as everyone knows, you can have different controllers which all talk to the same models or use the same view for output.

And if you were to abstract beyond the framework level, MVC can apply to software written in totally different languages.

I consider my code to be high level MVC and it’s controllers in Elixir talking to Java model layers and outputting JSON views, HTML, or even Excel files.
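To make that concrete, here's a minimal sketch in Django terms (the model, template, and field names are hypothetical): two views sharing one model layer, one producing HTML and the other producing JSON.

    # Sketch: the same model behind an HTML view and a JSON "view".
    from django.http import JsonResponse
    from django.shortcuts import render

    from myapp.models import Order  # hypothetical model

    def orders_page(request):
        # HTML is just one view of the model...
        orders = Order.objects.all()
        return render(request, "orders.html", {"orders": orders})

    def orders_api(request):
        # ...and JSON is another view of the exact same model.
        orders = Order.objects.values("id", "total")
        return JsonResponse({"orders": list(orders)})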


IMHO, the primary value of any framework (from an organization / product lifetime perspective) is two-fold.

#1 - Organize and implement in a manner that adheres to some standards, so someone can always be found to maintain and fix it.

#2 - Create some interfaces that break components apart.

What standards and what interfaces (and how many) are an order of magnitude less important than that they exist at all.


Don't forget: #0 - Speed/Developer productivity

Not only when writing actual code, but there's less bike shedding and "fundamental" discussions.


This is what I was thinking. We've got Tornado plus a load of home-grown code at my work. Compared to Django it's just so slow to develop on. There are so many well-tested, documented features in Django, while we have some half-arsed buggy implementation doing a poorer job.


> 2 - Monoliths still make sense in many contexts

Yes and I'd argue it makes sense in most contexts for small startups.

Also, Monolith is sometimes synonymous with tangled spaghetti code but it doesn't have to be. If code is kept organized, it can be split into microservices when/if the need arises.


> If code is kept organized, it can be split into microservices when/if the need arises.

You can still keep a client interface, but in the monolith impl you can just have it call code directly instead of transporting data over a wire. Then, when/if you separate, the only code changes you need are in the transport layer (not necessarily trivial, but it's just a single point of code that needs a major overhaul).
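Roughly like this, in Python (all names hypothetical): callers depend on the interface, and only the implementation knows whether the call crosses a wire.

    # Sketch: one client interface, two interchangeable transports.
    from typing import Protocol

    import requests

    class BillingClient(Protocol):
        def charge(self, user_id: int, cents: int) -> bool: ...

    class InProcessBilling:
        """Monolith flavor: a plain function call, no network."""
        def __init__(self, service):
            self.service = service  # the billing module itself

        def charge(self, user_id: int, cents: int) -> bool:
            return self.service.charge(user_id, cents)

    class HttpBilling:
        """Extracted-service flavor: same interface, wire transport."""
        def __init__(self, base_url: str):
            self.base_url = base_url

        def charge(self, user_id: int, cents: int) -> bool:
            resp = requests.post(f"{self.base_url}/charge",
                                 json={"user_id": user_id, "cents": cents})
            return resp.ok

Swapping InProcessBilling for HttpBilling is then the only change the callers ever see.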

I find usually when people say they are pro/against monolith/micro-services, they are using very overloaded terms.

When I hear "anti-microservice" my brain thinks "wow, this person is against code that separates concerns into their own logical buckets - they must love entangled messes of code that take months to make changes in" when usually they are thinking "I'm against severe premature optimization, and rolling out/maintaining orchestration for all 5 of my miniature services when I can really just build this out in a single repository that is able to maintain those concerns for me without duplicating work/effort."

Honestly we probably just shouldn't talk about being pro/against any of these patterns dogmatically, and really should talk about the specific issues present in each and when those are worthy trade-offs.


'When I hear "anti-microservice" my brain thinks "wow, this person is against code that separates concerns into their own logical buckets'

... Then you're not really listening.

Especially in this day and age of cheap machines capable of doing thousands of requests per second and hundreds of thousands of IOPS... almost nobody needs 'microservices'. Hell, I worked at Google and that's not how they solve scaling problems and their scale is bigger than yours... So why does the industry reach for this as a tool?

Microservices != separation of concerns or designing for modularity. It's actually a pretty terrible way of designing for that.

(Sometimes I feel like if I see another client-side join in my life, blatantly abusing and/or ignoring the presence of a relational database in the stack, I'm going to cut my fingers off with wire snips, throw out all my computers, and leave this industry forever.)


You're not really listening, either, as you short-circuited and didn't read the rest of my comment.


IT is like the fashion industry. Every few years you get shiny new tech, or old tech in a new bottle, and for some reason, because it worked for Google, we must use it in our organization regardless of whether it makes sense or not.


I mean, look at the rise of things like Hotwire. We are seeing developers move away from JSON-everything towards "let me just send an HTML partial over the wire instead" - that, and things like Phoenix LiveView: "let me send what needs to be changed in the DOM as a websocket message and save on data."

I don't mind the idea of a SPA for a big social media type of site, but for ordinary sites, I feel like it's overkill in most cases. If you enjoy it and it works best with your team and workflow, I'm not trying to knock you. I just prefer not to add more layers and points of failure to my web application.


I chuckle cynically at how simple things come back with fancy names - Server-Side Rendering (SSR), Hotwire...


Server-Side Rendering annoyed me the most, because I kind of like the idea of components for front-end code, but I don't want to host yet another thing when I could have just used a server-side template engine for my back-end web framework.


That's the operational problem. However, I'm more amused at how similar this is to clothes fashions. Those usually change often for frivolous reasons or plain boredom. This feels quite similar.


It only appears this way because you are not aware of the nature of the cycles, and what causes them.


I assume it's various things. For me at least, React and other SPA frameworks fixed a different problem most could agree on: reusable web components. I don't think anything like that existed before on the front-end, and by making it a front-end-specific solution, it is fluid enough to transcend back-end languages - but then it became bloated.

I love back-end work and I prefer front-end code to be as vanilla as can be. If I want a UI I pull in Bootstrap or some CSS framework because I'm not a designer.


I totally understand. They are just discovering the basics, and slapping fancy names to it.


There's really just LiveView and a lot of less capable copies. In order to do what Phoenix LiveView can do effectively, a number of specific elements must exist in your runtime as well as your language. Anybody can send HTML changes over a websocket, but there's so much more than that happening.

It's one of the great showpieces for what Elixir makes possible.


I think it is partly because it is shiny and new... but also, sometimes it seems promising, you have to try it to really figure out what works about it, and then you understand it better and its use case becomes more nuanced.


I think it's mostly because the new tech Solves-A-Problem-That-I-Recognize, and developers mislead themselves into thinking the new tech does everything the old tech does, plus this, and they downplay the trade-offs being made (or they aren't experienced enough to recognize the trade-offs).


Or even more simply:

These solutions often come from environments that are running at a scale most companies don't operate at, BUT think they will, so they invest in them early for future, potential needs. It's no surprise many of the core contributors / founders come from Facebook/Google/etc.


I'm not even sure they need to assume. So much programming is, by organizational design, task-by-task problem solving, that people get led into making one-off decisions and really don't have time to explore otherwise.


> If code is kept organized, it can be split into microservices when/if the need arises.

You can even just run multiple copies of the monolith with feature flags to enable/disable different parts of the code.
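At its simplest, this can be an env var switch at startup - a Python sketch, with all the names made up:

    # Sketch: one monolith binary, roles picked per deployment copy.
    import os

    def start_http_server():
        print("serving web traffic")

    def start_job_worker():
        print("consuming background jobs")

    # e.g. ENABLED_FEATURES=web on one deployment, worker on another.
    enabled = set(os.environ.get("ENABLED_FEATURES", "web,worker").split(","))

    if "web" in enabled:
        start_http_server()
    if "worker" in enabled:
        start_job_worker()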


> You can even just run multiple copies of the monolith with feature flags to enable/disable different parts of the code.

I can't tell if this is facetious. I hope it is.


Facetious. Everyone knows your feature flags can be determined based on user properties, so each user can have a custom experience in a single version of the monolith.


Why? It's really not very different to having several microservices in a monorepo.


When reading it, it sounds bad, but it’s actually what a lot of big companies do.


I know, right?

It's also perfectly possible to split those things up, and to have them share the same database.

This causes microservice extremists' heads to pop off...


It's possible, but also a really poor idea. The whole point of microservices is to separate concerns. Reintegrating those concerns through the database is an excellent way to run into problems. Changes to the database schema will alter how all the services that touch it work, and now you have to manage schema migrations across multiple services and probably multiple teams.

So much nicer to expose a documented and versioned API (and/or published event stream), so that the consumers of the data your service manages have some flexibility about when to migrate to the new schema.


We do this at work with multiple entry points. New engineers sometimes don’t even know it exists until their sixth month, when they eventually run into shenanigans.


Hehe "shenanigans" ;-)

The truth is, most things can be made to work - though there are various trade-offs ("shenanigans") involved.

e.g.: In my current project, there's a bit that loads large-ish files, parses/converts the data, and loads it (on the order of tens of thousands of rows) into SQL tables.

For reasons that escape me - since after the files are parsed the entire dataset is _literally_ in memory - instead of doing a bulk SQL update there and then, thousands of messages are enqueued to be consumed by a practically unlimited number of lambda functions, which then... bomb out as the SQL database hits its connection limit and keels over.

I guess those shenanigans have better buzzword compliance!


Sounds like one of my favorite shenanigans when the stakeholders tell you it needs to handle billions, so you engineer it that way. Reality: it needs to handle thousands. Sounds like you got the reverse shenanigans.


Except that there's only one database, so it wouldn't even need to handle billions. In fact, I suspect it will handle significantly less load than a batched model would. I've been able to get 100x speedups by batching SQL queries.
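The shape of the difference is easy to show even with stdlib sqlite3 (a toy sketch; against a networked database most of the gap comes from round trips, and the 100x figure above is experience, not a guarantee):

    # Sketch: per-row inserts vs. one batched insert.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE rows (id INTEGER, payload TEXT)")
    data = [(i, f"row-{i}") for i in range(10_000)]

    # Slow path: one statement (and, over a network, one round trip) per row.
    for row in data:
        conn.execute("INSERT INTO rows VALUES (?, ?)", row)

    # Fast path: the whole dataset in one batched call.
    conn.executemany("INSERT INTO rows VALUES (?, ?)", data)
    conn.commit()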


I miss the days of just one database sometimes…

Batching is the way to go, 100%. Remove the network as much as you possibly can.


You should really put a connection manager in between your lambda functions and database, like HAProxy or AWS RDS Proxy.

(still, that doesn't change the fact the code is badly designed)


The problem isn't that building clean monoliths is hard, but rather that building messy monoliths is tempting.

Microservices, in most patterns, or even SOA, will put more-than-arbitrary separation between code and databases. Without that more-than-arbitrary boundary, code gets spoiled, because it's easy to import the other service's library directly, it's easy to make that DB call yourself rather than route it through another controller, etc...


Isn't it equally hard, if not harder, to maintain a neat, well organized microservices architecture?


If you're in a monorepo then you're replicating the constraints a monolithic architecture applies, so yes, I could see that happening.

Edit: Inadvertently, this is why I also like to call these things "patterns" rather than architecture. Architecture being the idea, and pattern how you implement it.


Isn't the whole point of microservices that they don't need a neat, well-organized architecture?

Not saying it's true, but when you cut through the marketing bs that's the claim.


Depends on which stage you're at, but if you are still figuring out what to build, or how to build it, fast-built messy monoliths are the least of your problems.

Once things are validated, opportunities compound to rewrite things "properly" (^Wtemporarily) around or alongside the monolith.


Exactly. In any moderately sized corporation, new functionality will be rewritten several times, sometimes deprecated/reshuffled completely, and crystallized after some time - that is a good moment to start thinking about extracting things. Changes on the service boundaries are much more expensive than internal changes.


I never got this line of argument. It's also easy to write a bash script that deletes everything, write an infinite loop in any language, create an OOM, add SQL injection, have exponential complexity, etc. - so what? If people don't have time to organize code, introducing microservices won't magically solve those problems. They'll end up with poorly done microservice spaghetti with tons of extra complexity/consistency violations all over the place. Now the architecture doesn't fit on a single screen anymore, and it feels simpler because you have separate screens for each part - but the simplicity will be just an illusion.


It's just a matter of incentives, and how hard it is to do the right thing, or how hard it is to do the wrong thing.

If you separate services or microservices by Git repository, then it makes it harder for them to import each other's code; you'll also probably have code duplication. If you have repositories for the shared code that's safe to share, then you have new overhead problems. If you keep them in a monorepo, then it's just as tempting as in a monolith to cross domains; the only inhibition is the friction of access, and the only way to catch it is in code review.

These are all tradeoffs. Fwiw, I still run a monolith project, and part of code review is checking imports, and part of our documentation talks about shared code standards and locations. It can be done, there's just no natural guard rails present.


If you read any respected source, they echo the same thoughts, ie:

"For many organizations, the modular monolith can be an excellent choice. If the module boundaries are well defined, it can allow for a high degree of parallel work, while avoiding the challenges of the more distributed microservice architecture by having a much simpler deployment topology. Shopify is a great example of an organization that has used this technique as an alternative to microservice decomposition, and it seems to work really well for that company."

"Unfortunately, people have come to view the monolith as something to be avoided—as something inherently problematic. I’ve met multiple people for whom the term monolith is synonymous with legacy. This is a problem. A monolithic architecture is a choice, and a valid one at that. I’d go further and say that in my opinion it is the sensible default choice as an architectural style. In other words, I am looking for a reason to be convinced to use microservices, rather than looking for a reason not to use them."

Sam Newman, Building Microservices, 2nd Edition

You can see the same thought rephrased by respected people - start with monolith, grow organically from that into services or microservices. It can take years. It may make sense not to do 100% transition ever. Just use common sense, your context etc.


> If code is kept organized, it can be split into microservices when/if the need arises.

This is my favorite thing about the nature of Elixir actually because it happens automatically.

With functional, side-effect-free code you ensure that your logic is free from the entanglements that would otherwise complicate the separation. You take one function, you move it somewhere else, and it works exactly the same.

The ability to cluster BEAM nodes allows you to call that function you just moved by just pointing to the node where it lives and then calling the function. And you'll get the response back just as if it lived in the same place it always did.


I’d put it this way: keeping things organized is something you need to do in any system when there is enough complexity, whether it’s expressed as microservices or a monolith or any other way (this isn’t even specific to computer systems, of course).


Hard agree.

If you aren't familiar, I find The App Continuum [1] to be fantastic for structuring these kind of conversations with my teams :D

[1]: https://www.appcontinuum.io/


> Also, Monolith is sometimes synonymous with tangled spaghetti code but it doesn't have to be

Yeah, I hate this false equivalence. When did we start assuming that monoliths couldn't be modular and have clear package boundaries?

IMO how you deploy your software (one binary vs many services, all on one box vs distributed) should be an incidental detail that is automated away.

You have two packages that are very chatty? Your infra figures this out and deploys them in the same binary. Two packages that rarely talk? Deploy them as separate binaries and maybe even on separate hosts.


>IMO how you deploy your software (one binary vs many services, all on one box vs distributed) should be an incidental detail that is automated away.

>You have two packages that are very chatty? Your infra figures this out and deploys them in the same binary

Are there any libs for doing that? Seems to me that that should be pretty complex.


Organized as in... kept in different repos? People live/die on this hill and it makes no sense. One or many repos, it's just an organizational question, not some Super Serious Big Decision.

I work at a small, early stage startup and I'm about to create a new repo for our Slack bot. I'm going to use the Docker image another of our repos generates as the base Docker image for this new repo, and it's going to be just fine.


Organized as in everything lives in the same deployment unit (and therefore most likely in the same repo) but with clearly defined module boundaries.

If you've got separate docker images, you have separate services. OP is saying that most startups that think they need to split out their code into independent services actually just need to organize their single service better.

> One or many repos, it's just an organizational question, not some Super Serious Big Decision.

When people talk about monoliths they're not usually talking about repo structure, they're talking about deployment structure. One or many repos is just an arbitrary organizational question. One or many services is a really big deal: It's the difference between running a distributed system and not.


It is just like static linking in c. We can split code into different object files, and statically link to a big application. We help of header file, we can invert the dependency relationship to keep implementation private to keep autonomy. It also applies to node.js/web frontend application, here is a vite demo: https://github.com/taowen/vite-howto/tree/main/packages/STAT...


"one or many services" is also not a Really Big Deal. You can have a few services that do a category of things and not even sniff microservices architecture.

It's not a boolean, it's a spectrum, and where precisely you land on that spectrum is not as important if you're actually focused on getting work done.


It's a spectrum, yes, and adding an extra service might be the right move, especially if they are well and truly independent (don't interact with each other). But it's still a significant architectural decision to go for a distributed system, much more so than whether you split your services into separate repos or go for a monorepo.

Getting work done is important, but it's not an excuse to avoid thinking through the long-term implications of your decisions. If you shrug and say "it's just one more service" every time you're tempted to add a new docker image, you'll not just end up with microservices, you'll end up with poorly planned microservices.


You're missing the point entirely. The big deal is choosing between 1) distributed vs. non-distributed, and 2) a single build/deploy process and release cycle vs. multiple. A distributed system with multiple build systems and releases is orders of magnitude more complex over time.


I do not see why a startup would not go with microservices today. It is a nice way of separating concerns. The knowledge on how to do this in a good way is out there. It allows for smaller isolated changes, feature focused development and much shorter time to production.

Just build services that are not too small, avoid dependency hell between services and never build platforms.


Microservices are a solution to organizational problems, not to technical ones. They mean accepting increased technical complexity (your system is now distributed) in exchange for decreased organizational complexity (your teams can now deploy independently from each other, can safely make database schema changes, etc).

Going with microservices from day 1 will initially mean that you have one team maintaining many services. They have to deploy them separately. If you're doing it "right" you have separate databases per service. None of that is useful for a team that's just starting out.

If you want separation of concerns, the language's module system and a bit of discipline will get most teams as far as they need without introducing distributed computing into the equation.
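That discipline can be as simple as the underscore convention plus a deliberately small public surface - a single-file Python sketch (all names hypothetical):

    # Sketch: the module system as a service boundary.
    # Public surface of the "orders" concern: just these two functions.

    _ORDERS: dict[int, dict] = {}   # private storage, by convention
    _COUNTER = iter(range(1, 10**9))

    def _next_id() -> int:          # private helper
        return next(_COUNTER)

    def place_order(user_id: int, item_ids: list) -> int:
        order_id = _next_id()
        _ORDERS[order_id] = {"user": user_id, "items": item_ids}
        return order_id

    def get_order(order_id: int) -> dict:
        return dict(_ORDERS[order_id])  # hand out a copy, not internals

Other parts of the monolith import only the two public functions; if the concern is ever extracted, the boundary is already drawn.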


I think the key is not to make the services too small. If you are a startup with one team, you should probably end up with a handful of services. And do not let the services talk to each other.

Individual deployment of services is the single best thing with microservices. If the services are not highly coupled that is. For example. To be able to fix a bug in the search function in a system without affecting anything else speeds up things. Such a deployment can take minutes instead of the hours as some of the monoliths I worked on takes to deploy. It also gives the possibility of rolling forward instead of always having to rollback.

EDIT: replied to the wrong comment.


> Microservices are a solution to organizational problems, not to technical ones.

Which is exactly what separation of concerns achieves:

> It is a nice way of separating concerns.

It's also why Uber got to "thousands of microservices" and found it terrible...for the same reason managing thousands of developers is a nightmare. The overhead eventually catches up.

I upvoted you both btw.


Didn't Uber end up with something like 2000+ microservices? I seriously doubt that their business requires that number of services. More likely they have a lot of highly coupled services, services that serve no purpose by themselves, services that are just wrappers around database entities, and so on.



It makes everything more complicated.

Deployment is more complicated, changes can be more complicated, performance is more complicated. It's a great way of splitting up teams, but when you are a startup with a single team for a long time, why would you do this?

> Just build services that are not too small, avoid dependency hell between services and never build platforms.

This is basically "code without bugs and you will be alright". Avoiding dependency hell is difficult, and getting the services to the perfect size is also difficult as hell. Everything is a lot more difficult.

Microservices can be amazing, I'm not saying they aren't, but acting as if they are just better than monoliths means not seeing difficulties where there are many.


You introduce a lot of complexity when you change the boundaries. Straight to microservices is almost always premature optimization. You'll know pretty quick when it is needed.


The boundary complexity is always present; when you want to tackle that problem is another concern. Horizontal dependencies can destroy any system, no matter whether it's within a database or between services.

A lot of monoliths reach end-of-life because everything is glued together. I even worked with partners that have systems where each database table is dependent on a single table in the database. It failed. A design doomed to fail from the start.


The way I look at it, microservices come together to form a monolith. So you effectively have a monolith with unreliable network connections between components instead of reliable method calls. Much more effort for little benefit for most startups.

My current work has put a load of developer resources into a kubernetes setup. It makes troubleshooting slower and I don't think we have ever scaled beyond the default two pods, except when some bug was playing up.


If you place things in the right service, the need for service-to-service communication is low. I know companies that straight out banned it. If you treat each service as a database table, sure, you will be in for a ride. If you are then lenient about breaking changes, you will have real problems. But microservices are a service architecture. Each service is almost a product. Each service covers an entire problem; parts of that problem are not in another service. A service is not a task.


It doesn't always make sense to use microservices. If the app is big, you will have a hard time using a monolithic architecture. Also, microservices do better if you have a high load, if you have many teams working on the same project, if you have to feed data to many apps or frontends. And microservices are more reliable if done well.

Many people have a fear of microservices. They are afraid of dependency hell, they are afraid their app will lock up, they are afraid it will make the app more complex. In my experience, that is not the case if the architecture is done well.


You can separate concerns without microservices too! People in the past were able to do it, they even invented programming language keywords for it, like https://www.tutlane.com/tutorial/csharp/csharp-access-modifi...


Often I think this means teaching people how to make services large enough to do their job and small enough that you can replace one within a couple of hours' worth of work for a junior developer (a senior developer should be able to do the same work within an hour or less, depending on their familiarity with the language and build system).


I feel like MVC frameworks, like most application frameworks, are caught in a constant push-pull cycle.

Devs write without a framework, but complain that they have to spend a lot of time working on the structure of their application. There's a desire to use a third party framework that handles the broader structure, and the devs just have to slot in their application specific code into pre-defined places.

Devs then work with a third-party framework for a while, but get frustrated when their application needs don't match up with what their framework is good at. They then have to write hacky workarounds to add the functionality they need. There's a desire to ditch the prescriptive framework, and design the code structure in a way that meets the application's specific needs. And then the cycle repeats itself.

Application frameworks, whether MVC or something else, are a useful tool. But there's no perfect framework, and it's easy to feel like the grass is greener on the other side.


That doesn't happen in the .NET world, as the framework does not stand in your way. Everybody uses the framework. It comes with batteries included, but it's very modular: you can use only the parts you need and swap in custom functionality if the framework-provided functionality does not match your needs.


>But as a general rule, I think we must not discard a technology just because it's old.

I forget who had the quote that went something like:

"If SQL is so great why have people been trying to replace it for decades?"

(the failure to replace it is of course the punchline, in case anyone misses that)


An excellent article. As a huge Django fan, I could not agree more (so yes, biased :-) )

Since 2015, I've always rendered final HTML server-side, whether that HTML was travelling over a complete http request/response cycle (big page load or XHR or fetch), or over websockets.

The only JSON ever involved was stuff similar to {"markup": "your html code here"} and a few other more subtle bits to make up for a dumb/blind client. In other words, the client does as it's told by the server. (my client is Azatoth)

Opinionated? Yes.

Full-of-bad-surprises-after-release-oh-shit-what-did-I-do? No

Then again, some light stuff works (Flask vs Django). But as (more than) hinted in the article, a hello world may satisfy this immediate need to get that shiny first HTTP 200 from your new website/api/whatever. It doesn't do much to ensure a certain level of quality, sanity and stability in the long(er) run.

my 2pence.


"MVC" is such an overloaded term. The article is quite clear it's about Django, Rails and Laravel¹, yet there are already people here complaining about different meanings of that name.

Hell, Microsoft has a framework literally called "MVC" that has only a passing similarity to them, but is completely different in practice.

"Framework" is also too overloaded to my tastes, less so than "MVC", but you will still get confused people if you use it without context.

Using those terms makes ideas less clear, not more. I really do avoid using them, and I would recommend the same to anybody.

(Anyway, it's nice to know that getting a region's HTML and applying it to the innerHTML of an element, like I was doing with jQuery decades ago, is called "HTML over the wire". And it's coming!)

1 - "And many others" that I'm sure was added to the text just to avoid angry email from wrong people, because there aren't many others like those ones, and the few I know about wouldn't get called MVC.


I personally wouldn't start a new, greenfield web API project with an MVC framework. There are still too many choices present in how to handle concerns like querying, authn, authz, caching, migrations, etc, etc. I would use a system like PostgREST or Hasura and generate the API from the database directly.

Business logic can be handled on the control plane with replication/subscriptions/whatever mechanism. Small, stateful services that react to the event stream and perform whatever actions needed based on your business policies: send notifications, insert new records, call external partner APIs, etc.

For server-rendered UI I think there's still a strong case for these frameworks but they could probably take some lessons and generate their data-layer models from the database DDL, push business logic down to the control plane, and focus on rendering current state from read-only models/streams. I've been meaning to try something like this in Haskell (I've written some foundational libraries to enable this on Postgres [0]) but there are frameworks that do this like Phoenix in Elixir.

[0]: https://hackage.haskell.org/package/postgresql-replicant-0.1...


Or you could just use Rails and actually ship a product within the first year of work.


Whatever floats your boat.

I can stand up a REST API server with PostgREST and a database in a couple of hours. I can deploy new models with a SQL migration.

Like I said, if you want to do server-side rendering it's still a place where Rails/Django/etc shine but I think they could learn a thing or two there to make them better. They could improve so that folks can write/maintain even less code.


>I personally wouldn't start a new, greenfield web API project with an MVC framework. There are still too many choices present in how to handle concerns like querying, authn, authz, caching, migrations, etc, etc. I would use a system like PostgREST or Hasura and generate the API from the database directly.

It depends on the framework. In .NET the only difference between MVC and API is the base class of the controller. You either return JSON or you return a view. Database connections, migrations, routing, serialization, error handling, exception handling, authentication, authorization, Swagger specification, Docker file generation, can be handled by the framework if you so desire. Also, you can mix MVC controllers and API controllers in the same project if you so desire.

>For server-rendered UI I think there's still a strong case for these frameworks but they could probably take some lessons and generate their data-layer models from the database DDL

I find it better to just generate the database from the domain entities using a code-first approach. It's faster, and the ORM will generate proper indexes and also provide optimized SQL queries.


> However, from what I've seen, there is still no Rails / Django equivalent in JS world. Sails JS has a low satisfaction rate. Nest JS looks more like a wrapper around existing tools than a real framework. Blitz JS looks promising but has not enough traction. That may just not be the philosophy.

I think there is AdonisJS : https://adonisjs.com/ which could be the Rails / Django equivalent you're looking for.


AFAIK RedwoodJS seeks to be the Rails/Django of the JS world: https://redwoodjs.com


Redwood looks promising, but it only went 1.0 ~a month ago. Rails went 1.0 in 2005.

The JS ecosystem has had a long time to come up with something equivalent.


There's also https://remix.run


I think the point stands. Nest, Sails, Adonis, Redwood, Blitz and many others you could plausibly lump into this list are all playing catch-up with, or reinventing, the alt-language sharks, and none have crossed the threshold where the Lindy Effect [0] would apply. There is as yet no MVC/Rails shark for the Node/JS ecosystem.

[0] - https://en.wikipedia.org/wiki/Lindy_effect


1) Meh, MVC was never all that. It's kind of a guiding idea, but it isn't in the same realm as relational databases as far as specific utility and power go.

2) Dinosaurs are still around in the form of birds and have found amazing utility, so even the analogy lacks nuance ;)


The MVC frameworks he was talking about all rely on relational databases as far as I am aware. Django certainly does.


ASP.NET MVC doesn't rely on any database. People tend to use it with relational databases but it will work happily with MongoDB, Cassandra, Redis and just about any database.


What about maintenance of the framework? It can be a nightmare, and not all frameworks are created equal (I’m looking at you, SailsJS). Even mature frameworks come with plenty of woes WRT patching vulnerabilities, though that can be attributed to the programming language's package manager ecosystem. I will admit that I miss Spring Boot; it ruined all MVC frameworks after that for me, other than native stuff like Swift.

I just build mostly from scratch now; (update) [using an MVC monolith] feels icky because I’m not trying to start a company or working at a startup. Frameworks are bad suggestions in a mid/late-stage company.

It’s nice to drop some value bombs, but if you intend to stick around for many years and you have more than 10 engineers in your organization, don’t choose a monolithic framework, because it just makes the politics of getting your framework broken out into logical parts (i.e. microservices) a PITA. That’s my two cents.


You can't reason about solutions when the problem is not defined.

Of course a monolith is a great choice for a team of 4 devs, for example. It could be great even for a team of 20 devs. But when you have 30 teams of N devs, then a monolith is maybe not the best idea.

Businesses scale in different ways, and different stacks exist for this reason.


Exactly. The problem is that so many small teams (even solo devs) believe they should be acting like big corps with 30 teams of N devs.


True, that is also very funny in my opinion. It's our need to play with new toys that makes us over-engineer stuff.

As I get more experienced, I try to find the easiest, most pragmatic solution to a problem. At the same time I "design" escape scenarios: if the business scales in way x, then I will be able to do y. Most times the need for scale never comes.


It also depends on the problems you are trying to solve, not only on the team size.

Let's say you work for a small/medium company and it has a few monoliths. But if more than one needs to manage user authentication and authorization, or needs to send mails or SMS, it kind of makes sense to build a user management service and an alerting service.


Yes, but bear in mind successful companies do grow, and you are then frequently stuck with the startup-type choices made early on because "now it's too late" - even if it's just due to the original small team now being the tech leadership and knowing no other way. Been at a 30-team company working on an old Rails monolith and it totally sucked.


A well architected monolith isn't necessarily that different from a well architected set of microservices.

The problem with Rails is it encourages you to not think about your architecture, and after years of putting every model, controller, or view in literally the same folder, you have a giant pile of shit.

Modularize by feature. And under each feature, you modularize by layer. Slicing your code N ways will scale better with your team than slicing your code 3-4 ways.


>Modularize by feature. And under each feature, you modularize by layer.

Sounds like Vertical Slice Architecture combined with n-layer or onion architecture.


> Vertical Slice Architecture

I didn't realize it had a name like that. I've always just called it "package by feature" (Java). Been using it for over 10 years at this point.


Like Twitter and their Ruby problem.

But it's not an issue. If you chose tech X because it was the easiest way for the small team to produce a working prototype, when you find success, you might afford to rewrite using tech Y, like Twitter switching to Java, or alternative approaches like Facebook doing their own PHP version.

It's not like being microservice based and modular from the start will save you from rewrites and architecture rebuilds. Google rewrote parts of their services and modified their architecture hundreds of times. Of course, if the architecture is modular, rewrites and architectural changes are easier.


Yeah, I get that and it happened to me too. That can happen in any design though. Even with a microservices approach.

As I said in the previous comment, you have to design your way out of the thing you are building. Not easy at all, and sometimes, not possible to predict.

I mean successful companies do grow, but a lot of times they pivot, etc.


>But when you have 30 teams of N devs, then the monolith maybe is not the best idea.

Can you give an example or two of a software product where 30 teams of N devs are working on the same single codebase? Seems hypothetical. Bank/IRS type systems on a mainframe/AS400 maybe.


This is exactly how Shopify operates on their primary monolith, afaik.

https://shopify.engineering/shopify-monolith


I don't think such a monolith can exist. That's my point


Meh.

From .NET Core 2.1 + Angular 5/6 to .NET Core 6.0 Web API + Angular 12/13 on the front-end = Total Win.

No need for changing what works. Svelte? React? Web Components? Naaaah, I am good.


I am very happy with ASP.NET. But I either use MVC + HTMX or Blazor for the rare cases I also have to do the frontend (which is mostly work done for myself).

If other people are doing the Frontend, I don't care what they use. I just show them the data contracts and wish them good luck.


Some people just forgot why MVC and these tools were invented in the first place.

They complain that the framework you use isn't shiny enough, while in practice, things they write using these shiny frameworks perform outright painfully.

It's really a weird trend that everyone just runs for the newest thing and forgets why these were invented in the first place.


They jump on the new thing - thinking it will solve their issues.

But in reality, the issue is them not understanding/knowing/being able to build semi-good software.

Think about it - you have the Fed literally printing money, then many VCs giving MILLIONS to half-assed, barely functioning prototypes.

The idea is not there, the craftsmanship is not there - it's just a bunch of kids on Adderall slapping some nice-looking UI and getting paid.

Guess what the thing they like to talk about is - the technology behind it because sure as hell it doesn't solve any business need.

It is hilarious.


Talking about the Emperor's clothes...


Same post from yesterday (on second page of HN right now): https://news.ycombinator.com/item?id=31310073.


But you are not forced to build only monoliths with MVC frameworks. At least with .NET you can do the logic in a few API microservices and render the HTML/HTMX in one or more MVC microservices.


And how do I build offline-first web applications with HTML over the wire?!? This whole HTML-over-the-wire trick is actually pretty old and you can do some nice stuff with it, but it doesn't solve all the use cases you can solve with JS frameworks/libraries.

The big advantage I see for MVC applications is that it is a well-understood pattern. JS frameworks, on the other hand, come up every ~3 years with some new innovative way to structure your code, while even the last approach isn't fully understood by many people.


I think you can run MVC applications locally just fine. Instead of shipping Electron you'd ship a server runtime, and you'd access the application on localhost instead of in the web view of the Electron app. Unless you're talking about PWAs, but in theory you'd get MVC PWA apps that do the same (but you're right in that it's an endless cycle of distinctions without a difference in the JS ecosystem).


> And how do I build offline-first web applications with HTML over the wire?!?

I can't give you a perfect answer, because I am not aware of a practical example. But I think it's quite doable. In a very naive way, you can just store the results locally and use them when requested while offline. That potentially increases the storage size of individual items due to the HTML overhead, of course. A fairly small app with little offline content could be quite simple. An advanced approach could be to push template parsing to the frontend. HTML fragments could use e.g. microformats to retrieve data from the HTML and store it for later use. The question in this case is how much logic you have to duplicate. I.e. template parsing should be very simple, unless you can share the template engine.


> An advanced approach could be to push template parsing to the frontend.

Sounds more like a JS view/vue ;-)

Sure, it is possible, but then you are probably better off using a proper JS framework with SSR (server side rendering) support.


MVC is a great pattern, that solves some problems very well. Quite often it is just overkill, and makes the solution much more complicated.

But I don’t really see why you need an MVC framework for doing MVC.


If you are doing MVC, why wouldn't you use a framework? Lots of well-tested code written for you already. Why reinvent the parts of the wheel that are there already?


Is the argument that because it's a shark, we should accept that and avoid feeding any contenders?

Was the shark perfect at its inception or did it adapt as all things do?

Doesn't its shark-ness come by virtue of surviving against the best efforts of challengers through time?

Isn't that continual evolution at least in part dependent on a continuum of challengers?

Don't we implicitly give up on searching for something that could potentially overtake the shark by dogmatically accepting the shark's shark-ness?


Go for it, but also admit you’re trying to out-compete a shark.

The bar for success in that case is pretty high, and the challenge significant.


I might replace the references to “MVC” with “monolith” to make it more clear. MVC is a dated and painful paradigm and orthogonal to monolith vs services.

I also would advise against using Hotwire for Rails, some of our worst bugs have come from the desyncing of HTML and JS from Hotwire, as well as unexpected network status code handling. And most people only run Hotwire in production, so bugs are easy to miss.


> And most people only run Hotwire in production

eh?

Aren't you confusing the full "Hotwire" suite with just Turbo? I don't see how you could avoid running "Hotwire" in development.


I played through this short guide a week or so ago: https://codelabs.developers.google.com/create-an-instant-and...

Add SPA performance to a multi-page app thanks to guided pre-rendering. I came away from it thinking that's one less benefit an SPA holds over an MPA now.


Of all the topics our design pattern study group gnawed on, MVC et al took the longest, sparked the most debate.

IMHO, "MVC" is the catch all term for anything defying neat summary and categorization.

Consequently, I assume any and all usage of "MVC" is an admission of ignorance, intentional or otherwise.


I like that people seem to be re-discovering Martin Fowler and the decades of thought that have already been put into the application development space.

Originally coming from a Java rich client world and moving into JS in ~2013, it always surprised me how much historical context the JS world was missing.


Node/Angular are exactly what you get when you disregard Chesterton's Fence.


https://thoughtbot.com/blog/chestertons-fence

> There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”


Here I am remembering when Rails was the shiny new thing and competing against various staid approaches in Java and Python, not to mention Cold Fusion and some of the other former hot things.


Cold Fusion was my first introduction to web apps and templated html! I wouldn’t want to go back, but those were some fun days.


I wonder how WASM will change the landscape for these frameworks. If Python had something like Yew [0], I'm not sure I'd be writing React for my Django backend.

[0]: https://yew.rs/


ha, I thought this was going to be about C++ MVC frameworks.


Sharks are some of the oldest creatures on earth...


That's the point of the article. Oldest and well-adapted.


Btw, if people here on HN engaged less in which framework is better and how to render a table with the flavor-of-the-day JS framework,

then maybe, just maybe, you would all have more time to work and become millionaires. Just saying.


I hate MVC, or to be more specific, Microsoft’s implementation of it. I know enough ASP.NET to build any site quickly, but when I move to MVC I have the feeling that I write a shit ton of unnecessary code just to get things into a state that might benefit me in the future. Well, fuck that. I’d rather rewrite the whole thing in the future if it doesn’t suit my needs any longer, than spend 50% more time writing it in the first place.


Can you provide an example of unnecessary code you have to write? I assume you are referring to Core based MVC, not the old one.


MVC requires me to write at least 50% more code than asp.net. That's what I mean by unnecessary. And I'm referring to the "old" MVC model; I haven't bothered with asp.net core mvc.


> I haven't bothered with asp.net core mvc.

You should try it, because there is now almost no difference between MVC and WebAPI. You just have controllers with different base classes.


Good thing you have Razor Pages, which is insanely easier and, for most projects, good enough.


Yep. Blazor is promising too.


This reads like someone who spent a lot of time using one particular framework (maybe two) and now wants to justify their inability to survive without said framework(s). Not sure at all why it's also about monoliths and microservices; I've written microservices in Django, for example.

Sometimes a monolith makes sense. Other times, a microservice architecture makes sense. Sometimes a "just right" sized repo makes the most sense, neither a monolith nor a microservice.

Why does it have to be so tribal? Why is there a #monoteam and a #microteam?


> Why does it have to be so tribal?

I think it's just a symptom of "When the only tool you have is a hammer, everything looks like a nail."


> You see, relational databases aren’t dinosaurs. They aren’t lumbering prehistoric relics doomed to extinction by a changing world. They are sharks. Apex predators honed by millions of years of evolution into a perfectly adapted creature that is just as effective today as it was eons ago.

The RDBMS is optimized for file size; it is very much an artifact of its time, when hard drive space was limited. I don't buy it for a second.

People think in dictionaries, so just use one. If you are making a simple app and don't need transactions, aggregations, or multiple connections, then just use JSON in an S3 bucket or something (see the sketch below). If you are going a step further, JSON-based databases are quite feature-complete these days. For monolithic business-logic applications, relational databases are the right tool IMO, but they get used for every use case, even ones where they are absolutely not needed.
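
For instance, a minimal sketch of the "JSON in an S3 bucket" approach using boto3; the bucket name and key layout are invented for illustration, and it assumes AWS credentials are already configured:

    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-app-state"  # hypothetical bucket name

    def save_doc(doc_id: str, doc: dict) -> None:
        # The whole record is one JSON document: no schema, no joins.
        s3.put_object(
            Bucket=BUCKET,
            Key=f"docs/{doc_id}.json",
            Body=json.dumps(doc).encode("utf-8"),
            ContentType="application/json",
        )

    def load_doc(doc_id: str) -> dict:
        resp = s3.get_object(Bucket=BUCKET, Key=f"docs/{doc_id}.json")
        return json.loads(resp["Body"].read())

    save_doc("user-42", {"name": "Ada", "settings": {"theme": "dark"}})
    print(load_doc("user-42")["settings"]["theme"])  # -> dark

The trade-off is exactly the features listed above: no transactions, no aggregations, and last-writer-wins if two processes save the same document concurrently.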

Controversial claim: joins are a footgun. It is so easy to create too many tables and unnecessarily complex table structures, because it feels safe and "engineery" to do it. I don't know how many young startups I've seen absolutely struggle with RDBMS performance, and almost always they have gotten themselves into some horrible situation, joining five tables deep with a spaghetti ERD and no clear path forward. It is hard to make the same mistakes with a JSON foundation (see the document-style sketch below).
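
For contrast, a sketch of the document-style shape being advocated here: related data embedded in one JSON document instead of being normalized across tables (the structure is invented for illustration):

    # One denormalized document instead of users/orders/line_items tables.
    user_doc = {
        "id": 42,
        "name": "Ada",
        "orders": [
            {"total": 30, "items": ["widget", "gadget"]},
            {"total": 12, "items": ["gizmo"]},
        ],
    }

    # No join needed to answer "how much has this user spent?":
    print(sum(o["total"] for o in user_doc["orders"]))  # -> 42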


If the data is highly relational, no JSON structure or amount of data duplication will save you from joins. You will do them either at the database level or in memory (see the sketch below).
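
To make that concrete, here is what "doing the join in memory" looks like over JSON documents; the field names are invented for illustration, and a relational database would express the same query as a single JOIN:

    # Two "collections" of JSON documents, as loaded from a document store.
    users = [
        {"id": 1, "name": "Ada"},
        {"id": 2, "name": "Grace"},
    ]
    orders = [
        {"user_id": 1, "total": 30},
        {"user_id": 1, "total": 12},
        {"user_id": 2, "total": 99},
    ]

    # Hand-rolled equivalent of:
    #   SELECT u.name, o.total FROM users u JOIN orders o ON o.user_id = u.id
    users_by_id = {u["id"]: u for u in users}  # build a hash index first
    joined = [
        {"name": users_by_id[o["user_id"]]["name"], "total": o["total"]}
        for o in orders
    ]
    print(joined)

Either way the join happens; the only question is whether the database's query planner does it for you or you hand-roll it (and maintain the index) yourself.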

I see a need for relational databases, NoSQL, data lakes, and all the other storage strategies. You choose based on the type of data, the way you acquire it, the way you process and deliver it, and your security and reliability needs.

Many non-trivial applications imply more than one data storage solution. Since I work with large microservice-based apps, it's been a long time since I've had to use more than one data storage strategy in the same app.


Agreed on all points. I still think the RDBMS is over-used; it's the default in many shops for any minor microservice.

It's like using a 20-piece multi-tool for something that only needs a regular kitchen knife.



