The refrain against "we should go back to MPA apps with server-rendered HTML" is often "well, what about Figma and Photoshop?", which, of course, yes: those don't really work in the MPA, server-rendered HTML model.
The problem isn't so much those but how most developers lump themselves in with the incredibly interactive sites because it sounds sexier and cooler to work on something complex than something simple. The phrase becomes "well, what about Figma and Photoshop (and my mostly CRUD SaaS)?"
I think a valuable insight that the MPA / minimal JS crowd is bringing to the table is the idea that you shouldn't strive for cool and complicated tools, you should strive for the simplest tool possible, and even further, you should strive to make solutions that require the simplest tools possible whenever you can.
This is motte-and-bailey argumentation in my opinion.
The motte: SPAs are a good way to write highly complex applications in the browser, like Photoshop and Figma, to compete with desktop apps.
The bailey: SPAs are a good way to write most web applications.
If you attack the bailey, proponents retreat to the motte, which is hard to disagree with. With the motte successfully defended, proponents return to the bailey, which is beneficial for SPA enthusiasts but much harder to defend.
The only way to tease this issue apart is to stick to specifics and avoid casting SPAs or MPAs as universally good or bad. Show me the use-case and we can decide which route is best.
At the end of the day, we're talking about whether a specific interaction (or a set of interactions) can be handled over the network or not.
If you need the interaction to fully resolve (as in the state is updated and the success or failure of the interaction is visible to the user) within 800ms or so, then it shouldn't be performed over the network.
For interactive editors like Figma, you often have interactions based on key repeats, which usually fire at 50-200ms intervals. So client-side rendering is really the only feasible option.
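To make that concrete, here's a rough sketch (the names, endpoint, and selectors are all invented): key-repeat-driven changes render locally and synchronously, and only a debounced save ever touches the network.

```ts
// Sketch only: nudging a selected object with arrow-key repeats.
// AppState, "/api/selection" and "#selected" are hypothetical.
type AppState = { selection: { x: number; y: number } };
const state: AppState = { selection: { x: 0, y: 0 } };
let saveTimer: number | undefined;

function render(s: AppState) {
  const el = document.querySelector<HTMLElement>("#selected");
  if (el) el.style.transform = `translate(${s.selection.x}px, ${s.selection.y}px)`;
}

document.addEventListener("keydown", (e) => {
  // Key repeats fire every ~50-200ms; this path must never wait on the network.
  if (e.key === "ArrowRight") state.selection.x += 1;
  if (e.key === "ArrowLeft") state.selection.x -= 1;
  render(state); // local, synchronous re-render

  // Persist at most once per 500ms, well after the interaction settles.
  clearTimeout(saveTimer);
  saveTimer = window.setTimeout(() => {
    void fetch("/api/selection", {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(state.selection),
    });
  }, 500);
});
```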
> If you need the interaction to fully resolve (as in the state is updated and the success or failure of the interaction is visible to the user) within 800ms or so, then it shouldn't be performed over the network.
Most real-world SPA sites perform a lot more roundtrips over the network than the MPA equivalent, not less. And every roundtrip adds yet another 800ms to your update latency, plus the risk that some random network failure will break the SPA state update and force you to reload it from scratch.
Strangely enough, this dichotomy seems to exist only for the web platform.
Everywhere else (desktop, mobile etc) the model is SPAs.
The only reason people distinguish it for web is because of legacy: html + DOM, i.e. Documents.
Documents don't generally require programmers, even when written in LaTeX.
Both models can coexist. I believe that SPAs somewhat supersede MPAs and that an MPA can sometimes be a simplification for specific kinds of apps, a website being simply an app that has been broken apart and is sent piece by piece.
> Strangely enough, this dichotomy seems to exist only for the web platform.
> Everywhere else (desktop, mobile etc) the model is SPAs.
It exists for CLIs too, where some projects provide a collection of single-purpose programs (e.g. imagemagick) and others provide a single program which can do many things (e.g. git)
git is a facade and "git add" actually calls "git-add". On Windows this means separate exes, git-add.exe, git-commit.exe, git-update.exe, etc. But all these exes are actually identical. So git is multiple copies of a single program which can do many things!
and you have identified the problem. Web dev tooling, despite the extreme churn, is still absolutely poo-poo, to use the technical term.
of course, mostly because very few oh-new-shiny-woah-such-modern tools even attempt to solve the problem of backend-frontend state sync. Sure, maybe you get a piece of the puzzle (e.g. a library that conveniently rerenders on state change, React; or one that you can wire up with all the fancy observables, Angular; or one that's super simple, sleek, and even has magical runes that help with only rerendering the things affected by the state change, Svelte ... and maybe on top of these you get a state manager library, and then you still end up writing a thousand mutators/reducers in Redux by hand).
So we are still nowhere near a nice end-to-end full-stack tool that helps model both backend and frontend changes and then helps to design and implement an efficient API between them. (Because, obviously, it seems that this is not obvious to most people. Hence we get solutions like exposing your DB as a REST API, shipping your DB via WASM SQLite, and so on.) That said, even those might be better than one more frontend-only state management lib.
I think you're discounting (1) the fact that there is only one front-end runtime but any number of backend runtimes, and (2) the fact that the application is distributed across a network.
Those are the reasons why SPA and MPA are so different
> Everywhere else (desktop, mobile etc) the model is SPAs.
> The only reason people distinguish it for web is because of legacy: html + DOM, i.e. Documents.
There’s another key difference: the other model has preinstalled software and most functionality is on trusted clients. If I email you a Word document, you don’t have to download Word from the Office servers to open it, which creates very different trade offs. There’s also a big difference in trust: I don’t need to ask whether I want each web page I open to have access to the data on my computer.
The web’s big selling point was the immediacy of being able to open anything quickly and not needing to trust the remote server to run native code on your system (even in the 90s we knew that was a bad idea), so it’s not surprising that there’s so much gravity to its core model. The addition of more app-like abilities is really useful, but it’s led to a certain amount of app-envy where people often pick the cool technical challenge without asking whether they’re working on an app which needs it.
To me, the core of this entire issue is complexity and where it belongs.
My view is that you should perform as much processing as possible in the backend. This allows you to send as little data as possible over the network and allows your application code to make more sense because all your domain logic is centered in one self-contained app.
Now this self-contained app is just a software library; it doesn't do anything on its own. But you can throw a CLI on top of it and use it through the terminal. You can put a web API on top of it and use it from an SPA. You can use it as a backend for a server-side rendered app.
The ideal, in my eyes, is that the frontend only concerns itself with actually displaying things. It doesn't get a big list of data, filter and transform the data, then display it. It asks the backend for exactly what it needs, the backend provides exactly that, and the frontend displays it. That's the end of it.
Now you can have a website and a mobile app that are both trivial to develop and both use the same backend - if you fix a bug in the backend you've fixed it on both frontends.
I realize this may not always be possible or practical, but I think it is both of those things more often than not.
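A rough sketch of what I mean, with made-up names: the domain logic lives in plain functions that know nothing about HTTP or HTML, and the web API is a thin adapter that returns exactly what the UI will display.

```ts
import http from "node:http";

interface Invoice { id: string; dueDate: string; paid: boolean }

// Domain library: no HTTP, no HTML. A CLI or an SSR view could call this too.
function listOverdueInvoices(all: Invoice[], today: Date): Invoice[] {
  return all.filter((i) => !i.paid && new Date(i.dueDate) < today);
}

// Stand-in for real storage.
async function loadInvoices(): Promise<Invoice[]> { return []; }

// Thin web API on top of the library: the frontend gets exactly what it displays.
http.createServer(async (req, res) => {
  if (req.url === "/overdue") {
    const overdue = listOverdueInvoices(await loadInvoices(), new Date());
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify(overdue.map((i) => ({ id: i.id, dueDate: i.dueDate }))));
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(3000);
```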
When we start to even think about debate fallacies when comparing engineering methodologies, it's completely clear we already lost the game. In fact, we also lost the meta game, and probably some 2 or 3 outer meta layers of it.
So yes, we should design software for the specifics of the function it will provide. Do not let people evade their competency by talking in generalities.
A fun metaphor. A SPA-inclined team/consultancy/department will retreat to their motte when necessary. They'll live to fight another day. Given a chance, they'll return to the bailey, advocating for SPAs under a relaxed standard.
Using this metaphor can imply significant disingenuity: a lack of honesty about one's true belief.
> Using this metaphor can imply significant disingenuity: a lack of honesty about one's true belief.
I actually disagree. I think this is the natural state of people, and they come by it honestly. We make decisions emotionally and then justify them rationally. It's just the way we are.
You could maybe say it's a lack of honesty about one's true belief to themselves. But even then it's hard to fault somebody for lack of awareness for something that is very subtle.
Honestly I think pointing out this human tendency and calling it out with examples like this is the best way to combat it. Once people become aware of it, they are more likely to fight it internally.
Two things. First, yes, people can deceive themselves. To the extent this is true, I take your point; it isn’t a matter of honesty in the usual sense; it is perhaps better stated in terms of self-inconsistency, i.e. having internal contradictions in one’s beliefs.
Second, people can and do lie about this kind of thing. I’m talking about conscious deception. Motives vary; they range from “I’ll pick my battles” to “this is good for my income stream” to “these other people don’t know it yet, but I’m right, and they’ll thank me later” and others.
> … I think pointing out this human tendency and calling it out with examples like this is the best way to combat it. Once people become aware of it, they are more likely to fight it internally.
Sometimes that works. Sometimes it just causes people to dig in deeper.
That’s not why. In my experience, applications accumulate interactivity over time. At some point, they hit a threshold where you (as a developer, not an end user) wish you had gone with an interactive development model.
Also, for me, the statically typed, component-based approach to UI development that I get with Preact is my favorite way to build UIs. I’ve used Rails, PHP, ASP (the og), ASP.NET, ASP.NET MVC, along with old-school native windows development in VB6, C# Winforms, whatever garbage Microsoft came up with after Winforms (I forget what it was called), and probably other stacks I’m forgetting. VB6 and C# Winforms were the peak of my productivity. But for the web, the UI model of Preact is my favorite.
I don't see why you cannot add interactivity later on. Frameworks like VueJS provide an easy way to deliver interactive widgets on a subset of rendered pages of a traditional website. If you need an API on the backend, you need to write that API one way or another anyway.
This way people who are just looking for some information on a website can go visit, get it, and leave, without having to enable intrusive JS blobs, while those in for the interactive things on the website can get their preferred experience as well. Instead many websites are developed with only the second group in mind, often intentionally forcing you to run their code on your computer, or not delivering useful information at all.
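For example, something along these lines (the element id and widget are hypothetical) mounts a small Vue island only on the pages that actually contain it, while everything else stays plain server-rendered HTML:

```ts
import { createApp, h, ref } from "vue";

const mountPoint = document.querySelector("#order-filter");
if (mountPoint) {
  // Only pages that render this element pay for the widget's JS.
  createApp({
    setup() {
      const query = ref("");
      // A render function avoids needing the runtime template compiler.
      return () =>
        h("div", [
          h("input", {
            value: query.value,
            placeholder: "Filter orders",
            onInput: (e: Event) => (query.value = (e.target as HTMLInputElement).value),
          }),
          h("p", `Filtering by: ${query.value}`),
        ]);
    },
  }).mount(mountPoint);
}
```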
Agree, too many believe in the silver-bullet that solves all their problems. Different problems require different solutions, it's kind of simple but hard to realize when you're deep into the woods.
If you want to build a vector editor in the browser then yes, probably you want to leverage webassembly, canvas, webgl or whatever you fancy.
But if you're building a more basic application (like most CRUD SaaS actually are) then probably you don't want to over-complicate it, as you instead want to be able to change and iterate quickly, so the simplest tools and solutions give you the most velocity for changes, until you've figured out the best way forward.
The trouble is recognizing where on the scale of "Figma <> news.ycombinator.com" you should be, and it's hard to identify exactly where the line gets drawn at which you can justify upfront technical innovation over tried-and-tested approaches.
I think there's a similar dynamic as behind "nobody got fired for choosing Oracle" - it's safer to choose a more complex, but also more flexible technology.
If you're a tech lead, your worst nightmare is when you have to say - "this is very difficult to do with the current stack. When we were choosing this stack, we assumed you'll never want these things". You're not going to extract a binding promise from product/business that they will never want a certain class of things - you can only guess, and then hope, that the product will remain a dumb CRUD.
This exactly. And with modern approaches (not that it comes without a fair amount of effort), you can achieve an MPA style with SPA features via "isomorphic" JS (SSR).
The converse is also true, you can add a lot more interactivity than you used to in a server rendered HTML world with stuff like LiveView in Phoenix or Hotwire in the Rails world.
I think a good heuristic is looking at whether the UX of your app feels more multi-page or single page, that should be a pretty big factor in your decision.
Particularly with the web though, you're very rarely completely locked into one front end technology. It's 100% reasonable to say "this particular complex interaction should use React" without needing to port the entire application. I'm sure even Photoshop and Figma could build their account management or settings pages with MPA if they wanted to - I don't use them and have no idea if they do, but "some parts of my application require complex tools" doesn't mean "all of my app requires complex tools"
Absolutely, but it's a matter of tradeoffs and where you place them. It's still plenty common to use an MPA framework like Rails or Django for the management or CMS portion of your solution, while using an SPA framework for your front presentation. It's much more acceptable to say "doing that for this staff workflow is hard to do with our stack" than "doing this for our customers is hard to do with our stack".
Sure! I think it's just a question of scale and knowing your problem. If it's "the entire customer-facing area needs a large amount of interactivity" then going with a SPA makes sense. If it's "this particular UI element on the customer app needs a large amount of interactivity", that's easy to build as an isolated component regardless of the technology of the rest of the app.
One thing that I'll always view as a smell though is "we don't need X now, but we might need X in the future, so we should adopt this more complex technology just in case". We've learned that lesson MANY times as developers, it's not any less true for front-end technologies.
From a brief look at my logs and history usage, and generally my estimate, 95%, or dare I say 99%, of my traffic could be MPAs. Currently the only sites I visit regularly that are JS SPAs are Feedly, YouTube, Discourse forums and Twitter. And apart from Twitter, the others could have been MPAs and still be perfectly fine (although YouTube is debatable). I'd like to think the browsing habits of 80-90% of the web population don't deviate from mine that much.
The thing about JS SPAs is that they are hard to make 100% right. Even the simplest thing. And this goes back to the topic about web development and computing. The modern-day web is designed by Google for Google. Making things easier for 98% of the web simply isn't their thing. And that is not just on the web but everything else they do as well. And since no one gets fired for using what Google uses, we then end up with additional tools to solve the complexity of what Google uses.
Depending on how you count it, we are now fast coming close to 20 years of Google dominance on the web. And there hasn't been a single day I didn't wish for an alternative to compete with them. I know this sounds stupid. But maybe I should start another Yahoo.
"Currently the only site I go regularly that are JS SPAs are ...."
Are you sure about that? Well made SPAs don't look like apps. They navigate seamlessly and instantly because they're not redownloading and parsing all their header and footer HTML, re-constructing a brand new DOM, loading and reinterpreting CSS, and bootstrapping a new Javascript runtime on every click.
Look at https://react.dev/learn, click around the documentation pages - do you think that's an SPA or an MPA?
Open up your network tab, and you'll find out what's happening: When you hover over a nav header, it starts preloading a JSON file containing the content for that page. When you click on a navigation link, that content's loaded into the React DOM and some more prefetches of just JSON content for likely next pages are loaded up. Your browser's address bar is updated, but you are still in the same original page context you started in. It is insanely snappy to interact with.
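A simplified sketch of that hover-prefetch behavior (the `.json` URL convention and renderPage are my assumptions, not react.dev's actual code):

```ts
const prefetched = new Map<string, Promise<unknown>>();

function contentUrl(pathname: string) {
  return `${pathname}.json`; // assumed convention for per-page content
}

function renderPage(data: unknown) {
  // Hand the content to whatever renders the page body in place.
}

document.querySelectorAll<HTMLAnchorElement>("nav a").forEach((link) => {
  // Start fetching the page's content as soon as the user shows intent.
  link.addEventListener("mouseenter", () => {
    const url = contentUrl(link.pathname);
    if (!prefetched.has(url)) prefetched.set(url, fetch(url).then((r) => r.json()));
  });

  // On click, reuse the prefetched content and stay in the same page context.
  link.addEventListener("click", async (e) => {
    e.preventDefault();
    const url = contentUrl(link.pathname);
    const data = await (prefetched.get(url) ?? fetch(url).then((r) => r.json()));
    renderPage(data);
    history.pushState({}, "", link.pathname);
  });
});
```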
https://react.dev/learn is so slow on my phone, it takes 1.5s to open the burger menu, and about 1s to jump to a section. (Google pixel 5a). It must be some SPA that loads the whole documentation all at once I presume. A traditional MPA would probably work much better here.
edit: and like the sibling comment noted, the history back button gets messed up.
edit2: I mistakenly wrote nexus 5a instead of pixel 5a
Do you mean the pixel 5a? Just wondering because it would make a big difference if it was nexus 5 from 10 years ago, versus a much more recent pixel 5!
Whichever it is, it should certainly not take 1.5s to open a menu. Especially not on a website, that aims to teach people something about web development.
If I go to react.dev/learn, click on "Escape hatches" in the menu, and scroll all the way to the bottom of the page, the browser Back button no longer works because they've added nine duplicate entries to my history.
If the official React documentation website can't implement SPA page navigation properly, what chance does anyone else have?
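For what it's worth, the usual fix for that class of bug (assuming the duplicates come from scroll-synced URL updates, which is only my guess) is to reserve pushState for real navigations and use replaceState for everything else:

```ts
// Scroll or section tracking: never grows the history stack.
function syncSectionToUrl(sectionId: string) {
  history.replaceState(history.state, "", `${location.pathname}#${sectionId}`);
}

// Real navigation: exactly one history entry the user can Back out of.
function navigateTo(path: string) {
  history.pushState({}, "", path);
  // ...then render the new page's content client-side.
}
```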
Well that bug is clearly idiotic, and makes me feel a fool for thinking react.dev would be a strong example of sane SPA architecture to link to.
The idea is sound, and the basic loading behavior is as I said (not sure what the people who are encountering 1.5 second navigation times are doing), and the existence of an implementation bug doesn't undermine the theoretical soundness of the architecture... although, as you say, having one on the react docs is embarrassing.
> (not sure what the people who are encountering 1.5 second navigation times are doing)
On CPU-limited devices (and my computer with 4x CPU throttling enabled in devtools), react.dev appears to block the main thread for 500-1000ms while navigating to some of the "Learn React" pages—even if all the data for that page is already cached in memory.
I remember reading all kinds of blog posts about how Concurrent Mode and Time Slicing were gonna magically solve this by breaking up long tasks and prioritizing above-the-fold content so that it would pop into view faster. It would be funny if, in addition to being unable to correctly use the History API, the React team was also unable to use their own framework's performance features.
>The idea is sound, and the basic loading behavior is as I said
Yes, and that is why, despite hating the idea from the get-go (which was before 2009), I gave it plenty of time to mature. But the truth is, any technology is only as good as the human factor. We aren't perfect, and that is why we make mistakes even with basic things like this.
And this example just proves it even more. And I'm ignoring the site's performance, which felt really slow for what should be an MPA (and it is not).
Then well-made SPAs seem to be exceptionally rare. Somehow that site lags more than opening a new page on HN. You list all the work the browser is doing and yet somehow the SPA is making it do more. I agree it makes no damn sense and yet that is the experience I have of using them.
Yes, and every major MPA framework optimizes this away, the same way that SPA approaches support server side rendering so you don't see a literal blank page before the app downloads.
I think GP is talking about solutions like https://turbo.hotwired.dev/, which just paste server-generated HTML into the page instead of passing JSON into a client-side UI framework.
> During rendering, Turbo Drive replaces the current <body> element outright and merges the contents of the <head> element. The JavaScript window and document objects, and the <html> element, persist from one rendering to the next.
Does SSR make React a MPA? If "MPA" limits us to only frameworks that have to do a full browser navigation for every interaction, it's a pointless discussion - "MPA" frameworks have had these sorts of optimizations for a decade+ (Hotwire is the newest, but there was Turbolinks before that and PJAX before that). Sure, I'll agree that React is a better approach than using the 2005 version of a framework, but that's not useful.
Architecturally, you're still designing your application as though the user is performing a complete navigation; there's just Javascript present to optimize away some of the issues with that approach.
There's the picture-in-picture stuff when you navigate away too. I recently did that with an MPA, and it was not a straightforward experience to get right.
It's kind of an awful experience though? Do people actually want their videos to follow them? If I'm navigating away it's because I'm done, it actually makes me kind of angry that the video chases me.
In my case it was a podcast player, so primarily audio, where playing in the background is a perfectly normal thing to do and you might want to browse other content while playing.
Maybe such functionality is best left to the browser itself. Firefox already has the functionality to "detach" a video. Then you can scroll wherever you want and still see that video.
I still think you are not too wrong though. I usually use invidious and there are no interactive widgets, except for the video player, which I think is default HTML5 player. I rarely need anything else. And PIP can be done in Firefox with ease, without the website needing to implement anything.
> you should strive to make solutions that require the simplest tools possible whenever you can
I’ve gone back to making MPA apps with minimal JS. It helps me actually ship my projects rather than tinkering and having an over complicated setup for mostly CRUD tasks.
In one project that is a bit more data intensive and interactive I’m using Laravel Breeze / Laravel + inertajs (SSR react pages).
I’m also a big fan of Jekyll lately, I made my own theme on Thursday with only 2 tiny scripts for the mobile menu and submission of the contact form.
Using DOM APIs and managing a little bit of state is fine for many, many projects.
OTOH, when you don’t control the requirements and the business asks for a ton of stateful widgets, progressive enhancement can become a mess of spaghetti in the UI and API unless very carefully managed and well thought out. At that point you might as well go all in on React/Angular/Vue, especially when you have to account for a mix of skill levels and turnover.
A big factor in that “tinkeriness” of SPAs is how nearly every part of making an SPA well-behaved and pleasant to use falls almost entirely on the shoulders of the developer. Due to how little browsers provide on that front, well-behaved polished SPAs are very much not on the happy path or default. Even if you use the big popular libraries, special care must be taken to not build a product that’s a frustrating mess for users.
In comparison a server-side MPA will probably be at least decent to use unless the dev has been entirely careless, because that model better matches what browsers have been built for.
The takeaway is that for SPAs to be consistently good for both devs and users, browsers need to do the bulk of the heavy lifting and provide a “happy path”, largely eliminating the need for overwrought JS libraries that try to paper over browser inadequacies.
> The problem isn't so much those but how most developers lump themselves in with the incredibly interactive sites
It is not only Figma or Photoshop. Any site with multiple steps of interaction or complex filters over search results, etc., benefits from an SPA and declarative code. The experience is smoother, and development of anything but simple forms is much faster.
People disabling JS or working on satellite internet from a remote island are fringe cases and are not relevant for the business.
This attitude is the reason why I now dread filling out any random web form, because a poorly implemented crappy JavaScript ”flow” will take over my browser, break the back button, accessibility and autofill features and randomly fail and start over in step 6 out of 19. It’s the reason why a simple web form requires 100 MB of bandwidth to deal with.
People working from a train or on data roaming are not a fringe case. They have amounts of bandwidth that, 20 years ago, were enough to serve any complex web experience. It’s not acceptable that we now require more data than is contained in all the books in the Library of Congress to buy a ticket on Ticketmaster.
I can’t understand how someone looks at the ratio of useful action to code size, sees it’s something like 1:1e6, and thinks this is fine.
The modern web’s usability, efficiency and reliability are terrible, and worse each year. Defending the tech stack that led to this with “it’s a good UX” is both wrong and makes me feel like web devs live on another planet from the rest of us.
> It’s the reason why a simple web form requires 100 MB of bandwidth to deal with.
I have a hard time taking these arguments seriously because they get so exaggerated on HN.
I’ve done a lot of work from flights and extremely low bandwidth, high latency connections in foreign hotels. Not once have I encountered anything like a web form taking 100MB.
Two weeks ago, I let my partner use my data roaming to submit two forms on some Adobe collaboration thing, reply to a chat message on Teams and send an email. The counter on my phone said these actions used about 500 MB of bandwidth in the space of about 20 minutes.
I agree it sounds exaggerated, but I don’t think it is. This is kind of my point, it’s past the point you would think likely.
Stuff like filtering search results is very easily accomplished by a MPA with query parameters to a results page. The specific elements that allow you to specify query parameters often require more interactivity, but this is easy to layer on with a progressive enhancement type approach on top of a fundamentally MPA application.
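A minimal sketch of that approach (data and field names invented): the filter state lives in the URL, so results are bookmarkable and the browser's back button works for free.

```ts
import http from "node:http";

const products = [
  { name: "M3 hex bolt", category: "fasteners" },
  { name: "Ball bearing", category: "bearings" },
];

http.createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  const category = url.searchParams.get("category"); // e.g. /search?category=fasteners
  const hits = category ? products.filter((p) => p.category === category) : products;

  // The form submits with GET, so filtering is just another page request.
  res.setHeader("Content-Type", "text/html");
  res.end(`
    <form method="get" action="/search">
      <input name="category" value="${category ?? ""}">
      <button>Filter</button>
    </form>
    <ul>${hits.map((p) => `<li>${p.name}</li>`).join("")}</ul>
  `);
}).listen(3000);
```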
>Stuff like filtering search results is very easily accomplished by a MPA with query parameters to a results page.
But the "MPA to another results page" causing an HTML reload with a flickering blank screen is a jarring UI experience.
The issue isn't what's "easy" to implement. Instead, users prefer a fluid and responsive UI. An example of a website that has superfast filters in SPA style instead of "MPA results page" is McMaster-Carr: https://www.mcmaster.com/
On that website, when the user changes the categories or filters, the results of items instantly change without the jarring disruption and delay of a new page being loaded.
There were several previous HN threads about it. Based on the near-universal praise in the comments of those threads, I don't think converting McMaster's architecture to your suggested "MPA search results" would be considered an improvement:
+ https://news.ycombinator.com/item?id=32976978 : Mcmaster.com is the best e-commerce site I've ever used (bedelstein.com)
1402 points by runxel on Sept 25, 2022 | 494 comments
> But the "MPA to another results page" causing an HTML reload with a flickering blank screen is a jarring UI experience.
Pretty much every modern full stack framework includes approaches to do partial renders and / or DOM morphs of server generated HTML responses, eliminating the full-page refresh effect.
www.mcmaster.com seems to utilize this to some degree, actually - while yes there are JSON responses, there are what appear to be HTML partial responses as well that are presumably injected on the page. In any case, everything on that search engine would be trivially accomplished using a server rendered HTML approach without needing to utilize a SPA. It's actually a great example of something that would work great with progressive enhancement - the search bar can start as a simple input that leads to full page search results, the navigation can do a full page refresh if the partial page refresh JS isn't available for some reason. Javascript can make it better without being a requirement.
A good rule of thumb is that if an interaction existed at roughly the same fidelity during the Web 2.0 days, it's not something that requires a SPA framework. Typeahead search results and categorized product listings existed and were functional to the level of the site you linked back then.
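A sketch of that enhancement layer (the element ids and fragment convention are assumptions): without this script the form still works as a normal GET navigation; with it, the server-rendered fragment is swapped in place.

```ts
const form = document.querySelector<HTMLFormElement>("#search-form");
const results = document.querySelector<HTMLElement>("#results");

if (form && results) {
  form.addEventListener("submit", async (e) => {
    e.preventDefault();
    const params = new URLSearchParams();
    new FormData(form).forEach((value, key) => params.append(key, String(value)));

    const res = await fetch(`${form.action}?${params}`, {
      headers: { "X-Requested-With": "fetch" }, // hint for the server to return a fragment
    });
    if (!res.ok) {
      form.submit(); // fall back to the plain full-page navigation
      return;
    }
    results.innerHTML = await res.text();         // inject the server-rendered fragment
    history.replaceState(null, "", `?${params}`); // keep the URL shareable
  });
}
```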
>partial renders and / or DOM morphs of server generated HTML responses, eliminating the full-page refresh effect. www.mcmaster.com seems to utilize this to some degree, actually - while yes there are JSON responses, there are what appear to be HTML partial responses as well that are presumably injected on the page.
Uhm... yes?!? The behavior you listed is exactly why I gave you that McMaster example. So I guess I'm a little confused. In any case, your comment matches up with the wikipedia definition of SPA (https://en.wikipedia.org/wiki/Single-page_application):
>A single-page application (SPA) is a web application or website that interacts with the user by dynamically rewriting the current web page with new data from the web server, instead of the default method of a web browser loading entire new pages. The goal is faster transitions that make the website feel more like a native app.
An alternate way of interpreting your reply to me is if you also categorize McMaster's website behavior as a form of "MPA". In other words, you classify McMaster's loading of new HTML fragments and rewriting DOM as "multiple pages". I've not heard others define MPA in this way.
>, everything on that search engine would be trivially accomplished using a server rendered HTML approach without needing to utilize a SPA. It's actually a great example of something that would work great with progressive enhancement - the search bar can start as a simple input that leads to full page search results, the navigation can do a full page refresh
Yes, we've already agreed about it being technically trivial. The issue is end user's preferred UI experience. Users don't want the "page refresh/reload" even though it's trivial.
I think a lot of the time "SPA" vs "MPA" essentially means "does the client largely render its own HTML" or "does the server render HTML and the client just displays it". Whether it displays that with a full page refresh or by injecting HTML via Javascript does not in practice matter. The idea of using AJAX to render HTML fragments to increase interactivity predates the term "SPA" by about a decade.
That's not strictly what the acronyms SPA and MPA mean, but in practice people refer to a Rails application that uses large amounts of Hotwire as an "MPA" (even if it never results in full page refreshes and often doesn't even feel like navigating pages) and things built with tools like React as "SPAs" (even if you're perfectly capable of navigating between pages and getting React rendered by the server until the client takes over routing).
If your definition of "MPA" is "every interaction requires a full page load", it's a pointless discussion, because that's not really the reality of development even with "MPA" frameworks like Rails or Phoenix (I can't really speak to stuff like Laravel, but I'm sure they have an equivalent)
Maybe a good way to think about it is that frameworks like Rails are fundamentally built around the concept of the server returning new pages on navigation, and optimize that to provide a better experience, while SPAs are designed around the idea of a single page visit instantiating an application, at which point the client is in control of navigation, and they optimize that to provide a better user experience (i.e. server-side rendering of pages on first load).
>I think a lot of the time "SPA" vs "MPA" essentially actually means "does the client largely render it's own HTML" or "does the server render HTML and the client just displays it".
It seems like there was already terminology of CSR-vs-SSR (client-side vs server-side rendering) to differentiate that so there was no need for SPA-vs-MPA to overlap with CSR/SSR to try and make the same distinction.
>Whether it displays that with a full _page_ refresh or by injecting HTML via Javascript does not in practice matter.
It seemed like the 'P' in SPA-vs-MPA is literally about the Page(s) being reloaded. It's "single page" or "multiple pages". That's why developers like to clarify that Next.js -- even with SSR HTML hydration of various subpages -- is still an SPA because the page on the client-side browser isn't reloaded. I just did some skimming of various "SPA vs MPA" search results and none seemed to use those acronyms as a way to classify CSR-vs-SSR. (https://www.google.com/search?q=spa+vs+mpa)
I'm also not clear how you classify Mcmaster.com ? Is it an MPA to you?
>If your definition of "MPA" is "every interaction requires a full page load", it's a pointless discussion,
No I'm not saying every interaction. I was responding to your original suggestion of "MPA with query parameters to a results page" ... and showing how McMaster.com search filters do not work the way you recommend it should. Each click of navigation and filters triggers a JSON payload and dynamically rebuilds the DOM tree. The browser's performance.timing.loadEventEnd property value does not change.
> But the "MPA to another results page" causing an HTML reload with a flickering blank screen is a jarring UI experience.
This is the worst and dumbest excuse for SPA bullshit. It's not jarring. You'll get over it. It's a fraction of a second where your device is obviously doing a thing.
Web devs love the word "jarring" like it's some world shattering visual effect. SPAs break all the time in dumb-ass ways that are way more jarring and experience breaking than a page load.
>Web devs love the word "jarring" like it's some world shattering visual effect.
I've never been a web dev. I'm just explaining why the typical mainstream end users who don't have the same patience as HN-type techies (who more happily accept MPA) do not like the discontinuous UI of reloading pages.
Each click on MapQuest's North/South/East/West buttons and Zoom In/Out blanked out the entire page and loaded a new page to shift the map viewport. This was a suboptimal UI experience for the typical user. I concede it wasn't "jarring" to you, but it was to a lot of users, especially compared to a CD-ROM maps experience. Example video of a smoother maps UI experience circa ~2000 from Microsoft CD-ROM desktop software without "blank reloading pages" to move a map around and change zoom levels: https://www.youtube.com/watch?v=4YO_KGdsUm4
The Mapquest "MPA page reloads" from 2006 was a UI that was less smooth than the Microsoft Streets CD software from 2000.
In 2005, when Google launched Google Maps with extensive usage of Javascript live-loading map tiles to provide smooth scrolling without reloading pages, end users liked that UI because it felt more interactive. In response, Mapquest also eventually switched away from the old-style SSR MPA page reloads: https://techcrunch.com/2007/10/12/exclusive-mapquest-plays-c...
The 2005 SPA-style of Google Maps just gets the UI back to what users already experienced in 2000 with desktop software. The SSR MPA page reloads was something that end users endured with Mapquest but it wasn't actually the UI they really wanted.
I'm not advocating that web devs use SPAs (or SPA frameworks). Instead, I'm saying that responding with "SPA websites can be redone as MPA and it's trivial" is saying something that's true but still doesn't actually address the issue that mainstream end users don't like the discontinuity of MPA-type UIs. That's the subthread I was addressing: https://news.ycombinator.com/item?id=38901249
E.g. McMaster.com is not a "web app" like Figma/Photopea but users prefer the SPA-style UI of that parts catalog website.
> People disabling JS or working on satellite internet from a remote island are fringe cases and are not relevant for the business.
Satellite internet (even before Starlink) is actually plenty fast for a modern website, as long as delivery is halfway optimized to avoid a thousand round trips.
It’s people on mobile phones in 3rd-world countries that suffer the most, but they end up with specially optimized websites and even separate mobile apps if they’re a target market.
People who disable JS are virtually non-existent in the real world, outside of bubbles like HN comments. Building technology strategies to cater to this tiny minority is not a good decision.
> People who disable JS are virtually non-existent in the real world, outside of bubbles like HN comments.
Except for all the microbrowsers[0] and crawlers that don't have JavaScript enabled or don't run all the JavaScript bullshit on a page. Building accessible sites that can be used in the widest possible context is good engineering.
Accessibility workers have to hear rather too much of the "these people aren't relevant to the business" arguments. And while every business has its own concerns and priorities, standards based on exclusion don't belong on the web.
For most scenarios, the experience should be better with a well-designed SPA: while the first load may be slow and a person may have to wait a minute, once it's loaded, the data transfer per interaction is much smaller. For a use case of just loading a page, reading it, and submitting a few fields, it will be worse. But for complex things like multiple filtering, searching for different dates, or seat selection, it will be faster.
One difference is that server interactions on a MPA are usually more predictable. I can wait for a good internet connection to submit a form or click a link. On top of that, I'm using browser navigation to navigate a lot of the time, and while it's not impossible to provide good feedback about interactions in a SPA, many sites don't (or worse, use optimistic UI updates without handling failure states well so it's impossible to tell what's persisted and what's not).
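The kind of failure handling I mean, as a rough sketch (the endpoint and helper are invented): apply the optimistic update, but roll it back and tell the user when the request doesn't make it.

```ts
function showFailure(message: string) {
  // Surface the failure however your UI does; alert() is just a placeholder.
  alert(message);
}

async function toggleLike(postId: string, button: HTMLButtonElement) {
  const wasLiked = button.dataset.liked === "true";
  button.dataset.liked = String(!wasLiked); // optimistic UI update

  try {
    const res = await fetch(`/api/posts/${postId}/like`, {
      method: wasLiked ? "DELETE" : "PUT",
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
  } catch {
    button.dataset.liked = String(wasLiked); // roll back so the UI shows what actually persisted
    showFailure("Couldn't save your vote. Check your connection and try again.");
  }
}
```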
Their experience shouldn’t be much different on an SPA vs an MPA. If they can do an MPA round trip involving a medium-size image, then they should be able to load an SPA.
Because they break when a request fails. MPAs have a request resubmission UI out of the box. They also have request history navigation, easy resource bookmarking and other stuff you can reimplement in an SPA but usually don't.
I don’t know. Recently I was viewing an image in the Discord web app and it suddenly disappeared because my device had lost connection, even though the image was already fully loaded in the browser.
I mean to be fair, MPAs are by definition unusable without a consistent internet connection. By design every meaningful interaction needs to communicate with a server.
> you should strive for the simplest tool possible, and even further, you should strive to make solutions that require the simplest tools possible whenever you can.
Why do you believe this? I couldn’t disagree more. People should strive for the most effective tool, and most of the time that’s what they already know, unless some new tool’s efficacy outweighs the cost of learning it.
It also depends on your definition of simple. The architectural model of Preact is simple. You change state, your application renders correctly. The architectural model of an MPA with interactivity sprinkled in seems as simple, but quickly becomes more complex over time, and has ultimately not been as simple in my experience.
Preact / React is simple because it solves for a very small slice of what an application needs to do and willfully ignores the rest. For example, (P)React has no real opinion on how it interacts with a server, which is a fundamental requirement of 99% of web applications (the new server components stuff is a counterargument I guess, but even then it doesn't consider the complexities of what a backend needs to do and is essentially a "put backend stuff here" slot)
MPA frameworks tend to present themselves as complete and batteries included. If you're using Rails, you can build a complete application without thinking about a single other library than what Rails ships with.
Neither approach is correct, but comparing them is like saying that HTML is so much simpler than C++ so everyone should use HTML.
But, that is a form of simplicity. It’s kind of the UNIX approach. I use Preact and a typed RPC client and a very simple router. The result is a reasonably small, easy-to-reason about program that I find very enjoyable to work on. If Preact shipped with their own communication layer and router, that would reduce the simplicity, and I’m not sure I’d actually like the choices they made for the part of the stack that is unrelated to rendering. Angular is an example of what you describe, and it’s not for me.
Sure, that's fine, I'm just saying they're not really directly comparable. A batteries included framework is an entirely different beast than a view library.
Fair point that my point was overly reductive. But given you understand multiple tools, you should reach for the simplest tool possible to solve the problem. And in some cases you should still reach for the simplest tool possible even if you don't understand it yet. If you've only ever used Kubernetes but someone asks you to host a static HTML file at an address, you should learn how to use a simpler solution for that.
Yea, tools and processes they’ve decided on decades ago. You don’t see these people writing blog posts about new tools and wasting time evaluating them yearly like in tech.
If there’s an actual issue like there was with deaths in the medical profession due to not washing hands, then they evaluate.
1. There probably is quite a bit of discussion about tool selection in some fields. Surgical innovations didn't end with the invention of the scalpel. I'm sure there's lots of discussion about the appropriate use cases for robotic vs laparoscopic vs traditional surgery, we just don't see them because we're on a tech forum and not where medical doctors discuss tools. I can say for a fact that I've seen more written about the merits of various screwdriver heads than I would have thought possible.
2. Software development is a little unique as an industry since it's not all that common that the users of the tools are also the people who make the tools. There's naturally going to be a lot of discussion about tools if you're both the maker and the user of them.
3. Us not having a standard for which tool to use is a reason to have these discussions, not a reason to say "pick whatever, it doesn't matter". The reason that those people don't write blog posts and have discussions about the merits of a hammer vs. a screwdriver is precisely because they're so well established - if both were a couple of years old, absolutely people in construction would be discussing whether to use nails or screws for an application.
...except those fields are literally thousands of years old, while the software industry is about 70?
Them being more mature doesn't change the fact that processes and tools are crucial
"the problem" "you should" – this is the language of special interests
developers are salarymaxxing first, second virtue signaling to support their case in their employers' selection process, third work-minimization and pain-minimization. Even the Simplicity Paladins are min/maxxing the same three priorities, perhaps weighing pain-minimization above salarymaxxing, yet still subject to the same invisible macro forces that shape our lives. and I postulate that this is a complete explanation of developer behavior at scale.
I feel like I understand 50% of your comment, is this some DSL from a different ecosystem being used to explain developer behavior, or something like that?
Lol, this is written in very game-like language where you often need to prioritize certain aspects (to max something) above others. This is often because you get a limited number of "ability points" when you level up, so "maxing" strength means you prioritize using those points to gain strength.
Without a study of some sort, we’re just exchanging anecdotes. I’ve seen resume-driven development a handful of times in my 20-year career. You may be right, but we won’t know until we come up with some way to measure it.
Most developers I’ve worked with have just been interested in solving problems.
Physicists work on interesting problems. Developers work on profitable problems, mostly manufactured, for huge piles of money, from home, and with yoga over lunch.
I didn’t say “interesting” problems. Just problems. Anyway, sometimes they are interesting. I think Rich Hickey worked on some interesting problems. Clojure and Datomic are pretty neat, and Electric looks like an interesting problem, too :)
Rich Hickey is a founder, not a developer and regardless he is motivated by pain-minimization in the context of money making: "I had had enough!" [of manufactured complexity in commercial development] — his paper
> The problem isn't so much those but how most developers lump themselves in with the incredibly interactive sites because it sounds sexier and cooler to work on something complex than something simple.
This is very similar to the NoSQL arc. Some people at prestigious places posted about some cool problems they had, and a generation of inexperienced developers decided that they needed MongoDB and Cassandra to build a CRUD app with several orders of magnitude fewer users, transactions, or developers. One of the biggest things our field needs to mature on is the idea of focusing on the problem our users have rather than what would look cool when applying for a new job.
The SPA obsession has been frustrating that way for me because I work with public-focused information-heavy sites where the benefits are usually negative and there’s a cost to users on older hardware – e.g. the median American user has JavaScript performance on par with an iPhone 6S so not requiring 4MB of JS to display text and pictures has real value – but that conflicts with hiring since every contractor is thinking about what’ll sound “modern” on their CV.
"Keep it simple, stupid!", is a design principle first noted by the U.S. Navy in 1960 [0]
... but some coders, including yours truly, have been brought up with that principle as a keystone of programming from day one (which was decades ago). It is related to the more modern DRY principle.
If this is brought to the table now it is only seemingly so, caused by the fact that those at the table must have forgot it, or never learned it. Of course, there are also commercial interests in keeping things as complicated as possible - it could just be that these have had too much influence for too long.
I think it's (increasingly) not as binary as either MPA or SPA. Although it has been for quite some time now.
A lot of web developers strive for some amount of templating and client-side interactivity on their websites. And when frameworks like React came up they solved interactivity issues but made it hard to integrate into existing server-side templating systems, which were mostly using different programming languages.
So because integrating the frameworks for client-side interactivity was hard, the frameworks also took on the job of templating the entire site and suddenly SPAs were popular. I think a big draw here was that the entire tooling became JavaScript.
But the drawbacks were apparent, I guess a big one was that search engines could not index these sites and of course performance, so the frameworks got SSR support. The site was still written in the framework, rendered to HTML on the server and then hydrated back to a SPA on the client.
Now, even more recently we got stuff like islands, where you still use the handy web framework but can choose which parts of your site should actually be interactive (i.e. hydrated) on the client. And I believe this is the capability that has just long been missing. Some sites require no JS on the client (could even be SSGs), others require a little interactivity, and some make most sense as full blown SPAs.
We're finally entering the era where the developer has that choice even though they use the same underlying framework.
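A hand-rolled sketch of the islands idea with Preact (the real frameworks do much more, and the data attributes here are my own convention): the server sends plain HTML, and only elements marked as islands ever get hydrated.

```ts
import { h, hydrate } from "preact";
import { useState } from "preact/hooks";

// A hypothetical interactive component; everything around it stays static HTML.
function Counter({ start = 0 }: { start?: number }) {
  const [n, setN] = useState(start);
  return h("button", { onClick: () => setN(n + 1) }, `Clicked ${n} times`);
}

const islands: Record<string, (props: any) => any> = { Counter };

// Server-rendered markup: <div data-island="Counter" data-props='{"start":3}'>...</div>
document.querySelectorAll<HTMLElement>("[data-island]").forEach((el) => {
  const Component = islands[el.dataset.island ?? ""];
  const props = JSON.parse(el.dataset.props ?? "{}");
  if (Component) hydrate(h(Component, props), el);
});
```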
When you vote on a HN comment while writing a reply, it reloads and you lose your reply. That's the kind of problem you have with MPAs, even if you aren't building the next Figma.
Simplest feels like a folly. No project of significance stays in a simple phase. They all grow and expand.
Having a stable reliable generally applicable decision/toolset you can apply beats this hunt for optimization to smithereens. Don't optimize case by case. Optimize for your org, for your life, lean into good tools you can use universally and stop special casing your stuff. There's nearly no reason to complicate things by hounding for "simplicity." Other people won't be happier if you keep doing side quests for simple, and you won't be either.
(Do learn to get good with a front-end router, so you can avoid >50% of the practical downsides of SPAs. And I hope over time I can recommend WebComponents as a good-for-all un-framework.)
The core of what differentiates applications isn't what happens on the front end. Putting all the focus on the client which gets delivered seems like a misappropriation of funds.
Especially as a one-man team it just doesn't make sense. And on teams with multiple people, having relatively static HTML is a really effective abstraction.
I, for one, don't want them rendered in my browser. I have an OS that can run apps, and I want my browser to be an app that renders simple HTML pages. If you want an app, make a damn Desktop app that can run on my OS.
I couldn't disagree more. Desktop apps are often so invasive that they almost feel like malware. Every time I install a desktop app I have to ensure that it isn't reading random files from my filesystem, snooping on my clipboard, or making itself persistent so that it restarts automatically every time I reboot my computer.
Adobe apps like Photoshop are some of the worst offenders. Sometimes I'll kill an Adobe process running in the background, only to realize that there's an additional background process ready to restart the first one. It's like playing whack a mole trying to stop all of the creative cloud junk processes. I would much rather sandbox software like that in the browser where I can close a tab and be done with it (and where I'll be prompted before an app tries to read passwords from my clipboard or access files from my filesystem).
You can do a lot of things in theory, but in practice browsers are much better sandboxes than desktop operating systems.
> a browser that is so complex that nobody can even imagine writing a new one.
I'm not sure how this is relevant? As a user I don't care how complex a browser is. I care that it sandboxes applications better than my desktop operating system. Unless you mean to say that the complexity implies a greater surface area for security related bugs, in which case surely the underlying os is even more complex (which is what native apps run on). I would imagine writing a new desktop os would be even more complex than writing a browser app that runs on top of it.
> For instance Android does that sandboxing by default.
Ok, so now we're talking about mobile operating systems rather than desktop operating systems, which to me feels like an implicit concession that desktop operating systems are indeed bad at sandboxing applications.
But even if we do shift the goalposts — even mobile operating systems pale in comparison to a web browser when it comes to sandboxing. Android and iOS will notify you if an app reads from the clipboard (which they’ve only recently started doing), but your browser won’t just notify you, it’ll ask you to confirm before a website reads from the clipboard. A website can’t even make a request to a third-party domain unless that domain has CORS enabled. And new vulnerabilities pop up all the time. There was an article that generated a lot of traction on HN just a few weeks ago about how certain iOS applications can pinpoint your location by scanning for known hot spots your device has access to [1].
> but in practice browsers are much better sandboxes than desktop operating systems.
Do you know anything about sandboxing, or are you throwing that there for the sake of the argument?
> I would imagine writing a new desktop os would be even more complex than writing a browser app that runs on top of it.
Oh right, I guess you don't really know about sandboxing then. So it won't be a super constructive debate given that your position is apparently fundamentally based on your intuition about sandboxing.
> As a user I [...]
I don't know who invented that "As a user" thing, but I find it completely stupid. In my view it is just used as a justification for anything one wants when they don't have a better argument.
In this case users are perfectly fine running standalone apps on their smartphone, let's not pretend they would really not want the same model on their desktop.
> which to me feels like an implicit concession that desktop operating systems are indeed bad at sandboxing applications.
Not at all, I was just giving a real-world example of sandboxing of apps at scale.
> even mobile operating systems pale in comparison to a web browser when it comes to sandboxing
If your baseline is sending a full web browser with every app you make, on desktop you could run each app in a VM and it would obviously be better.
> A website can't even make a request to a third party domain unless the website has cors enabled.
An app can't make a request unless it has the internet permission, what's your point?
My point is that webapps move everything into the browser, going towards a world where something like ChromeOS is the only valid way to use a computer. I want to choose my OS, I don't want to rent an OS provided by BigTech, whether it is ChromeOS, Windows or anything else. The model where users pay for a product but don't own it is good for companies, not for users. "As a user", I want to own the product I pay for. And I want to pay for the product I want to own.
> I don't know who invented that "As a user" thing, but I find it completely stupid. In my view it is just used as a justification for anything one wants when they don't have a better argument.
> "As a user", I want to own the product I pay for.
Ironically, you're the one who lacks a concrete argument, which is why you're attacking the wording of my statement, rather than the substance of it. You're honing in on the first three words of my sentence, because you can't debate the argument on its merits. You then use the exact same wording in your final sentence, but encase it in quotes as if that somehow absolves you of your hypocrisy. Given that this website is frequented by software developers, I think there's a useful distinction to be made between thinking about problems in terms of their development, versus thinking about them in terms of their utility to end users.
> Not at all, I was just giving a real-world example of sandboxing of apps at scale.
It would be charitable of me to call it a "real-world example". You simply said "Android does that sandboxing by default", without a single supporting statement or example, despite your so-called extensive knowledge of sandboxing.
> If your baseline is sending a full web browser with every app you make, on desktop you could run each app in a VM and it would obviously be better.
No one is "sending a full web browser with every app you make". Browsers come preinstalled on every popular operating system.
> An app can't make a request unless it has the internet permission, what's your point?
What in the world is "the internet permission"? I've never had an operating system ask me if I'd like to grant an app "the internet permission". Have you operated a computer before?
> Do you know anything about sandboxing, or are you throwing that there for the sake of the argument?
> Oh right, I guess you don't really know about sandboxing then. So it won't be a super constructive debate given that your position is apparently fundamentally based on your intuition about sandboxing.
As someone who's written both web apps and desktop apps, I do in fact know a considerable amount about sandboxing. Do you know anything about sandboxing? You're questioning my knowledge to deflect from your lack of a coherent rebuttal. What exactly have you written during this conversation to demonstrate your comprehensive knowledge of sandboxing? You're concluding that I lack knowledge on sandboxing because I admitted to not having single-handedly written an operating system or web browser? Really? Are you writing an OS in your free time when you're not writing about the mysterious "internet permission"?
> My point is that webapps move everything into the browser, going towards a world where something like ChromeOS is the only valid way to use a computer. I want to choose my OS, I don't want to rent an OS provided by BigTech
You've got it backwards. If I want to support users running a free, open-source operating system like Linux, as a web app developer I don't have to do anything special. Linux can run web browsers, and therefore it can run web apps. Case in point: Photoshop. Neither Photoshop nor the rest of the Adobe Creative Suite runs on Linux, but the Photoshop web app does, because web apps are universal. There's a reason Apple took years to finally add push notification support to iOS web apps: web apps threaten the mobile operating system duopoly.
> Ironically, [...] You then use the exact same wording in your final sentence
Thanks for explaining to me what I did ;-).
> You simply said "Android does that sandboxing by default", without a single supporting statement or example
Are you questioning the fact that Android apps are sandboxed? If yes, you may need to do some reading on your own. I am not here to teach you how Android works.
> No one is "sending a full web browser with every app you make".
All the webapps that try to look like Desktop apps have to ship a browser with them. You mentioned VSCode, right?
> What in the world is "the internet permission"? I've never had an operating system ask me if I'd like to grant an app "the internet permission". Have you operated a computer before?
Oh come on... you just don't have the slightest idea how native apps work, do you? It's literally called "android.permission.INTERNET". Have you ever tried something not web?
> Are you writing an OS in your free time when you're not writing about the mysterious "internet permission"?
As a matter of fact, not an OS but embedded distributions. That... wait for it... use sandboxing. Do I need to get back on the "mysterious" internet permission? Wait, here's a link to help you: https://developer.android.com/develop/connectivity/network-o....
> as a web app developer I don't have to do anything special.
Not even Google "Internet permission" before dismissing someone's point. I love that kind of webapp developers.
> because web apps threaten the mobile operating system duopoly.
They threaten every platform by making everything a ChromeOS system (no, not literally ChromeOS, but something based more and more around Chromium, which is owned by Google).
> Are you questioning the fact that Android apps are sandboxed? If yes, you may need to do some reading on your own. I am not here to teach you how Android works.
I'm not questioning whether or not Android apps are sandboxed. I'm questioning how well they're sandboxed relative to web apps, which is why I gave you several examples of capabilities that a native app has that a web app does not. You're losing the thread of the conversation.
> All the webapps that try to look like Desktop apps have to ship a browser with them. You mentioned VScode, right?
> Not even Google "Internet permission" before dismissing someone's point. I love that kind of webapp developers.
I never said anything about VSCode. You can't even remember what we've talked about. I love that kind of commenter. We're not talking about desktop apps that use web technologies vs desktop apps that don't. We're talking about desktop (and now apparently mobile) apps vs web apps that run in the browser. Allow me to quote your original comment that I replied to as a memory refresher: "I, for one, don't want them rendered in my browser. I have an OS that can run apps, and I want my browser to be an app that renders simple HTML pages." This is what we are debating. You want to shift this conversation into a flamewar about desktop apps built with Electron, because you know your actual argument has less merit. This whole conversation has consisted of you shifting goal posts, and retreating to a lesser version of your original argument. I'm still waiting for you to compare the security of desktop apps to web apps, which was my entire original point.
> Oh come on... you just don't have the slightest idea how native apps work, do you? It's literally called "android.permission.INTERNET". Have you ever tried something not web?
I'm talking about permissions that a user has to intentionally grant via an explicit prompt, not a list of bullet points that appear to a user if they happen to view an app's detail page [1]. Your own link explains it best: "Note: Both the INTERNET and ACCESS_NETWORK_STATE permissions are normal permissions, which means they're granted at install time and don't need to be requested at runtime."[2]
But you were actually responding to my comment about CORS when you brought up "the internet permission". Unlike the coarse-grained permissions that most operating systems offer, CORS allows any website to prevent any other website from accessing its resources. Which means I can't use a web app to form a botnet that attacks some innocent server, unless that server explicitly allows it via a CORS header (and also ignores the incoming origin header). A desktop app can connect to any domain it wants, and can even directly connect to the server's IP and impersonate a legitimate client by forging the origin and user-agent headers.
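Roughly, this is what the browser enforces from the page's side (a minimal sketch with made-up domains):

    // Runs inside a page served from https://my-app.example (hypothetical).
    // The browser attaches "Origin: https://my-app.example" itself; the page
    // cannot forge or strip it.
    async function readThirdParty(): Promise<void> {
      try {
        const res = await fetch("https://api.other-site.example/data");
        // The body is only readable if the server answered with
        // "Access-Control-Allow-Origin: https://my-app.example" (or "*").
        console.log(await res.text());
      } catch (err) {
        // Without that header, the browser refuses to hand the response
        // to the page and the promise rejects.
        console.error("Blocked by CORS:", err);
      }
    }

A desktop process making the same request has no such referee sitting between it and the socket.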
> They threaten every platform by making everything a ChromeOS system (no, not literally ChromeOS, but something based more and more around Chromium, which is owned by Google).
No...they don't. Have you forgotten that Firefox and Safari exist, or should I send you a link to their home pages? But even if we put that aside, during this entire discussion you've been championing Android which is...wait for it... developed by Google. Please tell me you're being intentionally obtuse?
> But even if we put that aside, during this entire discussion you've been championing Android which is...wait for it... developed by Google.
I am not AT ALL saying that we should push for Android everywhere. I am just saying that Android (and iOS, though I don't know the details of how iOS works) sandboxes apps. I don't think security is an argument for PWAs. The argument for PWAs is "I know webtech and it would be cheaper if everything ran in Chromium".
Be assured that if the discussion was about using Android everywhere (web, mobile, desktop), I would be against it as well. I don't want a one-size-fits-all solution, because it usually doesn't fit that well, and it kills diversity.
Oh right, I guess you don't really know about iOS sandboxing then. So it won't be a super constructive debate given that your position is apparently fundamentally based on your intuition about iOS sandboxing. Remember that line [0]?
> I don't think security is an argument for PWA. The argument for PWAs is "I know webtech and it would be cheaper if everything ran in Chromium".
Security is MY argument for distributing software in the browser vs as a desktop or mobile application. If you refuse to engage me about the point I'm making, then you're arguing against a straw man, which ultimately indicates that you just don't have a strong rebuttal, which is what I've been saying since the very beginning [1].
> Be assured that if the discussion was about using Android everywhere (web, mobile, desktop), I would be against it as well. I don't want a one-size-fits-all solution, because it usually doesn't fit that well, and it kills diversity.
So you're an Android developer who's mortally afraid of Google hegemony? That's some next level cognitive dissonance. If you're afraid of Google dominance, I'm sorry to tell you this, but Android is their best tool for accomplishing that goal. The EU fined them 5 billion in 2018 over this, and told them to stop "forcing manufacturers to preinstall Chrome and Google search in order to offer the Google Play Store on handsets. Google will also need to stop preventing phone makers from using forked versions of Android" [2]. You're afraid of the influence of Chrome, and want people to develop directly for Android, but Google is using their Play Store and all of its Android apps as leverage to force manufacturers to preinstall Chrome (and Google search). Android apps give Google the leverage to force Chrome down everyone's throats.
You're afraid of a Google browser monoculture, and don't think Firefox and Safari present enough competition, and your solution is for people to develop apps directly for Android and iOS, where there's even less competition? And by the way, the only competition Android has is a closed-source operating system (iOS) that doesn't even allow sideloading apps or competing app stores. If web apps were more popular we wouldn't have a mobile duopoly (iOS and Android), or a desktop duopoly (Windows and macOS), because the web is an open platform and there are web browsers on every operating system (including desktop Linux, the various BSD variants, Ubuntu Touch, et cetera). This is why I told you a long time ago that you've got it all backwards [3].
Alright, let's take a step back. First, I am not a mobile developer. I was mentioning Android as an example of sandboxing outside the browser (mobile developers don't have anything to do with that sandboxing). Other examples include whatever iOS does (which I don't know), containers (Docker and the like), VMs, and everything in between (like what snap or flatpak use). My point there was that running code in a browser is far from the only way to do sandboxing.
Sandboxes usually have to give permissions, with some granularity. The more permissions you give, the larger the attack surface. There is nothing that makes browsers inherently safer than other sandboxes: a browser is just a process running in user space. If anything, modern browsers are so complex (and getting worse with time) that the attack surface is big, which is why they require a ton of resources in terms of security.
Moreover, Web UIs bring their own class of issues that don't really apply to native apps. You insisted on CORS, which is one mitigation for some of those issues. But CORS is really a browser thing, I don't think it really makes sense to compare it to anything outside the "webview world".
If security is your concern (and you seem to insist that it is), then webapps are really not better than the alternatives. Actually, the Apple Store and the Play Store (to give an example in the mobile world) allow Apple and Google to somehow monitor the apps that users install, which is most certainly more secure than a model where anyone can load any webapp from any website.
I see many reasons to want PWAs (which I may or may not share), but security is not one.
> Alright, let's take a step back. First, I am not a mobile developer.
I think you're whichever kind of developer your current position requires. You've been talking about Android non-stop throughout this conversation, and conversations you've had with others on this website [1]. When you were lambasting me about my perceived knowledge of mobile development you were touting your Android knowledge, and taunting me about whether or not I've done anything outside the web. Now that I've proven Android is actually one of the primary tools Google uses to promote Chrome (and you admitted you don't know much about iOS) you want to distance yourself from mobile development altogether.
> Other examples include whatever iOS does (which I don't know), containers (docker and the likes), VMs, and everything in-between (like what snap or flatpak use).
We're not discussing theoretical means with which you could sandbox an application, we're talking about how apps are actually used in reality. If you need to fire up a virtual machine every time you use your favorite desktop apps, then you're only proving my point that they're not inherently very secure. Not to mention, the average user probably has no idea what Docker or a virtual machine even is. Like I said in my original response, lots of things are possible in theory, but in practice web browsers are much better at sandboxing apps than desktop operating systems (and even better than mobile operating systems). And by the way, you can run a browser inside a VM too, so if anything the technologies you're advocating for bolster the security of web apps rather than compete with them.
> If anything, modern browsers are so complex (and getting worse with time) that the attack surface is big
Ironically, a lot of that complexity arises from the web's insistence on security. V8 is complex because it has so many safeguards in place to sandbox JavaScript, and that sandboxing is taken very seriously. There's a reward anywhere from 10,000 to 150,000 USD if you can escape the sandbox [2][3]. Browsers are inherently more secure than desktop apps because they limit access to the underlying platform. Someone developing malware as a web app has to first escape the browser sandbox, just to gain the privileges that a desktop app has natively. If it helps, you can think of every desktop app as a webapp which has already escaped the browser.
> Moreover, Web UIs bring their own class of issues that don't really apply to native apps.
No, web developers have just spent so much time thinking about security, that native app developers haven't even realized these security issues are relevant yet. It took years for Apple and Google to come to the brilliant conclusion that they should notify users when an app is reading from the clipboard, something which at the time was considered just a browser "class of issue". Maybe in 2034 they'll figure this out for desktop apps.
> But CORS is really a browser thing, I don't think it really makes sense to compare it to anything outside the "webview world".
It makes sense to compare it to things outside of the browser because it protects users and servers. You seem to want to disqualify any point I make that you can't disprove. If you don't think web technology is comparable to anything outside the browser, then what are we even arguing about? This whole discussion has been about comparing the security of web apps to non-web apps.
> If security is your concern (and you seem to insist that it is), then webapps are really not better than the alternatives. Actually, the Apple Store and the Play Store (to give an example in the mobile world) allow Apple and Google to somehow monitor the apps that users install, which is most certainly more secure than a model where anyone can load any webapp from any website.
Security is not some new thing I'm insisting on, it's been my whole point from the very beginning. You're just finally deciding to engage with me about it, instead of derailing the conversation constantly. Apple and Google have to monitor which apps make it to their app stores, BECAUSE apps are so much more prone to security problems. You once again have it completely backwards. No one has to gatekeep websites because browsers are so much better at sandboxing applications. And allow me to remind you that you admitted you have no idea how iOS sandboxing works, so you can't really be confident about this stance even if it did make sense.
And now you're arguing in favor of the app store duopoly which contradicts your point about software diversity. You can't have it both ways. You're trying to hold on to two contradictory points at the same time: you don't like the supposed lack of browser diversity (which is why you seem to detest Chromium), but you like the supposed security guarantees of the mobile app store duopoly, which is even less diverse.
> You can't have it both ways. You're trying to hold on to two contradictory points at the same time: you don't like the supposed lack of browser diversity (which is why you seem to detest Chromium), but you like the supposed security guarantees of the mobile app store duopoly, which is even less diverse.
Ok I get it.
Let me rephrase it just to make it clear: It is true that I don't like the lack of diversity (that would come from everything being webtech on top of Chromium), and it is also true that I like the security that comes from a managed app store. I do! I can have it both ways! Isn't that marvelous?
If you can't understand how this is possible, I think we can stop here. We won't get anywhere if you can't understand what I write.
You've completely abandoned any attempt to argue the point about the security of web apps vs non-web apps, which was the original point of this discussion, so now let me address all the tangents you like going on to deflect. You're an expert at cherry picking which arguments you'd like to reply to, to avoid tackling the main issue at hand.
> It is true that I don't like the lack of diversity (that would come from everything being webtech on top of Chromium), and it is also true that I like the security that comes from a managed app store.
You've said previously: "My point is that webapps move everything into the browser, going towards a world where something like ChromeOS is the only valid way to use a computer. I want to choose my OS". [1]
So you think the best way to increase OS diversity is to get developers to submit their apps to proprietary app stores that only run on their own respective operating systems, instead of using open web standards that work on every operating system? How does that make sense?
> I do! I can have it both ways! Isn't that marvelous?
No! You can't! Not if you value logical consistency.
> If you can't understand how this is possible, I think we can stop here. We won't get anywhere if you can't understand what I write.
I don't think you comprehend what you're writing, or rather, you're not willing to admit that what you're writing is incomprehensible. Saying "my argument makes sense, you just can't understand it" is just you being petulant. You want to "stop here" because you've argued yourself into an illogical corner.
> Saying "my argument makes sense, you just can't understand it" is just you being petulant.
I did not say that. I said that my preferences are consistent. Security and diversity are orthogonal concepts. I can say: "I want as much security as possible AND as much diversity as possible". It is not an argument, it is a preference.
You come and say: "Aha, I got you! You cannot want both security and diversity! You have to want one or the other, not both, because I say so! You just lost the debate, you dumb ass".
First of all, I've been saying from the very beginning that your stance implies both less security AND less diversity. But I knew you would grasp onto the security part like a lifeline, because you've run out of ways to derail the conversation, which is why I clarified in my previous comment. You ignored my clarification, and once again decided to argue with a straw man. I've never seen so many bad faith straw man arguments in my life. Forget the security aspect of it since you clearly can't debate that, and just focus on the diversity, and you're STILL wrong.
As you like to say when you're clarifying, "let's take a step back here". I'll just repeat my last comment, and hopefully you won't evade it like you always do:
You've said previously: "My point is that webapps move everything into the browser, going towards a world where something like ChromeOS is the only valid way to use a computer. I want to choose my OS". [1]
So you think the best way to increase OS diversity is to get developers to submit their apps to proprietary app stores that only run on their own respective operating systems, instead of using open web standards that work on every operating system? How does that make sense?
Do you get it yet? You're claiming you want OS diversity, but you're advocating for the solution that results in LESS OS diversity, that's why you're contradicting yourself, and that's why your position is logically inconsistent. You absolutely know this, which is why you're dodging every attempt to actually debate it. And I know you know this, because you purposely omitted the first sentence of my paragraph when you quoted it, which was [2]: "And now you're arguing in favor of the app store duopoly which contradicts your point about software diversity." That part didn't fit your narrative, which is why you omitted it. You're better at evasion, and rhetorical trickery than you are at actually discussing technical topics. If you had said instead: "I admit my position implies less OS diversity, but in this case I'm willing to make that trade off in exchange for better security guarantees", then we could move on to the security question (and you'd lose that debate too).
You can admit that one of those pesky web developers you're so fond of condescending to actually has a good point, it won't hurt.
> So you think the best way to increase OS diversity is to get developers to submit their apps to proprietary app stores that only run on their own respective operating systems.
No, I don't. I think that having different tools, more or less specialized for particular platforms, is better than using webtech everywhere. My reason being that I tend to hate webtech and all it represents to me: I don't like unmanaged language package managers like npm and how they allow devs to have no clue about their dependencies. I don't like Javascript. I don't like having to run a browser to access Discord, or alternatively to have a fake Desktop app that is essentially a hardcoded one-tab browser. I don't like to run complicated webapps in a tab that can freeze my whole browser. I don't like that if my browser crashes, all my webapps stop. I find that pushing for WebAssembly to run everything in the browser is completely overkill given that we already have tons of ways to run stuff on different OSes. I don't like how web people tend to not know anything not web (including native/non-native-but-not-web mobile apps, native/non-native-but-still-not-web Desktop apps, mobile OSes like iOS/Android/Linux-based-but-not-ubuntu, Desktop OSes like Windows/macOS/Linux/-BSD, embedded OSes like OpenWRT/-BSD) but still claim that webtech is better.
I like C when it makes sense, I find merit to C++ in many situations, I think Rust is interesting (except for the language package management, which seems to come straight out of the webtech hell). I like Java/the JVM and its evolution in recent years (no, it's not just an interpreter and web applets since the beginning of the century, but too many web people missed the memo), I find that Android has done a lot of interesting stuff with JIT and AOT, and I think that GraalVM is really promising. I love Scala and Kotlin, and the new Jetpack Compose way of doing UIs (coming to Desktop apparently). I wish I could spend more time on Swift and discover SwiftUI, and I had fun learning Flutter and Dart (though it still has the fundamental issues of cross-platform frameworks IMO). I don't know anything about .NET, but it doesn't seem bad. I like making custom Linux with fun tools (buildroot, Yocto, pmbootstrap) and learning how relatively mainstream distributions work. I like running stuff on -BSD (not in a browser, actually on the system). I like how Linux distributions approach their package management.
I am a big fan of open protocols, which mean that I can run my TUI IRC client (written in C) on my OpenBSD, my favorite email client (written in Go) on my Alpine Linux, and a whole bunch of stuff like git/gpg/ssh/podman/pass in CLI. I can even enjoy tools written in niche languages like Hare!
Those things I like, TO ME, represent diversity, and allow me to choose the tools that are more ergonomic for me, and even to contribute to them. Webtech, TO ME, represents those shitty Slack/Discord/Teams/NameYourCloud proprietary apps (and those are the good ones), written by people who want a one-size-fits-all solution so that they can be more productive by knowing ONE tech and making ONE mediocre app that will run badly on all those systems they never cared to study, governed by rules like "no need to optimize for memory, memory is cheap ahahaha!!!1!". All that forcing me to run full-blown apps (and not websites anymore) in a damn browser, in a world where Safari is Apple's way of refusing webtech for as long as they can, Firefox is a joke (which I use, don't get me wrong) and everything else non-Chrome is about customizing Chromium and pretending that they own their codebase.
PWAs are a promise to move that shitty world out of the browser and onto mobile devices (because ElectronJS already succeeded in moving that shitty world out of the browser and onto the Desktop... by duplicating a browser I did not choose, behind my back). All of that is transforming my Desktop OS and my mobile OS into basically a big browser that I hate (Chromium) running bad apps written with webtech that I hate.
Native Android and iOS apps are not perfect of course. But they are not webtech. And at this point I'm holding to anything that is not damn webtech (or worse: "AI" bullshit).
Go on, tell me why I should not feel the way I feel or, even better, prove it to me, with cross-references to whatever you find (I still won't click on your links, though, I really don't give a shit).
> then we could move on to the security question (and you'd lose that debate too).
I am not here to win (is there a prize for the winner?). I would genuinely be very happy if you taught me something (just a small thing) about why browsers are fundamentally better in terms of security than any other kind of sandbox I can imagine. But something constructive, like why it is that whatever is used to sandbox processes in a browser cannot be used to sandbox processes outside the browser. Or why granular access control works in the browser and fundamentally cannot be used outside of it.
But if it is to tell me that browsers are better because smart people spend a lot of time working on V8, or that web people invented access control last year, please don't waste your time.
> I don't like how web people tend to not know anything not web
This is the reason why your responses have been so arrogant. This is why you assumed I lacked knowledge about sandboxing before we'd even had a chance to discuss the topic in any sort of depth. You have this preconceived notion that all web developers are myopic and can't see anything outside of the web, and you've projected this stereotype onto me as if you're omniscient. If you truly do enjoy engaging in good faith arguments, and learning from other commenters, then you wouldn't start with the pompous assumption that the person you're talking to is ignorant.
> I don't like unmanaged language package managers like npm and how they allow devs to have no clue about their dependencies. I don't like Javascript.
Finally, you just came out and said it. You have a deep-seated, visceral hatred of JavaScript and anything even tangentially related to it. This is why you've been trying to bait me into talking about Electron, to the point of literally fabricating statements (at one point you claimed I was talking about VSCode). This is your pet issue, and you're clamoring for a chance to talk about it.
I get it, you don't like JS. It's a popular opinion amongst snobbish developers who like to promote this culture of contempt that pervades the software development world [1].
The problem is...we're not talking about the pros and cons of JavaScript as a language, or npm as a package manager. I have feelings about that as well (which I may or may not share), but my primary conjecture has always been that software is safer when run in the browser (especially on desktop operating systems). That's why I originally responded to your comment about Figma and Photoshop, and provided my own anecdote about my experiences using Adobe Photoshop on my desktop computer.
> Those things I like, TO ME, represent diversity, and allow me to choose the tools that are more ergonomic for me, and even to contribute to them.
The preceding paragraphs read like a CV listing every technology you've ever interacted with, and many of them are very interesting, but all of that is completely beside the point. I'm going to quote you again here, you said: "My point is that webapps move everything into the browser, going towards a world where something like ChromeOS is the only valid way to use a computer. I want to choose my OS".
We're not talking about the diversity of tools used to build applications, we're talking about the diversity of operating systems used to run graphical user interface apps. You absolutely refuse to stay on topic. Submitting apps to proprietary app stores that only run on their respective operating systems is not the best way to promote operating system diversity. If I build an app for the browser it'll run on every operating system (since they all ship with a web browser), that's just an objective fact.
> is there a prize for the winner
You should be a comedian. I'm here to talk about technology.
> I would genuinely be very happy if you taught me something (just a small thing) about why browsers are fundamentally better in terms of security than any other kind of sandbox I can imagine.
We're not talking about what you can fundamentally imagine, we're talking about how software is used in reality.
> why it is that whatever is used to sandbox processes in a browser cannot be used to sandbox processes outside the browser. Or why granular access control works in the browser and fundamentally cannot be used outside of it.
I hate to keep repeating myself but, we're not discussing theoretical means with which you could sandbox an application, we're talking about how apps are actually used in practice. You seem to want to discuss how desktop apps could theoretically be just as safe as web apps, but I'm more interested in reality than theory. I've given you several examples of security features which are present in the browser, and have no proper analog built in to desktop operating systems.
Here's a non-exhaustive list of things that make webapps more secure than desktop apps (many of these points have already been mentioned, but you keep ignoring them):
- Webapps can't read from the clipboard without user confirmation.
- Webapps can't make themselves truly persistent the way a desktop app can.
- Webapps can't record your keystrokes when their tab isn't active, whereas keyloggers are one of the most pervasive forms of desktop malware. On a Mac, for instance, I normally have to use ReiKey to mitigate this threat.
- Webapps can't forge the origin and user-agent HTTP headers to impersonate legitimate clients.
- Webapps can't read the response of an HTTP request to a third party origin unless the site allows it via a CORS header.
- Webapps can't read a single file from your filesystem unless you explicitly allow it.
- Webapps can't see which SSIDs your computer is connected to in order to pinpoint your location by matching them against known wifi networks.
Could some of these protections be implemented on the desktop in the future? Sure, and if they do I'd be happy to revisit this discussion in a few years. But my arguments are firmly rooted in reality, not speculation about future enhancements. And please don't bring up onerous security measures like virtual machines. First because that only proves that desktop apps are insecure by default, second because most users are likely unaware that such measures even exist, and third because those measures can be applied to a browser as well, so they only augment the security of webapps if anything.
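To make the clipboard and filesystem points concrete, here's a rough sketch of what a page has to go through (assuming a current Chromium-based browser; the file picker API isn't available everywhere):

    // Both calls are gated behind explicit user interaction and/or a prompt.
    async function demoBrowserGating(): Promise<void> {
      // Reading the clipboard triggers a permission prompt, and generally has
      // to happen inside a user gesture such as a click handler.
      const text = await navigator.clipboard.readText();
      console.log("clipboard:", text);

      // A page can't open an arbitrary path; the user has to pick the file
      // themselves through the native picker (File System Access API,
      // not yet in the standard TypeScript DOM typings).
      const [handle] = await (window as any).showOpenFilePicker();
      const file = await handle.getFile();
      console.log("picked:", file.name, file.size);
    }

A desktop app gets the clipboard and your whole home directory for free, no questions asked.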
Well… if you have ever supported a desktop app you know how difficult “version dispersion”, users that never update their OS, users that always update their OS, different hardware, other hostile software, etc. can be. If you know, you know.
Sure, I'm not saying it's easier. It would be completely stupid to go down the webapps road if the desktop apps one was both better and easier.
I kind of find it ironic, though: why not write one desktop app that only supports the latest version of Windows, and tell your users to use that? If you're big enough, surely you can force them to use the OS you want, right?
I am convinced that most people who love webapps kind of hate the idea of being forced to use the latest version of Windows. But somehow they find it okay to force everyone to use Chromium? What's the difference?
For stuff like Figma and Photoshop, I can't help but suspect that the creators would be better off writing their program in C++ with the GUI toolkit of their choice, and compiling it for the web with Emscripten.
I tend to dislike this approach for the simple reason that it's an extra "compatibility layer" where you give up control. If you're developing for the web you may every now and then want to do things a specific way or use a specific feature and be unable to do so because the transpiler doesn't support it or is programmed not to.
Why? If there's one thing JavaScript and browser tech are good at, it's making GUI dev easier. Just look at how even Qt is basically pivoting towards QML, which to my naive eyes looks very similar to how GUI/layouts/styling is done with HTML5/JS. Why would you purposefully use something worse just to avoid browser-related tech? I would agree if this was about raw number crunching, where compiling to wasm makes sense and where an HTML5 GUI can be used as a frontend, but the GUI itself has no reason to be built with C++.
It's reactive/declarative UI programming, which Android does with Jetpack Compose, and iOS and macOS do with SwiftUI. The other way is imperative UI, like the web used to do with jQuery.
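A toy sketch of the difference, not tied to any particular framework:

    // Imperative: mutate the DOM step by step in response to events.
    const btn = document.createElement("button");
    let clicks = 0;
    btn.textContent = `Clicked ${clicks} times`;
    btn.addEventListener("click", () => {
      clicks += 1;
      btn.textContent = `Clicked ${clicks} times`; // update by hand
    });
    document.body.appendChild(btn);

    // Declarative: describe what the UI looks like for a given state and
    // re-render; a real framework's job is to reconcile the difference.
    const root = document.createElement("div");
    document.body.appendChild(root);

    function render(count: number): void {
      root.innerHTML = `<button>Clicked ${count} times</button>`;
      root.querySelector("button")!.addEventListener("click", () => render(count + 1));
    }
    render(0);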
Once it gets through its painfully long download and bootstrap, it works pretty nicely. This is a big, complicated legacy app, but I'm sure that if reasonable file sizes and graceful loading were an actual goal, you could get some pretty good results. Sure, it's not going to be as easy to hire for right now, but I think for complicated programs that general kind of workflow is likely to be better than the big pile of JS scripts.
Google seems to think the same, if Flutter is any indication.
It's funny that you, and probably a lot of HN folk, consider MPAs simpler than SPAs. It's the opposite in my experience. The name itself is actually telling you that it has more complexity (multi-page vs single-page).
In practice, you can make both as complicated as you want, but SPA seems like a simpler starting point.
The earliest web apps I worked on were multi-page apps, with pages generated by Perl CGIs, later PHP. There was almost nothing going on on the client side except form submissions and a bit of JS-based form validation. I can tell you with 100% certainty this was simpler to build than most anything I see today with React SPAs and REST APIs. Even a simple form submission can be a PITA with modern tools.
I think the argument is generally that most applications' "complex interactions" are artificial contrivances and are unnecessary. I certainly think so.
SPAs definitely have their place. However, when I see them used for content-oriented sites with minimal user interaction (a few forms, etc.), I wonder what guided the decision.
It can definitely seem this way if you only consider the front end. But a challenge that many SPAs run into is that the front end and back end need to share business logic, and this can be a very complex thing to model and maintain, with either duplicated effort (and the potential for drift) or complicated solutions to keep them in sync, particularly if your front-end and back-end technologies aren't identical.
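For example (a hypothetical validation rule, just a sketch): when both sides happen to be TypeScript you can share the check, otherwise you end up maintaining it twice:

    // shared/validation.ts - a made-up rule used by both sides
    export function isValidUsername(name: string): boolean {
      return /^[a-z0-9_]{3,20}$/.test(name);
    }

    // Front end: instant feedback before the form is submitted.
    function validateForm(form: { username: string }): string | null {
      return isValidUsername(form.username) ? null : "Invalid username";
    }

    // Back end: never trust the client, so the same rule runs again here.
    // If the server is written in another language, this has to be
    // re-implemented and kept in sync by hand - that's the drift risk.
    function handleSignup(body: { username: string }): { status: number } {
      return isValidUsername(body.username) ? { status: 200 } : { status: 400 };
    }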
Most MPAs, by contrast, treat the browser and front end as dumb clients: strictly responsible for putting stuff on a screen.
The complications are not coming from the M or S part of the acronym, it comes from the words “Page” and “App” being intertwined. Or in other words 18 years of trying to hammer the web browser (conceived for “pages”) into an app platform.
It's a spectrum of interactivity. If you are a C++/AI/Go dev who needs a static blog with a simple form, you'll believe server-rendered MPAs are the way to go. If your site has interactivity and dynamic status/notifications, you'll believe SPAs make sense. Unfortunately, as with everything nowadays, people assume the other side is an idiot and pick up pitchforks.
How is an SPA a simpler starting point? It requires more code and more abstractions in the client from the outset. One might argue that that complexity would just exist in the backend in an MPA, but that's not true: there is some additional backend complexity, but not nearly as much as is required to support the multitude of clients that exist for the baseline in an MPA.
Because in most web apps you still need client-side logic, like form validation and such, so familiarizing yourself with an SPA framework is simpler than learning to implement this on top of the MPA framework you'll probably end up using anyway.