Hacker News
Defaulting on Single Page Applications (zachleat.com)
122 points by 0xblinq on March 27, 2023 | 124 comments



Note that MPAs can also opt-in to prerendering, where the browser spins up a hidden background tab (sort of) that renders the next page, and then gets activated when the user clicks a link. This gives truly instant-feeling navigations. More detail at https://developer.chrome.com/blog/prerender-pages/ . For now page-to-page prerendering is Chrome only, but since it's a progressive enhancement that seems fine. (Safari does prerendering from the URL bar as well.)
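As a sketch, an MPA opts in via the Speculation Rules API described at that link; the `href_matches` pattern below is just an example, and the exact syntax is documented on the linked Chrome page:

```html
<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/articles/*" },
    "eagerness": "moderate"
  }]
}
</script>
```

With this in the page, matching same-site links are prerendered in the hidden background tab before the user clicks them.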

SPAs can get this sort of thing too, but it's less automatic and generally requires framework support for keeping track of chunks of not-yet-visible, non-user-interactable inert DOM and matching them up with link clicks. Whereas with MPAs, because you're using the standard page rendering and link paradigm, the browser can manage it for you.
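A rough sketch of the SPA side of this, using the standard `inert` attribute on a hidden container (element ids here are hypothetical):

```html
<!-- Pre-rendered next route: hidden and non-interactive until the
     matching link is clicked, at which point the SPA router reveals it -->
<div id="next-route" inert hidden>
  ...next page's DOM...
</div>
```

The framework has to track which link maps to which inert subtree and flip `inert`/`hidden` off on navigation, which is the bookkeeping the browser does for free in the MPA case.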

Source: engineer on Chrome working on this feature :)


> Note that MPAs can also opt-in to prerendering, where the browser spins up a hidden background tab (sort of) that renders the next page, and then gets activated when the user clicks a link. This gives truly instant-feeling navigations.

I recall Opera doing this sort of thing around two decades ago. It had a setting to aggressively cache all links, and navigating between those was instantaneous.

Is this what Chrome does?


As the sibling comment notes, this would unfortunately be too aggressive. In addition to the problems with non-idempotent GETs, doing full prerendering means downloading all subresources and running all JavaScript, so things like analytics or tracking pixels or "update the user's most recently visited document list" in a document-viewer app might get triggered as well. That is, while an HTTP GET might be idempotent on a well-designed page, very few people design their page so that an HTTP GET plus running all the page's on-load JavaScript is idempotent.

To solve this, for the modern implementation you can read about in the article, (1) prerenders are limited to same-site pages, to avoid cross-site identity joining; (2) prerendering is opt-in on the referrer side (and, for cross-origin same-site cases, on the destination side as well).

This unfortunately means only sites that are willing to put in the extra work, get the speed benefit. In particular, right now the effort of auditing all your third party libraries is probably the biggest cost. We're looking into what kind of providers we can work with to reduce this, by evangelizing them into becoming prerendering-aware (e.g., not logging analytics while being prerendered).


I remember reading once (but now I can't find the details, this may be apocryphal): a few browsers did try this for a while, but it turned out to be a disaster for certain sites. If, for example, you have a page loaded with a list of items, each of which has a delete link next to it (think phpMyAdmin): what happens if the browser automatically sends a GET request to every link it sees in order to prefetch the content? The server just sees this as a user clicking the delete link and deletes the item.

The solution to this is obviously that servers should obey the semantics of HTTP methods better: GET is meant to be safe and idempotent, so it should be impossible to delete data just by clicking a link. State-changing actions should at least require submitting a form via a POST request.
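Concretely, the delete action above could be expressed as a form so that link prefetchers can never trigger it (the action URL is hypothetical):

```html
<!-- State-changing action behind POST, not behind a GET link:
     prefetchers only follow links, so this is safe from them -->
<form method="post" action="/items/123/delete">
  <button type="submit">Delete</button>
</form>
```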

In practice, though, browsers need to deal with badly designed servers, which is why almost all of this prerendering, precaching, etc stuff requires an explicit opt-in from the website.


> SPAs can get this sort of thing too, but it's less automatic and generally requires framework support for keeping track of chunks of not-yet-visible, non-user-interactable inert DOM and matching them up with link clicks

Do you know of a React library for doing this? tanstack react-query can prefetch [0], but it just prefetches a REST API; it doesn't prerender into an invisible DOM node or anything like that (also, "prerendering" in React usually refers to server-side rendering of HTML, not to prerendering something in the browser, so this is a bit hard to search for)

[0] https://tanstack.com/query/latest/docs/react/guides/prefetch...


The SPA vs server side rendered page debate is so exhausted at this point. The tech is such that they’re nearly equivalent if you have a good team that understands the appropriate technology.

You can have an SPA with a proper router that respects page history, loads html fragments from a server, remembers positions on reload, etc.

You can also have a server-side rendered page sprinkled with JavaScript, using web sockets to create DOM-element-level transitions and interactivity.

The “MPA” described in the article has always been possible in a “SPA.” It’s more about the UX of your app than anything else


Your second paragraph essentially is creating a little mini browser, and you have to make sure to get every detail right or you’re back to the uncanny valley of SPA web pages. Performance can even be worse with SPA “page” navigation if multiple round trips are needed for the data on the next page.

SPAs should be used for web apps (think Slack or Asana); stick to server-rendered for web pages.


I think you've highlighted a big problem with SPAs in that they can encourage multiple API calls per user action. IMO, you should probably have a single endpoint per thing a user might do that might combine several conceptual APIs. i.e., if you have an API to add something to a list, have that API also return the current list.
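A sketch of that idea in plain JavaScript (the names here are hypothetical): the mutation handler returns the updated list in the same response, so the client never needs a follow-up fetch to re-render.

```javascript
// Hypothetical server-side handler: add an item to a list and
// return both the added item and the full updated list, so the
// client can re-render from one response (no second round trip).
function addItemHandler(list, item) {
  const updated = list.concat([item]);
  return { added: item, list: updated };
}
```

The same shape works behind any HTTP framework; the point is that the endpoint is designed around "what the user just did," not around one resource per request.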


I see this in most single page apps.

They have a waterfall of multiple API requests from the browser to display a single page. That means the front end has a bunch of logic to aggregate and piece together requests. This in itself isn't a big deal; if you move to doing it on the backend you still have the same logic and need to make the same sub-requests.

Where it does become problematic is that it leads to fat API endpoints sending far more data to the client than is needed. GraphQL lets you choose which fields you query, but odds are there's still data available to ask for that shouldn't be sent to a browser.

If your requests/aggregations are done serverside it gives you a better chance of filtering out data that should never be accessible in the browser. You can also make full use of Cache-Control headers and backend caching.

Creating individual endpoints for web apps went out of fashion as it involves more effort and people don’t like effort to do things well. I see it re-invented recently as backend-for-frontend.


> Creating individual endpoints for web apps went out of fashion as it involves more effort and people don’t like effort to do things well. I see it re-invented recently as backend-for-frontend.

Not true; individual endpoints require more work for client-side optimizations, but the CRUD query mapping effort is not reduced with GraphQL. If anything, GraphQL endpoints require more effort on the backend to build because of having to wire up the resolvers, unless you are using something like Hasura.


Multiple queries from the frontend are still not great, because the latency is much higher than doing the aggregation at the server, all else being equal.


> SPA’s should be used web apps (think slack or Asana), and stick to sever rendered for web pages

Everyone seems to agree on this, but if that's the case then I'm very confused why this is still a debate. The vast majority of developers are making apps, not "pages". Pages are useful for a handful of well understood applications that already use CMS systems (blogs, ecommerce, forums etc). So why are we all pushing for MPAs instead of better apps?


> and you have to make sure to get every detail right or you’re back to the uncanny valley of SPA web pages

You can’t get every detail right, because the browser doesn’t expose some of the pieces that would be required to do so—even apart from details that vary between browsers. The most obvious example of unexposed functionality is the loading indicator, but there are more around things like slow or failing networks.


Is this an SPA? Or is it MPA?

https://intercoin.app

When you refresh, the server renders some of it.


No, it's not exhausted - people and businesses pushing heavyweight frameworks stand to lose out, and trying to silence the debate would help them.

React seems like a bubble to me, created by fancy code that seems good at first glance but doesn't make a huge difference. What's it supposed to be amazing at, simple stuff or complex stuff? If simple stuff then why are people downloading 200K for Hello, World? If for complex stuff why isn't something like VSCode, Monaco, CodeMirror written with React? Those have state galore, and React's one-way data binding is no revelation for them. In fact inside React things like the ironically named react-hook-form get around React's one-way data binding by letting the form inputs be uncontrolled* (ironically named because react hooks makes it sound like it's going with the grain of react when it's thankfully going against it). Maybe ChatGPT will pop the bubble by churning out vanilla js that beats it. :)

* https://react-hook-form.com/advanced-usage/#Controlledmixedw... React Hook Form embraces uncontrolled components but is also compatible with controlled components.


React 16 is about 5 KB uncompressed, it’s a tiny agnostic library that does very little and can be used to render to anything from a DOM to a native GUI to a dot matrix LCD screen (https://github.com/doodlewind/react-ssd1306/blob/master/docs...). Together with React-DOM it is about 100 KB, 30 KB gzipped (DOM library is bulky in part because it needs to support all of the default HTML and SVG elements). It’s not supposed to be amazing at anything except what it does, build complex or simple stuff it is up to you.


But of course nobody uses just those and instead bring megabytes of dependencies.


They don’t have to. This indiscriminate piling on of dependencies is a problem, but React isn’t at fault as if it was somehow encouraging it.


> Together with React-DOM it is about 100 KB

That's a surprisingly high amount. I had to check again, and yep, it's true...

Not that you would have to change how you code if you prefer React.

Pretty clear why Fresh defaults to Preact. https://fresh.deno.dev/docs/introduction


It may interest you to know that React was first used in production in 2011 and the library was subsequently open-sourced in 2013.

As far as programming tools go I don't think we can call it a fad anymore.


I disagree. Every time something is hyped that extends the time it can be called a fad*. React gets hyped up freshly again with every announcement from Vercel**.

* well maybe a collection of fads rather than a fad in itself

** I say this lovingly, it's similar to the magic of Steve Jobs and Apple. That said Vercel wouldn't interest me that much at this point if it was React-only. But the DX works for many different sorts of projects.


For there to be a bubble, it has to be pierced. We haven't seen that with React. We did see that with Angular. I think the reason the bubble burst with Angular is that it held many opinions about things, and the more opinions you hold, the fewer people in aggregate are going to agree with your total outlook. It's easy to convince people of 1 thing, and less easy to convince people of N things. React being less opinionated and not a batteries-included option means it won't be going away for a while.


> React being less opinionated and not a batteries-included option means it won't be going away for awhile.

That's what they love to say, but it has never seemed that way to me. One way data flow seems to be a huge opinion.

Also dangerouslySetInnerHTML, htmlFor, className, etc, etc.

That last one was handled by the community, partly coming from Facebook, with stuff like GraphQL, but now they've gone more with batteries included route even on their website by suggesting to use a big framework: https://react.dev/learn/start-a-new-react-project

Also their opinions don't align with those of a lot of the community. A lot of glitches stem from controlled components.

"In most cases, we recommend using controlled components to implement forms" https://legacy.reactjs.org/docs/uncontrolled-components.html ( which they might have finally changed their position on https://react.dev/reference/react-dom/components/input#contr... )

"React Hook Form embraces uncontrolled components but is also compatible with controlled components." https://react-hook-form.com/advanced-usage/#Controlledmixedw...


A more fruitful way to think about the degree of opinionatedness would be to compare to alternatives. If "className" versus "class" is too much 'opinion', maybe vanilla JS is best for your eclectic taste. Also, you'd absolutely find something like Angular maddening in its degree of opinionatedness.


It isn't just the framework that's opinionated, it's the people behind it. An example is their website where they don't mention create-react-app at all here: https://react.dev/learn/start-a-new-react-project They could have added it so as not to confuse someone who had heard about create-react-app and understandably expected to see it on a page called "start a new react project" but they prefer to blot it from memory...


This is on purpose. `create-react-app` is now considered outdated, "older workflow". See: https://github.com/reactjs/react.dev/pull/5487#issuecomment-...


Is the discussion about frameworks or just React? They seem to be treated as interchangeable but React's foundations are not shared among other frameworks.


It's really about hype. I think React is good but overrated.

Edit: I thought about it some more and the problem is that in order to see adoption, frameworks need to convince devs that they need something. With Vue what you have is reactive() and ref(). So they're convincing a lot of devs that they need MobX. Where in reality most devs don't need React's useState one-way bindings and they don't need MobX-like reactive objects either, nor do you need Signals from Solid. That said, if you prefer one of these, go for it!

Ergonomics of binding events and setting attributes/properties/styles is useful and I like how Lit, snabbdom, and Svelte provide ergonomics without one of these state management paradigms.


React is perfectly hyped, the introduction and complete absorption of flux architecture proves this. It's changed how we even write vanilla javascript. Most of the lessons we're talking about right now about questioning whether we even need state management paradigms to such a degree would not have happened without React.

So if a problem exists about this scenario, it isn't about React, but how teams decide on tools. And unfortunately, most teams that aren't building their own solutions, go with the tool that has the largest "safest" community, not the one that accurately solves their issue.


That's not how I remember it. People were thinking a great deal about state management with Backbone, Angular.js, and Knockout.js. It intensified with React. I wonder what would have happened with Angular.js if not for React...


There are smaller SPA libraries like Preact or Mithril. One could similarly argue: why should you have to use Django to show a user hello world?

It really doesn’t matter. Master your tool and most of the supposed downsides can be avoided.


I have a lot of respect for mithril, they created a tiny vdom that works really well. Though you don't always need one.

https://svelte.dev/blog/virtual-dom-is-pure-overhead

I have played thousands of bullet chess games on Lichess, and it is and was very snappy; it used to be based on Mithril, so it's very battle-tested in a literal sense.

https://bestofjs.org/projects?tags=vdom

FWIW I disagree that something like virtual DOM can be pure overhead. It is useful to have virtual just about anything. Virtual filesystems for instance.


> No, it's not exhausted - people and businesses pushing heavyweight frameworks stand to lose out, and trying to silence the debate would help them.

What do you think about frameworks like Leptos that take the best of both worlds?

> progressively-enhanced single-page apps that are rendered on the server and then hydrated on the client, enhancing your <a> and <form> navigations and mutations seamlessly when WASM is available.

https://docs.rs/leptos/latest/leptos/


I think they're pretty neat, and that Svelte is one.

"I used this technique in my own emoji-picker-element. If you’re already using Svelte in your project, then you can import 'emoji-picker-element/svelte' and get a version that doesn’t bundle its own framework, ensuring de-duplication. This saves a paltry 1.4 kB out of 13.9 kB total (compressed), but hey, it’s there. (Potentially I could make this the default behavior, but I like the bundled version for the benefit of folks who use <script> tags instead of bundlers. Maybe something like Skypack could make this simpler in the future.)"

https://nolanlawson.com/2021/08/01/why-its-okay-for-web-comp...

Thing is I actually like coding in Vanilla JS, at least some of the time.


It’s good when you want interactivity that’s technically straightforward but in high volume.

It doesn’t work for code editors because they need very close interaction with the DOM, whereas React provides an abstraction layer that covers the common cases.


It works but clearly it's not needed, and to me I don't see a whole lot of innovation from a computer science standpoint because the most complex js projects don't use it.


Well something is needed. The alternative is:

1. Building apps with manual mutations (which is what we did before in the jQuery era). This involves a lot of boilerplate code, and it is hard to ensure that the UI state stays in sync with the app state properly.

2. Re-render the world on every state change. This works and allows you to write simple code that is React-like in that it is a pure transformation of app state into UI state, but it's slow and quickly runs into performance issues even in small apps.

1 still makes sense in the most performance sensitive scenarios. And 2 can work for the very simplest apps. But React and similar frameworks are great for everything in between.
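Option 2 can be sketched as a pure render function (the state shape here is made up for illustration):

```javascript
// Re-render the world: the UI is a pure function of app state.
// Every state change throws away the old markup and rebuilds it all.
function render(state) {
  const items = state.todos.map(t => '<li>' + t + '</li>').join('');
  return '<ul>' + items + '</ul>';
}

// In a browser you'd do something like:
//   document.getElementById('app').innerHTML = render(state);
// which is simple but wasteful -- the cost React's diffing avoids.
```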

> I don't see a whole lot of innovation from a computer science standpoint because the most complex js projects don't use it.

That seems silly. You could make the same argument for something like SQL. The most complex data manipulations won't use it. But it's still useful for the 90% that aren't that complex.


> (...) they’re nearly equivalent if you have a good team that (...)

That's a lot of hand-waving weasel words to blame those experiencing problems for the naturally occurring problems.

If it was the same thing, we would not be having discussions on their pros and cons. This is not a team-specific issue.


How do you make sure your SPAs work well with screen readers?


More or less the same as a regular page - use aria labels, update names and titles of elements, make sure the focus is set correctly, etc.


And try to be frugal on the tags you make.

Component design is the enemy here. Just take a peek at the nav elements in many designs and see how deeply nested the divs go.


Screen readers don't read the hierarchy of elements back to you. They find the text that renders inside the elements and read it out loud to you. And for navigating between elements, the level of nesting is much less relevant than having the correct aria-role assigned to each element that contains an interactive component.


Fair. My experience has been that these labels are typically afterthoughts and at a simplistic level don't offer much over the markup. :(


Are you referring to styled-components and the like? What is exactly the problem with that?*

* except media queries, those seem hard


Many prebuilt component libraries design for maintenance and all possible uses over being frugal. The actual markup can get really ugly with, e.g., MUI.

When you put in the effort to build the components yourself, you aren't trying to be everything for everyone, so you get to skip a lot of the cruft.


Exactly this. It's a large reason not to try to build the most general components, either. Build what you need, where you can.

That all said, I also have to ack that the libraries are going to be hard to beat for speed of delivery. :(


To be honest, I think people tend to overestimate just how long it takes to make a nice looking button or write some flexbox styling. Even standard form components are really not too time consuming to style.

What pushes me towards using third party code is stuff like autocomplete search with drop-down selects, mostly because I don't want to mess up on the accessibility front, either keyboard navigation or screen readers, and there's at least a few that have that part figured out already.


The idea of components is fine. I should have been clear, there. The problem typically comes from the authoring tools. To have the hooks necessary to put the decoration that we want, in html, they typically add a ton of div elements. When, realistically, you could almost certainly get what you want with very minimal markup.

And being fair, I'm sure this has gotten a bit better in recent years. But the Rube Goldberg efforts people would put in to get the "flow" of the browser to automatically place things in locations that were easily calculated is frustrating.


Why would they be different? The DOM structure can be exactly the same between an SPA and MPA.


Because in an MPA a navigation to a new state is obvious: the page reloads, the screen reader can inform the user and start again from the top.

In an SPA clicking on something could replace the content in some other area of the screen - but if you don't add the right additional code the screen reader user has no way of understanding what just happened.


Exhausted but always new to someone.

It's interesting to see technologies like livewire come out that take things back to basics.


> You can have an SPA with a proper router that respects page history, loads html fragments from a server, remembers positions on reload, etc.

In theory.


And in practice too?


I'm really enjoying using htmx nowadays to sprinkle interactivity on server-rendered pages with almost no JavaScript. Its extra attributes make it easy to do common interactions on pages. Highly recommended 12 KB gzipped zero-dependency JavaScript. Example:

    <button
      title="Remove 'ABC' from cart"
      hx-delete="/cart/123/item/abc"
      hx-target="#item-abc">
      Remove
    </button>
Clicking this button will trigger a `DELETE /cart/123/item/abc` request and swap the response into the element selected by the `hx-target` selector. The response would be an HTML fragment. So instead of an API serving JSON you would have an API serving...HTML. It's a neat fit with any server rendering technology.
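For illustration, the fragment the server returns might look something like this (the markup is hypothetical); by default htmx swaps it into the inner HTML of the `#item-abc` target:

```html
<!-- Hypothetical response body for DELETE /cart/123/item/abc -->
<span class="removed-note">'ABC' removed from cart</span>
```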


I take it there isn't a problem with the HTML response also using those "hx-" attributes?


Exactly, this enables setting up a series of interactions in the page, i.e. an interactive workflow.


I think it's worth calling out accessibility directly, in particular screen reader compatibility.

I have yet to find a great guide to making SPAs work well with screen readers that goes beyond "read the ARIA spec" - but the ARIA spec isn't actually that useful for understanding the nuts and bolts of how you should build things so that e.g. screen readers know when the SPA has navigated to a new page, or loaded fresh content in a smaller page region.

My understanding is that MPAs are, by default, massively more accessible than SPAs. But my experience is that SPA authors rarely seem to indicate that they care.


You need to have a first-class understanding of accessibility to not have a11y problems. If your MPA framework allows you to define an "onClick" handler with a div tag, you're back in React's set of accessibility problems (versus using a button element, etc).


What accessibility problems do you think React has? It doesn't make you put onClicks on divs.


Poorly phrased. React has no inherent a11y issue. People misapply it too frequently due to a lack of fundamental knowledge. MPAs cater toward building less interactive content, and more static pages, so they are less likely to have a11y bugs.

I wouldn't be surprised one bit if Angular apps failed a11y audits at similar rates to React apps. But it isn't a problem with SPA's as an architectural concept, the problem is uneducated engineers.


Don't get me started on things React sucks at.


Using a framework that is accessibility friendly is probably the best first step. Vuetify has gotten good feedback, with little to no additional steps for my implementations. If I'm remembering correctly, Tailwind's Headless UI, Chakra, and a few others are in the same boat.

Searching for "a11y [framework]" on GitHub gives good results and some "awesome" pages that link to resources on the subject.


His way to make a website available to any browser is to use an API that nothing but the latest Chrome supports?


Google's propaganda runs deep.

Chrome is like the new IE, but far worse for user control and privacy.


I see comments like this a lot and it always makes me wonder if we have collectively forgotten just how damaging IE was to the web, or if our default vilification of a browser we don't like is to just compare it with IE.


"damaging"? That's Google-propaganda for sure. Chrome is far more "damaging" than IE ever was.


I never said Chrome was not damaging. In fact, its damage is likely to be more widespread. The underlying point here was that it feels like we've picked an apples-to-oranges comparison when we say just how bad Chrome is compared to IE. Both were popular for different reasons, during different points of the internet, and are damaging in different ways.


If you’re talking about the View Transitions API, it’s a fairly unimportant progressive enhancement that wouldn’t even be a good idea for most sites. It doesn’t matter if the browser doesn’t support it.


> You can’t JavaScript your way out of an excess-JavaScript problem. These large JavaScript bundles are costly to site performance.

I was gonna say:

"This is what some Hugo fans might say, but 11ty and Astro suggest otherwise."

Then I saw the author of the post is the author of 11ty.

Still, the middle ground is being pursued by Svelte and Fresh. I prefer something less frameworky though.


> large bundle sizes 228KB

That's a moderately sized image.

That's <1s of 1080p video.

And that's assuming you haven't heard of a thing called "gzip."

At some point, I think we're finding things to be unhappy about.


The issue with large JavaScript files is the overhead of the browser parsing, interpreting, and executing the JavaScript. The network transfer size has nothing to do with it.


WASM would be faster in terms of parsing, interpreting, and executing


WASM cannot interact with the DOM.


It can, through JS interoperability (which only needs tiny JS shims); everything else can be handled by WASM.


I only hear of "brotli" :)


Well even better :)


Just because video will be a much larger file size does not mean that text and forms have to be, too (unless you are writing an entire book, I suppose).


What confuses me the most about these articles is the idea that people jumped on these frameworks just to chase new shiny things. There is always resume-driven development going on, but did these frameworks really not improve anything about the web experience for users? My career started around the same time React and Angular started to take off. Why did we move away from MPAs if they were so much better?


Did they improve things for users?

From a performance perspective it's a bit of a wash. You get slower first page load and lots of spinners in exchange for faster navigation


OK, so how did they get so much market share? Cargo culting from Facebook? Or was it the appeal of pushing computation to the client, like the other commenter said?


React & friends are fast to build apps in and are quick to onboard new engineers on. Developer experience has been prioritized over user experience.


Because splitting backend and frontend lets you keep all the JS code and client resources on a CDN.


The shift to SPAs happened around the same time as mobile browsing was becoming the norm. Phones were alright, but data speeds were really slow.

So instead of having to download an entire HTML page and all the assets used every time you navigate, people built SPAs. Download everything up front, and then fetch data with small API requests. This way even though first load might take a while, everything is cached, and the client just needs to query the API for data.

The other side of it is that websites used to be pretty simple, and as browsers evolved past IE10, the kinds of things people build on the web also got more complex.


>So instead of having to download an entire HTML page and all the assets used every time you navigate, people built SPAs.

Don't browsers cache css, images, and JS and all that automatically for like 15 years now?


You can't cache the HTML though, and each page might use different assets (bundle splitting).


I think the view transitions API will be good, but it's currently only available for Chromium browsers so it is not really a competitor to SPAs for the time being. I'm also kind of opposed to really broad generalizations like this since what is most effective currently kind of depends on what you're building. A calculator built as an MPA would be kind of ridiculous.


I’ve liked the transitions demo from Chrome guys, but it has barely any support yet.


Nobody ever talks about the monolithic all-in-one framework upgrade? You really haven't come to terms with SPA until you've done the all-in-one framework upgrade, man. It's like running thru a hailstorm of razor blades


Devs who push for monolith all-in-one SPA frameworks are seldom ones who stick around long enough after shipping to experience what you're discussing. They're far more concerned with the new framework hotness vs. building a lasting and consistent user experience.

Just my anecdotal experience.


Remember the explosion of front-end tools no one uses anymore?

They usually follow the pattern of 3 seconds of hype followed by 10 years of digging out a mountain of technical debt from some asshole who has since moved on.


This article was the first time I learned about https://developer.chrome.com/docs/web-platform/view-transiti....

It looks really great, and I think Rails Hotwired / Turbo could really benefit from this API.

Of course, someone beat me to it: https://dev.to/nejremeslnici/how-to-use-view-transitions-in-...


Very cool!


> Search engines use site performance as a search ranking signal, which makes good performance crucial.

So this is probably wildly naive, but I find myself wondering why SEO is so important still in 2023. Like, how much traffic is coming from Google that converts to paying customers or clients? I suppose it's industry specific, but for my friends that have sold product on e-commerce sites, the best way to make money seems to have been find a way to post on reddit for a given niche community without getting banned, and Instagram ads. Maybe Facebook ads.

For a SAAS, I'm really curious how much converted traffic comes from Google. When I need a SAAS I usually first ask all my friends, then start searching "service_i_need reddit sysadmin" to see what other people are using and their experience. Maybe I'll read a listicle from an aggregator i trust.

Also, you can just pay Google to be at the top of whatever search results, so you can Optimize by just giving them money.


The exact data on how much search traffic converts varies wildly from business to business so it’s hard to give a ballpark figure. The big win from SEO traffic that I’ve seen is that CAC for these customers tends to be a lot lower than for customers coming in via ads.

Paid advertising and posting on social media have their place, but there are a LOT of people using Google, and getting to rank 1 for important keywords usually works out cheaper than paying for the ad space.


As long as you don't count the SEO cost in the acquisition of those customers, it is much cheaper.


If no one can find you, then all of your effort is meaningless and you will make $0. That's why SEO exists.

It's impossible for shoestring apps to give Google money. They would fail at being businesses if they funneled money they didn't have to Google.


Sure, but I'm asking, who's being found these days on Google? It seems to me people are being found on listicles, reddit, instagram, facebook, and various word of mouth channels.


> Inclusive and robust by default, Energy-efficiency, Privacy-focused

I imagine trying to use these reasons with executives and getting laughed out of the room


Single page applications do not have to be loaded all in advance:

  async function main_state_changed(aState) {
    main_state_display.innerHTML = await loadFileFromServer(aState);
  }
This way your app loads parts on an as-needed basis. As for tooling: I do use minification, but the rest is plain HTML, CSS, and JavaScript plus a couple of selected libs. No frameworks, and it works wonders for me.


Do you have more examples of this you can link? I'm curious about what's going on here.


You can load any file from the server into a string, using for example a fetch call. You can then assign the innerHTML property of any DOM element of your choosing to that string [0]. The string can contain any valid HTML and JavaScript. This makes your front end modular and scalable. It is dead simple and does not require anything but plain JavaScript.

It is amazing how modern web developers flock to that mess of frameworks and tooling to achieve what basic JS and no tooling do way more efficiently and with less time spent.

[0] - https://www.w3schools.com/jsref/prop_html_innerhtml.asp
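A minimal runnable sketch of that pattern (function and URL names are illustrative):

```javascript
// Hypothetical sketch of the pattern described above: fetch an HTML
// fragment from the server and inject it into a container element.
async function loadFragmentInto(container, url) {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Failed to load ${url}: ${response.status}`);
  }
  container.innerHTML = await response.text();
}
```

One caveat: `<script>` elements inserted via innerHTML are not executed by the browser, so fragments that carry behavior need extra handling (e.g. re-creating the script nodes).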


Interesting idea, though I'm wondering for example how you manage namespaces, or for example one component extending or using another one. Do your various modules just have lots of checks about certain valid states on the global namespace?


You can do some reading about state machines and their executors if you are curious about ways of managing states.

I do not keep to a single tool / strategy etc., and use whatever I believe is the best choice for a particular task. I write everything: firmware for MCUs, multimedia apps with hardware-accelerated graphics, enterprise backends and middleware, web front ends, device control and whatnot, so while some tech is universal the specifics can be highly different.


Oh, I'm aware of the theoretical concepts of state machines etc, I guess I was asking about more in-the-weeds levels, but it sounds like you really and truly are just approaching these problems from engineering fundamentals and coding from scratch. That's pretty cool. All you've done is just pique my curiosity further though, I would love to be able to work on such a wide breadth of fields, are you open to sharing how you get to work in such a variety of platforms? I'm at the point in my career where I'd like to aggressively move off all the web-only work I've done the last half decade or so.


>"I would love to be able to work on such a wide breadth of fields, are you open to sharing how you get to work in such a variety of platforms?"

I guess we have different education and general background. I am 62 yo fart. Was raised in former USSR in a satellite town that existed soleily to accommodate bunch of scientists working in various fields. When not in school I would spend my time visiting my parent's place of work and got to play with very cool "toys" and people there were nice enough and let me "work" on tiny projects of various nature. I then done my M.Sc. in physics / biophysics and continued to work as a scientist. So I was basically trained to solve various problems in creative manner. Was never trained specifically in software.

I had wonderful teachers and mentors and can't ever express enough gratitude to them. Many of them are dead by now anyways.

During the course of my work as a scientist I needed to design various mechanical gizmos and electronics (digital and analog), and to write some software, since nothing existed for my particular field. I just got myself a whole bunch of books, and read and tried until I succeeded.

Then "perestroika" started and I began to supplement my "official" income by writing software as an independent consultant. The computer industry back then was very young, and I had no trouble finding contract work to create various products. My first side software project was a PC-based visual designer (something akin to a music sequencer) to control the lighting system for a big theater.

I then immigrated to Canada and eventually ended up working basically as a chief software product designer and implementer, first as an employee at a software development company and then on my own.

As for the diversity of fields: when working on a particular product I imagine the software as a bunch of living components interacting with each other. I then nail down what those components and interactions are, and get down to implementing it. The particular computer language, deployment platform, and other pesky details are the last thing I worry about.

Over the years I've accumulated a large portfolio of products and present those as references when getting new contracts. I mostly deal with business owners, and those do not give a flying fuck about languages. They want actual problems solved.

>"coding from scratch"

I do extensively use various libraries, but very specific, problem-oriented ones, like graphics libraries. I avoid frameworks like the plague most of the time, as in my opinion they degrade creativity and keep one hostage.


Angular allows you to split your code into modules and then load them on-demand. For example, we have a module for the admin pages, which is only loaded when routing to one of them, which means that module is never loaded for non-admin users.
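Roughly, the route configuration for that looks like this (paths and module names here are illustrative, not our actual code):

```javascript
// Illustrative Angular route config: the dynamic import() puts the admin
// module into its own bundle, which the router only downloads the first
// time a user navigates to an /admin route.
const routes = [
  {
    path: 'admin',
    loadChildren: () =>
      import('./admin/admin.module').then((m) => m.AdminModule),
  },
];
```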


No Angular needed. A simple fetch does all that.


GP asked for an example, of which Angular is one


Déjà vu 2011. Sigh. This has happened before.

The failures need:

- Update the URL rather than depend on ephemeral DOM state like some damn Flash app

- Don't load the entire site all at once as one giant file

- Use server- and client-rendering wherever it makes the most sense

- Handle accessibility, or be in a heap of legal trouble globally

- Get fancy by swapping JS and HTML, but be prepared to deal with all the edge cases of things that don't know what magic is happening


An incredible example of what's suddenly possible with the View Transition API: https://twitter.com/charca/status/1637832314364497920 (Built with Astro)


Hype culture in developer tooling has gotten out of hand imho. A SPA with precaching is almost always going to be the most responsive experience.

Next and similar tools are decent if you NEED SSR and don't want to go the prerendering route - but there's some extra work involved.


> Single Page Application frameworks in the last few years have pivoted away from client-side rendering to server-rendering and we welcome this improvement. However, the large starting size of client JavaScript bundles customary to SPA persist: Remix (228 kB), Next.js (248 kB), Gatsby (210 kB), and Nuxt (191 kB).

Good point, I also think stimulusreflex.com deserves some highlight in this case too. This is where it shines!

https://v3-4-docs.docs.stimulusreflex.com/#faster-uis-smalle...


The benefits they list for classic multi-page applications are good. Even disregarding search and caching, I care enough about the other things listed there (working even when scripts are disabled, working with extensions and other web browsers, accessibility, back/forward/scrolling, energy efficiency, etc.) that classic multi-page applications come out ahead.


Single page isn't far enough; the deployment artifact needs to be a single HTML file.

I do it like this: https://github.com/nathants/aws-gocljs


Quit using these ginormous frameworks and libraries. Vanilla Javascript is all you need.


The organizational and maintainability benefits you get from using something like React far outweigh the actually quite minor file size bump. That is the least of your worries.

source: myself; someone who moved to working on a project entirely written in vanilla JS… which has just become its own bespoke framework.


You can get the same organizational benefits and components with vanilla HTML & JS, e.g. web components.
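For example, a minimal custom element might look like this (a sketch; the element name and behavior are made up, and registration is wrapped in a function so it can be deferred):

```javascript
// Hypothetical sketch of a tiny web component with no framework involved.
function registerGreetingElement() {
  class GreetingElement extends HTMLElement {
    connectedCallback() {
      // Render when the element is attached to the document.
      this.textContent = `Hello, ${this.getAttribute('name') ?? 'world'}!`;
    }
  }
  customElements.define('x-greeting', GreetingElement);
}
```

Used in markup as `<x-greeting name="HN"></x-greeting>` after calling `registerGreetingElement()`.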


Yes, but your bespoke solution will work differently to every other project and is unlikely to be well documented.


I agree, but note that we are talking about a benefit that accrues to the developer, not the user.


It also benefits users in that you can implement more features and fix more bugs in less time. Performance is a feature and what that means is that while you can't just completely ignore it, it's not always the highest priority.


> It also benefits users in that you can implement more features and fix more bugs in less time.

This is mostly a bullshit answer that is hardly ever the case in reality and is pretty much always used as an excuse to favor developers over users.


Can't think of any reason why this got downvoted, except that clearly the ReAnguVue mafia is omnipresent these days. Vanilla JS (and even that in moderation) FTW.


I mostly agree, but in most cases you should not need client-side document scripts at all. (In some cases, making optional features with scripts can be helpful. In much rarer cases, making them mandatory might be needed, but even then there should be ways to download the data or access the protocols even when JavaScript is disabled.)


I haven't seen a large project in pure vanilla js in a while, do you have any examples you like that I can check out?



