Moving from React to htmx (htmx.org)
575 points by mpweiher on Oct 15, 2022 | 315 comments



Love htmx and this talk is a brilliant rundown of where it works well.

But as always it’s about choosing the right tool for the job. Server rendered pages/fragments solve so many issues around security and time to develop a product; however, they only get you so far.

Ultimately I think the decision when choosing a stack comes down to how much state you need to manage in the browser. The vast majority of sites need very little client side state management; htmx and other tools such as Alpine.js are perfect for this. But eventually, as you reach a more “app like” experience with multiple layers of state control on the front end, you need to reach for a front end JS framework.

Now that doesn’t always mean going all in on a full SPA covering the whole of your product. It could just be a small fragment that requires that level of interaction.

Point is, don’t pick a tool because it’s “in vogue”, pick one because it lets you build the best possible product as efficiently as possible. For 80% of websites that could be htmx, and for the next 15% htmx probably works for 90% of their pages. It’s the 5% where htmx is not one of the right choices at all.


> as you reach a more “app like” experience with multiple layers of state control on the front end you need to reach for a front end JS framework

I think that if you fully embrace HTMX's model, you can go far further than anticipated without a JS framework. Do you really need to be managing state on the client? Is it really faster to communicate via JSON, protobuf, or whatever, rather than sending atomic data and returning just small bits of replacement HTML -- inserted so seamlessly that it's a better UI than many client-side components? Why have HTML elements react to changes in data or state, rather than just inserting new HTML elements already updated with the new state?
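
To make that concrete, here is a minimal sketch of the kind of exchange I mean (the endpoint and ids are made up for illustration): the button asks the server for a fragment, and the server answers with HTML that is already in its new state.

  <div id="cart-summary">3 items</div>

  <!-- hypothetical endpoint; assumes htmx is loaded on the page -->
  <button hx-post="/cart/add/42" hx-target="#cart-summary" hx-swap="outerHTML">
    Add to cart
  </button>

  <!-- the server's entire response is the replacement fragment, e.g.: -->
  <!-- <div id="cart-summary">4 items</div> -->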

I think you're describing a "let's do React in HTMX" mindset, rather than "let's go all in on the HTMX model". And I might be giving HTMX too much credit, but it has totally changed how I go about building web applications.


The slippery slope that scares me (as a React developer) about htmx (or Hotwire.dev, which in particular is the one I was looking at) is that you start making the assumption that the client's internet is fast.

There was a demo that showed it normally takes ~100ms to click a mouse, and if you attach to the on-mouse-down, then by the time the mouse has been released (100ms later), you can have already fetched an updated rendered component from the server.

And while that's very cool, my second reaction is "is it ever acceptable to round-trip to the server to re-render a component that could have been updated fully-client side?" What happens when my internet (or the server) is slow, and it takes more than 100ms to fetch that data? Suddenly that's a really bad user-experience. In the general case this is a subjective question. I personally would rather wait longer for the site to load, but have a more responsive site once it did.

There's not a perfect solution to this, because in a complex site there are times that both the server and the client need to update the UI state. But having the source of truth for the UI located in a server miles away from the user is not a general-purpose solution.

(I'm not advocating for the status quo, either. I just wanted to bring up one concern of mine.)


> you start making the assumption that the client's internet is fast.

The most common trajectory for React and other SPA framework apps is to also make this assumption, waving away the weight of libraries and front-end business logic with talk of how build tools strip out unused code so it must be light, while frequently skipping affordances for outright network failure that the browser handles transparently. Oh, and hey, don't forget to load it all up with the analytics calls.

But maybe more crucially: what's the real difference between the overhead of generating / delivering markup vs JSON? They're both essentially data structure -> string serialization processes. JSON is situationally more compact but by a rough factor that places most components on the same order of magnitude.

And rendered markup is tautologically about the data necessary to render. Meanwhile JSON payloads may or may not have been audited for size. Or if they have, frequently by people who can't conceive of any other solution than graphql front-end libraries.

Whether you push html or json or freakin' xml over the wire is a red herring.

Heck, "nativeness" might be a red herring given frequent shortcomings in native apps themselves -- so many of them can't operate offline in spite of the fact that this should be their strength, because native devs ALSO assume the client's internet is fast/on.


I think you're talking past each other: the problem isn't assuming the client's internet is fast, the problem is assuming the client's internet is stable.

If you replace most interactions that could be resolved client-side with a network transaction, you're betting on the client's internet being not just reasonably fast but also very stable. When I'm on the go, my internet is more likely to be fast than stable.


> The problem is assuming the client's internet is stable.

Yep. This is the major drawback of backend-dependent interactions. This is what scares me away from amazing technologies such as ASP.NET Core Blazor Server where I can code my frontend in C# instead of JavaScript.

If only Blazor Wasm wasn't so heavy. 4MB of runtime DLLs is a bit off-putting for any use but intranet LOB applications.


Recent versions trimmed it down to about 1mb.


Your comment dovetails with my primary point: how an app serializes or renders data is entirely trivial compared to planning for network availability issues when it comes to app function and user experience.

GP asks: "is it ever acceptable to round-trip to the server to re-render a component that could have been updated fully-client side?" This is a question that's oriented around what is generated on the server and pushed over the wire rather than the fact that there is a network call at all.

If the network is not stable, a typical 1st-load-heavy SPA-framework will make... a tenuous network call returning JSON with iffy chances of success instead of a tenuous network call returning an HTML fragment with iffy chances of success.


It may be common when starting out, but we do have paths to optimize out of it.

We can do code splitting, eagerly fetching JS when the page is idle, optimistic rendering when the page is taking time, etc. Unlike what a lot of people like to believe, not every SPA runs 20 megs of JS on page load.

Also the initial load time being a few seconds and then the app being snappy and interactive is an acceptable compromise for a lot of apps (not everything is an ecommerce site).

When most fragments need to be server rendered, it manifests as a general slowness throughout the interaction lifecycle that you can't do much about without adopting a different paradigm. The Hey-style service-worker based caching hits clear boundaries when the UI is not mostly read-only and the output of one step closely depends on the previous interactions.

I joined a place working on a larger rails+unpoly+stimulus app which started off as server rendered fragments with some JS sprinkled in, but after two years it had devolved into spaghetti: to figure out any bug I'd typically need to hunt down what template was originally rendered, whether or not it was updated via unpoly, whether or not what unpoly swapped in used the same template as the original (often it did not), whether or not some JS interacted with it before it was swapped, after it was swapped, etc. All in all I felt like if you push this to use cases where a lot of interactivity is needed on the client, it is better to opt for a framework that provides more structure and encapsulation on the client side.

I am sure good disciplined engineers will be able to build maintainable applications with these combinations, but in my experience incrementally optimizing a messy SPA is generally more straightforward than a server-rendered-client-enhanced mishmash. YMMV.


> Unlike what a lot of people like to believe not every spa runs 20 megs of js on page load

This is not a new take, it's exactly what every die-hard SPA dev says. While 20MB is an exaggeration, the average web page size has ballooned in the past decade, from ~500KB in 2010 to around 4MB today. And the vast majority of those pages are just text; there is usually nothing really interactive in them that would require a client-side framework.

Others will say 2MB, 4MB is not that bad, but that just shows how far out of touch with the reality of mobile internet they are. Start measuring the actual download speeds your users are getting and you'll be terribly disappointed even in major urban centers.


On a transatlantic flight I recently had the displeasure of browsing over a satellite connection. A lot of sites simply never loaded, even though the connection speed was reasonable. The multi-second latency made these sites that loaded tens to hundreds of resources completely unable to render a single character to screen.


For a real world example of this, GitHub uses server-side rendered fragments. Working with low latency and fast internet in the office, the experience is excellent. Trying to do the same outside with mobile internet, and even with a 5G connection, the increased latency makes the application frustrating to use. Every click is delayed, even for simple actions like opening menus on comments, filtering files or expanding collapsed code sections.

I'm actually worried that, for developers in developing countries where mobile internet is the dominant way to access the Internet, and with GitHub now the de facto way to participate in open source, this is creating an invisible barrier to access.


I love HTMX and similar technologies but I think GitHub is a particularly telling example of what can go wrong with these techs. The frontend is so full of consistency bugs that it's appalling: https://youtu.be/860d8usGC0o?t=693


Github does? Maybe that's the reason why I often get the error message: "page took too long to render" after ten seconds of waiting.

example: https://github.com/pannous/hieros/wiki/%F0%93%83%80 this is admittedly a complicated markdown file, however it often fails on much simpler files.


I hate it when devs implement their own timeouts. That’s handled at the network level, and the socket knows if progress is being made. I was stuck using 2G data speeds for a couple of years and I loathed this behavior.


Sometimes the infrastructure causes this. For a long time (and now?) AWS API Gateway has had a hard cap of 30 seconds, so the sum of all hops along the way needs to remain under that.


A timeout at that level should mean “no progress” for 30s, not that a request/response needs to finish in 30s. A naive timeout that a dev randomly implements might be the latter and would be the source of my past frustration.


that's a good reason to invest in self hosting! https://git.jeskin.net/hiero-wiki/file/%F0%93%83%80.md.html


oh, thanks for cloning! but [[links]] don't work and other (internal)[links] don't link to markdown.md.


gah, you're right. perhaps that could be fixed with a few clever grep/sed incantations. very interesting repo if you're the author, by the way.


Side note: in Thailand and the Philippines, at least, mobile internet is blazing fast and not more expensive.


As someone living in one of those countries, I beg to differ. Cheap maybe, but mobile internet is neither fast nor stable.


I'm guessing that's Philippines :-] It's still been good enough for me to get work done, video calls, etc. And mostly better than hotel/coffee shop WiFi.


It's mostly the metal roofs everywhere blocking the signal.


That's not universally true in all areas for both countries though.


In Vietnam it is also fast.


As someone who has been writing code for 30 years and has been developing "web apps" since the late 90s, it's really funny to me how things come full circle.

You just described the entire point of client-side rendering as it was originally pitched. Computation on the server is expensive and mobile networks were slow and limited in terms of bandwidth (with oppressive overage charges) just a few years ago. Client-side rendering was a way to offload the rendering work to the users rather than doing it upfront for all users. It means slower render times for the user, in terms of browser performance, but fewer network calls and less work to do server-side.

In other words, we used to call them "Single Page Web Applications" because avoiding page refreshes was the point. Avoid network calls so as to not consume bandwidth and not make unnecessary demands of the server's limited computational resources.

Now things might be changing again. Mobile networks are fast and reliable. Most people I know have unlimited data now. And while computation is still one of the more expensive resources, it's come down in the sense that we can now pay for what we actually use. Before we were stuck on expensive bare metal servers and we could scale by adding a new one but we were likely overpaying because one wasn't enough and two was way overkill except for peak traffic bursts. So we really scrambled to do as much as we could with what we had. Today it might be starting to make sense to make more trips back to the server depending on your use case.

To address your concern about latency or outages, every application needs to be built according to its own requirements. When you say "there's not a perfect solution to this", I would say "there is no one size fits all solution." We are talking about a client / server model. If either the server or client fails then you have failed functionality. Even if you can get yourself out of doing a fetch, you're still not persisting anything during an outage. The measures that you take to try and mitigate that failure depend entirely on the application requirements. Some applications strive to work entirely offline as a core feature and they design themselves accordingly. Others can accept that if the user does not have access to the server then the application just can't work. Most fall somewhere in between, where you have limited functionality during a connection interruption.


People always take a good idea too far.

There's nothing wrong with loading a page and then everything on that page loads data from the server and renders it.

Where the issues come in is that the modern SPA approach claims loading a new page is unacceptable, and that somehow doing so means you can't fetch data and render anymore.

It's just not true.


I think the term SPA is somewhat confusing. Why can't an SPA, or parts of it, be rendered on the server as well?


> we used to call them "Single Page Web Applications" because avoiding page refreshes was the point

I wonder if the problem was really that the whole page was reloaded into the browser, which caused a big "flash" because all of the page was re-rendered. The problem maybe was not reloading the page from the server but re-rendering all of it. Whereas if you can load just parts of the page from the server, the situation changes. It's OK if it takes some time for parts of the page to change, because nothing gets "broken" while they are refreshing. Whereas if you reload the whole page, everything is broken until all of it has been updated.


The problem was there was no concept of reusable components. IMO htmx is not the headline here but django-components (https://pypi.org/project/django-components/) is. Managing HTML, CSS and JS in component-reusable chunks on the server used to be extremely awkward, especially when you begin to need lifecycle events (HTML appeared on the page, let's attach all the necessary event listeners, but only to the right element - even in a list of 5 elements; internal HTML changed, let's see which things need more events, etc).

I would try this approach out in a typechecked language, if I'm certain a native mobile app isn't going to be needed.


I think your explanation makes it very clear.

The difficulty with web development is there are 3 different languages (HTML, CSS, JS) which all need to make some assumptions about what is coded in the other languages. The JavaScript refers to a DOM element by id; it assumes some CSS which the JS can manipulate in a specific way.

The ideal goal much of the time has been: "Keep content separate from presentation, keep behavior separate from the content etc.". While this has been achieved at a superficial level by keeping CSS in a .css file, content in a .html file and behaviors in a .js file, they are not really independent of each other at all. And how they depend on each other is not declared anywhere.

That means that to understand how a web-page works you must find, open and read 3 files.

Therefore, rather than having 3 different types of files, a better solution might be to have 3 files all of which contain HTML, CSS, and JS. In other words, 3 smaller .htm files, each also embedding the CSS and JS it needs.
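
A rough sketch of what one such self-contained .htm fragment could look like (the names here are hypothetical), with the markup, its styling and its behavior declared side by side:

  <!-- comment-box.htm: hypothetical self-contained fragment -->
  <style>
    .comment-box textarea { width: 100%; }
  </style>

  <div class="comment-box">
    <textarea placeholder="Write a comment"></textarea>
    <button type="button">Post</button>
  </div>

  <script>
    // behavior lives right next to the markup it depends on
    document.querySelector(".comment-box button")
      .addEventListener("click", () => {
        console.log("post clicked");
      });
  </script>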


We had this middle ground of returning html fragments and updating subsections of the page. For posting a comment, for example.

There were plenty of sites doing that in the mid 2000s.


> There was a demo that showed it normally takes ~100ms to click a mouse, and if you attach to the on-mouse-down, then by the time the mouse has been released (100ms later), you can have already fetched an updated rendered component from the server.

I think what you're describing is a form of preloading content but it's not limited to React.

For example:

The baseline is: You click a link, a 100ms round trip happens and you show the result when the data arrives.

In htmx, Hotwire or React you could execute the baseline as is and everyone notices the 100ms round trip latency.

In React you could fetch the content on either mouse-down or mouse-over so that by the time the user releases the mouse it insta-loads.

But what's stopping you from implementing the same workflow with htmx or Hotwire? htmx or Hotwire could implement a "prefetch" feature too. In fact htmx already has it with https://htmx.org/extensions/preload/. I haven't used it personally but it describes your scenario. The API looks friendly too, it's one of those things where it feels like zero effort to use it.
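
Going by the extension's docs, using it is roughly this (my sketch, untested; the URLs are placeholders): enable the extension on a parent element and mark the links that should be fetched early.

  <!-- the extension ships with htmx 1.x under dist/ext/ (path assumed) -->
  <script src="https://unpkg.com/htmx.org@1.8.2/dist/ext/preload.js"></script>

  <body hx-ext="preload">
    <!-- fetched on mousedown by default, so the response is often back
         before the mouseup/click even fires -->
    <a href="/next-page" preload>Next page</a>

    <!-- or start even earlier, on hover -->
    <a href="/other-page" preload="mouseover">Other page</a>
  </body>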

Hotwire looks like it's still fleshing out the APIs for that, it has https://turbo.hotwired.dev/handbook/drive#preload-links-into... for pre-loading entire pages. There's also https://turbo.hotwired.dev/reference/frames#eager-loaded-fra... and https://turbo.hotwired.dev/reference/frames#lazy-loaded-fram... which aren't quite the same thing but given there's functionality to load things on specific events it'll probably only be a matter of time before there's something for preloading tiny snippets of content in a general way.


I actually have a module that I built that loads pages using such an approach (prefetch). In my take, I refined pre-fetching to be triggered a few different ways. You can fetch on hover, proximity (ie: pointer is x distance from href), intersection or by programmatic preload (ie: informing the module to load certain pages). Similar to Turbo, every page is fetched over the wire and cached, so a request is only ever fired once. It also supports targeted fragment replacements and a whole lot of other bells and whistles. The results are pretty incredible. I use it together with Stimulus and it's been an absolute joy for running SaaS projects.


Sounds really good, do you have that code published somewhere? Any plans to get it merged into the official libs?


I do indeed. The project is called SPX (Single Page XHR) which is a play on the SPA (Single Page Application) naming convention. The latest build is available on the feature branch: https://github.com/panoply/spx/tree/feature - You can also consume it via NPM: pnpm add spx (or whichever package manager you choose) - If you are working with Stimulus, then SPX can be used instead of Turbo and is actually where you'd get the best results, as Stimulus does a wonderful job of controlling DOM state logic whereas SPX does a great job of dealing with navigation.

I developed it to scratch an itch I was having with alternatives (like Turbo) that, despite being great, leverage a class based design pattern (which I don't really like), and with other similar projects that were either doing too much or too little. Turbo (for example) fell short in the areas pertaining to prefetch capabilities, and this is the one thing I really felt needed to be explored. The cool thing I was able to achieve with SPX was the prefetching aspect, and I was surprised no-one had ever really tried it, or if they did, the architecture around it seemed to be lacking or just conflicting to some degree.

A visitor's intent is typically predictable (to an extent), and as such, executing fetches over the wire and from there storing the response DOM string in a boring old object with UUID references is rather powerful. SPX does this really efficiently and fragment swaps are a really fast operation. Proximity prefetches are super cool, but equally powerful are the intersection prefetches that can be used. If you are leveraging hover prefetches you can control the threshold (ie: prefetch triggers only after x time), and in situations where a prefetch is in transit the module is smart enough to reason with the queue and prioritise the most important request, aborting any others, allowing a visit to proceed uninterrupted and without blocking.

In addition to prefetching, the module provides various other helpful methods, event listeners and general utilities for interfacing with the store. All functionality can be controlled via attribute annotation, with extendability for doing things like hydrating a page with a newer version that requires server side logic and from there executing targeted replacements of certain nodes that need changing.

Documentation is very much unfinished (I am still working on that aspect); the link in the readme will send you to WIP docs, but if you feel adventurous, hopefully it will be enough. The project is well typed, rather small (8kb gzip) and it is easy enough to navigate around in terms of exploring the source and how everything works.

Apologies for this novel. I suppose I get a little excited talking about the project.


This looks extremely similar to Unpoly to me.


Never heard of Unpoly, but it seems really cool. I will need to have a look at it more closely, but from the brief look I'd say SPX is vastly different.

In SPX every single page visit response (the HTML string) is maintained in local state. Revisits to an already visited page will not fire another request; instead the cached copy is used, similar to Turbo but with more fine-grained control. In situations where one needs to update, a mild form of hydration can be achieved. So by default, there is only ever a single request made and carried out (typically) by leveraging the pre-fetch capabilities.

If I get some time in the next couple of weeks I'll finish up the docs and examples. I'm curious to see how it compares to similar projects in the nexus. The hype I've noticed with htmx is pretty interesting to me considering the approach has been around for years.

Interestingly enough, and AFAIK, the founder of GitHub, Chris Wanstrath, was the first person to introduce the ingenious technique to the web with his project "pjax" - to see the evolution come back around is wild.


>In SPX every single page visit response (the HTML string) is maintained in local state. Revisits to an already visited page will not fire another request,

Unpoly does this.


If the click causes a state change, it would be complicated to pre-render (but not apply) it before click.


> I personally would rather wait longer for the site to load, but have a more responsive site once it did.

If react sites delivered on that promise, that would be compelling. However, while my previous laptop was no slouch, I could very often tell when a site was an SPA just by virtue of how sluggish it ran. Maybe it's possible to build performant websites targeting such slower (but not slow!) machines, but it seemed that sluggish was quite often the norm in practice.


I wouldn't assume a fragment is any bigger than the raw data when it's compressed.

  { "things": [
    { "id": 183,
      "name": "The Thing",
      "some date": "2016-01-01"
    },
    { "id": 184,
      "name": "The Other Thing",
      "some date": "2021-04-19"
    }
  ]}
Vs

  <tbody>
    <tr><td>183</td><td>The thing</td><td>2016-01-01</td></tr>
    <tr><td>184</td><td>The other thing</td><td>2021-04-19</td></tr>
  </tbody>
They seem extremely similar to me.


The issue with this model is that many state updates are not scoped to a single fragment. When you go to another page, you’ll likely want to update the number of results on one or more detached components. That’s way more natural to do by getting the length of an array of data structures than from an HTML fragment.


Possibly, yes, although many SPA sites seem to hit the server for updates on every (visual) page change anyway, to get the latest results. It's rare that a UI will hold all the records to count locally, unless it's a small and slow-changing data set, so it's asking for a count whether or not it gets it wrapped in HTML or JSON.


We have a set up like this at my job. The servers are based in the Pacific Northwest. We have users in Europe and India.

You can guess how awful the user experience is.


It’s almost as if developers are treating latency as having a kind of Moore’s law as with memory or cpu.


Would this not be a concern for React (and other SPAs) as well? I'm no UI expert, but from what I've seen of React/Vue UIs in previous companies, you still have to hit the server to get the data, though not the UI components. The difference in size between just the data in, say, JSON, and the entire HTML component would be very minimal considering both would be compressed by the server before sending.


There are frameworks that let you apply changes locally and (optimistically) instantly update the ui and asynchronously update server state. That is a win.

On the other hand, I have seen implementations of spa “pages” that move from a single fetch of html to multiple round trips of dependent API calls, ballooning latency.


Even worse for Chinese users who have to browse many US sites with a VPN (e.g. me)


> is it ever acceptable to round-trip to the server to re-render a component that could have been updated fully-client side ?

htmx doesn't aim to replace _all_ interactions with a roundtrip; in fact the author is developing hyperscript (https://hyperscript.org/) for all the little things happening purely client side.

But in any case even an SPA does round trips to get data stored on the server. The question becomes: is it better to make a backend spout JSON and then translate it, or make the backend spout directly usable HTML?


If the internet is slow, it will be horrible for the user on first load to download a full blown-up JS bundle. It will also not remove the fact that any resource change will require a round trip to the backend and back.


Oh, I completely agree with you. The vast VAST majority of sites don’t need that level of client side state management.

I’m currently working on a bio-informatics data modelling web app where htmx would not have been the right choice. But it’s in that 1-5% where that is the case. That’s kind of my point.

Outside of that project, I’m all in on the HTMX model of server side rendered fragments.


I think that if you’re building tools and you want to do anything nice like optimistic rendering, it’s not possible in HTMX, so I always wonder what kind of user experience is actually delivered on an HTMX app.



Sort of. It’s more like when you create an object, you can display the object inline immediately with a JS layer, plus show some status attribute or however deep you like.

This explicitly only works with GETters and I can’t imagine how you can show async state with a tool like HTMX easily


What if combined with this?

https://htmx.org/examples/lazy-load/
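
For reference, the example on that page is roughly shaped like this (reproduced from memory, so treat it as a sketch):

  <!-- hx-trigger="load" fires the GET as soon as the element is loaded -->
  <div hx-get="/graph" hx-trigger="load">
    <img class="htmx-indicator" width="150" src="/img/bars.svg"/>
  </div>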


With respect to showing a loader, it’s a solution. Not fully sure I understand the mechanism - is it based on the class or does all content of the div with hx-trigger get replaced? Or even worse is the image still there with opacity 0?

However, this doesn’t solve the optimistic rendering situation at all. In general the approach of HTML over the wire clearly seems barred from solving that, you need a client layer for that


IIUC I think it's possible, but maybe a bit clunky. Htmx lets you choose the swap target, and you can replace any part of the page, not just the section that triggered the swap. You can also replace multiple targets.

https://htmx.org/attributes/hx-swap-oob/

Also, there's nothing stopping you from writing a bit of JS to handle something htmx can't do.

For example, the initial GET could return two partials, one hidden by default until a user action triggers a JS swap while htmx performs the request and eventually replaces the div again along with any other out of band div(s).
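
As a rough sketch (ids and content invented for illustration), the response to that htmx request could carry the normal swap content plus an out-of-band element that updates a completely different part of the page:

  <!-- goes into the element named by hx-target, as usual -->
  <li class="comment">Thanks, saved!</li>

  <!-- out-of-band: swapped into whatever element on the page has
       id="comment-count", wherever it lives -->
  <span id="comment-count" hx-swap-oob="true">42 comments</span>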


Oh god please let’s not go back to JSF


How was it worse than the current state of affairs of complexity with React? Bowser, npm, TypeScript, obfuscation, compressors, build pipelines... it’s a lot. Life at the front-end today is so discombobulated, creating a bunch of backend APIs which will generally only be used and consumed by a single browser front-end.

I’m genuinely curious, because I never used JSF except for a single school exercise


Frankly, with yarn, typescript, and packaging / compressing tools of your choice, web frontend development is pretty pleasant and efficient these days. (To say nothing of using Elm, if you can afford it.) TypeScript in particular is nice compared to, say, Python, and even to Java (though modern Java is quite neat.)

The only unpleasant part is dependency management, but you have the same, or worse, with Python or Ruby, and neither Java nor Go are completely hassle-free either.


Typescript nicer than Python? Bc of speed? Python has typing module + mypy to be warned about typing mistakes.


It may have improved significantly since I last used it (9 months ago?) but mypy was a world away from the ergonomics, quality of tooling and maturity of TypeScript.


Really curious, how is it worse with Ruby?


With bundler, Ruby dependency management is excellent. I don't think I've ever had a problem setting up an app where the Ruby dependencies are the issue. I certainly can't say the same for JavaScript apps.


Primefaces makes it a bit more tolerable, but JSF is an ancient, slow, buggy beast that's hard to integrate with anything. Managing state on the server for every little thing is not scalable, even for a measly number of clients. You don't have to grow to be a Facebook to feel the effects of the bad design of JSF.


I used JSF when it was still in beta. It was chosen by an external “architect” as the front end for a big site with millions of views (he was anticipating that JSF would become popular, and a big project with it would be good on his resume).

Salesforce (classic) is JSF.

It’s full of bugs and quirks. But it’s kind of nice in certain situations.

The big problem here is performance load on both client and server. State is sent back and forth, and that can be huge; it needs to be deserialized, altered, and serialized back again on every action. It also doesn’t reflect any HTTP verbs. Everything is POST.

The big site was technically running on JSF, but in such a way that it wasn’t JSF any more


> Salesforce (classic) is JSF.

Here's a little bit of trivia... The Visualforce framework (that customers can write interactive pages in, and that a small minority of the standard UI is built in) is based on JSF, but most of the Salesforce Classic standard UI is written in a home-grown system that generates HTML described in imperative Java. It's more akin to an HTML-generating Tk.


Makes sense, and they had a few big architectural changes of their front end. Lightning is unbelievably slow.


> Do you really need to be managing state on the client?

Sometimes an "app" needs to work offline.


I think the point of a SPA is not how to refresh the screen when you have to do the round-trip to the server. The point is that you can do more things without making the round-trip in the first place.


> Why have HTML elements react to changes in data or state, rather than just insert new HTML elements already updated with the new state?

But what's the big difference? Something somewhere must react to change. Either modify the DOM by client-side code, or modify/replace it by loading content-fragments from the server.

I would (perhaps naively) think that doing more on the client is faster than both re-rendering on the server and reloading from the server. Maybe it's just that React is too big and complicated and therefore slow.


Somewhere along the line the only accepted voice in the industry became that everything has to be JavaScript. I still have no idea why HTML5 hasn't evolved to include features like those proposed by HTMX.

> It’s the 5% where htmx is not one of the right choices at all.

I think even 5% is an overstatement. Of my top 100 site visits per month, the only three sites that were SPAs are Gmail, Feedly and YouTube. And I don't see how any of these three couldn't be done in HTMX. The web apps, if we call them that, that actually require heavy JS usage are Google Work, Sheets, Google Maps and Google Earth, and possibly some other productivity tools like Figma.


What security issues does server rendering solve? I resent that every website needs to be an SPA, but from a security perspective I’ve concluded that the clearer line in the sand that SPA application architectures create is better than the security challenges that can result from server side rendering.

Navigating the risks around the NPM supply chain is another story, but I suspect it will be solved by large / popular frameworks gradually pruning their own dependency trees resulting from downstream pressure in the form of pull requests.


Here's one I've experienced. Suppose you have a table of customers, and you want to show an extra column of data on that page showing total orders, if and only if the viewer of that table has the manager role.

With an SPA, you'll be building an API (perhaps a 'REST' one, or GraphQL) to expose all the data required, such as customer name, email, etc, as well as the extra 'total orders' field iff they have the manager role. Now you need to make sure that either (a) that endpoint properly excludes that 'total orders' field if they lack the permission, or (b) you have a separate endpoint for fetching those kind of stats and lock that behind the role, or (c) (I hope not) you just fetch all orders then count them up! Now, having properly secured your API so that all and only the right users can get that data, you have extra logic in your front end such as 'if has role manager then show this column too'.

With a server side rendered page, you don't have to do the API security bit. You can just fetch all the data you like (often via a single SQL query), and then replicate the same extra logic of 'if has role manager then show this column too'. You've skipped the whole "make sure the API is secure" step.

Now suppose you want to add a dashboard for every admin user, not just managers, showing total orders system wide. Did you include that in your API already? Now you're likely going to need a new endpoint that allows any such user to fetch total system orders, while managers are still the only ones who can fetch per customer order totals. Having found a place to add that to your API, you can render it. With the server side page, there's no extra work to keep this secure. The dashboard is already restricted to the users who can see it, so just add the extra query to fetch total orders and display it.

In short, there's a whole layer that an SPA needs to secure for which there is no analogue with a server side rendered site. In an SPA you secure the API and then have the logic for what to show to whom. For the server side rendered site, you just have the logic for what to show to whom and that gives you the security for free.
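
For illustration, a sketch of what that server-rendered customers table might look like (a Django/Jinja2-style template; the field and role names are made up): the permission check and the rendering are the same piece of logic, so there is no separate API surface to audit.

  <table>
    {% for customer in customers %}
    <tr>
      <td>{{ customer.name }}</td>
      <td>{{ customer.email }}</td>
      {% if user.is_manager %}
      <td>{{ customer.total_orders }}</td>  <!-- only rendered for managers -->
      {% endif %}
    </tr>
    {% endfor %}
  </table>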


Yes yes yes. You're not going to get much appreciation of this from newer devs. They've only known one thing. I shudder at all the human-hours spent on duplicative tasks related to the artificial frontend/backend separation.


As a slight counterpoint to that scenario, the front-end can often just get away with just checking whether the data exists in the response, rather than checking roles. This isn't quite as simple as the SSR alternative, but it at least removes most of the hassle with making sure the permissions match on server and client.

However, this doesn't help much when the restricted data is on a separate endpoint, since the app needs to decide whether to make the request in the first place.


Can't the backend decide which fields to send back in a JSON, the same way it would decide which HTML columns it would've rendered?

The front-end can pass a user role with every request and render whatever the backend sends it into a table.


> manager role column

Tbh, I’d prefer a runtime which would itself be aware of such metadata, knew the role from a request/session context, and could build a UI table based on an all-columns-ever template, from a query automatically built for this specific case, because querying and throwing away totals may be expensive. This manual “do here, do there” is a sign of a poor platform (not that we have a better one) and of code-driven rather than data-driven access control. Painting walls and installing doors every time you want a meeting should not be part of the business logic, regardless of which end and its implications.


Yep, I think there's wisdom in what you say here for good design. My point is (using a very simple example) that there are ways in which server side rendering offers some immediate security benefits that don't automatically come for an API+front end design.

I'm not sure about its performance, as I haven't done a great deal of testing, but another tool to achieve some of what you suggest (assuming I've understood you) is using row-level security (RLS). E.g., using the obvious query, and relying on RLS rules to return only permitted data. You can similarly send the appropriate role with the query [1].

I also note with interest that Postgres 15 includes some improvements that might make this kind of approach even more viable:

"PostgreSQL 15 lets users create views that query data using the permissions of the caller, not the view creator. This option, called security_invoker, adds an additional layer of protection to ensure that view callers have the correct permissions for working with the underlying data."

[1] https://news.ycombinator.com/item?id=30706295


Yes. It's a poor design. The UI shouldn't care about permission or visibility rules. Instead it should only take data and render tables based on their data+metadata. All the permission logic should be done at the API level based on the caller ID/context.


I do case (a) like this with an SPA. A call to /orders (or any endpoint) will check roles/permissions and run the proper query. A normal user will get [{name, email}], but a manager will get [{name, email, num_orders}]. A normal user's call will not have the COUNT(*) from orders part in the SQL query anyway (which is expensive); only a manager's call will have that part. Most likely the number of managers will be smaller than the number of users, so the users get faster results. The results are returned to the front end, and if the {num_orders} column exists, it is rendered. The front end doesn't have to bother with any access control logic.

For the server-side rendered page, what it seems to me is that you are running the same query for normal users and managers, which is fine too, but removing that num_orders column.

Ultimately in both cases, the access control logic happens on the server, and the front end doesn't have complicated access control logic anyway. My point is, with an SPA we can also get the same server-side benefits, at least in this case. Or am I missing something?


If you have non-HTML native mobile applications, you’ll need to have that complexity regardless of the techniques used to construct the web page.


Yep, that's absolutely right. From my own experience (which is really quite small for that kind of thing), I tend to think that the API you want and would build for a mobile app is not going to be the same as the one you would build for your SPA site. But yes, when you build that API, you'll need that complexity.

I've been doing some hobby experimenting with doing this in a more generic way using Postgres RLS and putting a lot of that permissions checks into the database. That way, if I used PostgREST or my own solution for a mobile app API, the security rules should apply just the same to the API as they do to the website.


What do the tests look like for this? How do you check that you're not exposing the sensitive data?


I think anywhere you introduce more complexity, more ways for things to interact, it's inherently less secure without the additional work checking for both the App + the API being secure on their own.


There's no such thing as a secure "app". Only the API needs to be secure. That's more straightforward when your API looks like REST/RPC calls rather than "renders html templates to a string".


> That's more straightforward when your API looks like REST/RPC calls rather than "renders html templates to a string".

How so? You're now dealing with two applications (or two parts of an application) that need to understand and access authentication as defined by "the app"

If the same codebase handles auth across the board it's much simpler and more reliable.


From a security perspective, the client is 100% irrelevant. You might prefer to offer a good UX in the face of authorization failure, but that doesn't affect the security of your app one way or another.

Good APIs look like simple functions; they take certain very structured inputs (path, query params, headers, body) and produce a simple structured output (usually a json blob). They're usually well defined, limited in scope, often idempotent, and easy to write automated tests for.

HTML endpoints are more complex because they generally combine many different functions at once, rely on server side state like sessions, and generate large quantities of unstructured input. They tend to be hard to test exhaustively and it can be hard to reason about all the possible edge cases.


The operative word being good, which most APIs unfortunately aren't. You could make the exact same argument about APIs that you made about HTML endpoints, and vice versa. The problem, imo, is that writing a frontend is a lot harder for many people than writing a backend, and they tend to spend more time on the front-end.

Security is hard, especially when most developers are ignorant or negligent about basic best practices. If I had a nickel for every website I've found that only has client-side validation I'd be rich.


Maybe it helps if you think of it this way:

An application can be very imperfectly thought of as having two bodies of logic, frontend logic and backend logic. In an SPA, if you secure the backend logic you are safe no matter what mistakes are made in the frontend. When rendering server-side HTML, the frontend logic and backend logic both run in a privileged security context. The attack surface area is larger.


> In an SPA, if you secure the backend logic you are safe no matter what mistakes are made in the frontend.

If. The problem I've observed is that people treat the backend as a dumb pipe for data and focus entirely on the frontend.

> When rendering server-side HTML, the frontend logic and backend logic both run in a privileged security context.

This isn't necessarily a bad thing. Business logic happening in a protected and opaque context means it isn't exposed and easy to reverse engineer or manipulate. An extremely common vulnerability on SPAs is "get a list of all the $STUFF user is allowed to see from endpoint A, then get all the $STUFF from endpoint B and filter out all the results they shouldn't see" because everything is still visible to the client; that exact same (suboptimal) logic is inherently more secure on the server. Another common one being "are we logged in or should we redirect?" Conversely, rendering content on the server makes it a lot easier to prevent certain vulnerabilities like CSRF.

That's not to say that I think SPAs are bad and AJAX is good, I just find the argument that SPAs are more secure if you secure the backend dubious. A SPA with an insecure backend can be just as insecure as a backend rendering HTML because the weak-point is the backend itself.

Edit: You could perhaps argue that SPAs are indirectly better from a security perspective because text serialization is safer than binary serialization. Though any serialization is still a potential weakness.


> The problem, imo, is that writing a fronted is a lot harder for many people than writing a backend and they tend to spend more time on the front-end.

The question then is whether the API is more part of the frontend or the backend.

If your backends are relatively easy and small, I think you should try to keep your APIs in that space and e.g. return JSON from a simple REST API with endpoint-level security.

On the other hand, if an API threatens to bloat and complicate your backend, use an API framework like Postgraphile or Hasura that gives you the tools to build powerful and secure APIs by writing some simple code or even no code at all.


> Only the API needs to be secure.

If users type passwords, sensitive data, anything into the frontend then any javascript, plugins etc pulled in by that page is an attack vector.


Unless you do something extremely silly with the login page, like sending it as a GET parameter, or storing it locally, or not having a CSRF token, or not using HTTPS, I don't see what special measures are required!


Sensitive data is not restricted to logins. If you are pulling in third party JS like for analytics, tracking, social, whatever then that is an attack vector. Marketing and business teams aren't responsible for security but they have the muscle to pull in dangerous code to the frontend that can be swapped out. It is naive to think that the API is the only thing to focus on.


The likelihood of a vulnerability in serverside logic is far higher and more impactful than a large marketing player like google analytics stealing PII.


If the POST content type is application/json and is enforced, CSRF is not possible. You cannot CSRF a PUT or DELETE unless you modify CORS policies, all of which are server side vulnerabilities and not related to using an SPA vs server side rendering.


You can import malicious client side plugins irrespective of if it’s an SPA or server rendered. I’d much rather a silly plug-in be limited to client side JavaScript (which is sandboxed) over server side logic, which is not.


It's more straight forward when you have multiple kinds of frontends (iOS, Android, Web app), but if it's just a Web App it's less complex to just return HTML.

The default today to make an API is because we assume it'll have multiple frontends or because we want to make it easier to change frontends. That does not make it more secure; it's just a trade off we, as an industry, have made to deal with the reality of delivering apps/services to users.


> "I think anywhere you introduce more complexity"

When your website is the actual complex application, an SPA makes very much sense and is actually less complex. And no, you do not have to download all of it into the browser. Load parts, replacing some inner HTML and scripts, on an as-needed basis. Works like a charm. I am using a couple of JS libs but no framework, and there is no need to "build".

As for security - the JS app talks to my own C++ backend using some JSON based RPC, so I just have to validate the API, which is way less work as well.



I am in 100% agreement with you: pick the right tool for the job.

My hope is that, with htmx, HTML/hypermedia is that right tool for more jobs.


I’ve been exploring various alternatives and I’m certainly of the same mind as you.

It’s always trade offs and that’s fine.

What I find interesting/ frustrating is every framework or option likes to sell you on some numbers that are very nice but so specific they’re not the big picture. And/or talk about how they are different / better than another framework that I might not even be familiar with.

Technical pages now have their own confusing sort of developer marketing.


I have 2 problems with htmx (IMHO):

1. SoC: the server API needs to return only the data and metadata requested, and it should not be concerned with the display layer (HTML), because many different clients, e.g. mobile app, browser, Electron etc., might want to consume this API.

2. Logistics & scaling: Imagine a large application with 100s of html/htmx components and views; now you alter your database, you introduce new business rules, etc. If you used React or Alpine.js etc. you could just go to the relevant stores or app code and make the change, as opposed to sifting through and refactoring tons of HTML.

Personally I'd rather just use Alpine.js from the start, knowing it's lightweight, fast and easy to implement, and not end up painting myself into a corner over the application lifecycle.


1. https://htmx.org/essays/locality-of-behaviour/

2. many large applications use htmx (or related approaches like hotwire, unpoly, etc.) and scale fine. hypermedia is better at handling database/logic/API changes than fixed-format data APIs because your API is encoded within hypermedia responses: https://htmx.org/essays/hateoas/


LoB, yeah, on a small scale perhaps; I guarantee you will end up with a cluttered mess on large applications, unless you spend a ton of extra time/work to design for scale, maintenance and dev onboarding.

I do not agree with HATEOAS; so now you have an API whose job is to produce HTML (the SoC problem), and what if a Flutter app also needs to consume this API, do you build another HATEOAS API just for Flutter?

Every few years its the same, devs adopting and defending the new shiny thing, though I have to admit, I love that this time its a way more simplified, sane alternative.


I have experience with multiple large applications that use htmx. They scale fine. They just aren't what people are used to right now.

Give it time and mull it over. It may grow on you.


I also like htmx, but this kind of comparison is fruitless. The executive summary talks about LOC, build time, # of JS dependencies, etc. Nobody started using React because of these things, so it misses the point. It doesn't compare htmx to react on the matters that really matter to people who chose react...


Some years ago, a bright guy created something called 7 GUIs. It’s a collection of seven problems that all UI systems must successfully implement, and it was intended to act as a guide for comparisons.

Unfortunately I think the author has passed away, but it would be interesting to see it rekindled and used for baseline comparisons like these.

https://eugenkiss.github.io/7guis/


I don't think Htmx is a good fit for those. I wouldn't want a server roundtrip for any of them (except maybe part of CRUD).


Hyperscript[1] pairs well with HTMX (as it should, it’s by the same group) and would fill those gaps.

[1] https://hyperscript.org/


What are the advantages of Hyperscript over plain old Javascript?


a few:

- it is designed to be embedded directly in HTML

- it can listen for any event (unlike on* attributes)

- it has native support for CSS selectors: https://hyperscript.org/docs/#dom-literals

- it has async-transparency: https://hyperscript.org/docs/#async

- subjectively, it is easier to read than JavaScript, particularly for the short, light scripts it is designed for (toggling a class, etc.): `on click toggle .clicked on me`
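
In markup, that last example ends up looking something like this (just an illustrative sketch; the hyperscript lives in the "_" attribute):

  <button _="on click toggle .clicked on me">
    Toggle me
  </button>

  <!-- it can listen for namespaced/custom events too -->
  <div _="on htmx:afterSwap add .highlight to me">
    ...
  </div>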

hyperscript is more of a glue language than something I would recommend building an entire application out of (although people are doing that!)


Wow, this is very cool! I think this applies not only to gui frameworks (react vs htmx), but also to design systems (bootstrap, tailwind ui, etc).


There are four main problems in the React SPA world:

- State management - hooks are awful for anything complex

- Packaging + Bundling - Webpack kitchen sink, JS/TS, browser vs node vs random runtimes, packaging vs bundling, etc

- Data/API binding - Adds distributed state to your data and the complexity that goes with that (cross dom reuse, caching, staleness, realtime updates, etc)

- SEO/Prehydrating - Mostly applies to landing/CRM/blog type things.

These 'old school' solutions help you avoid some of these issues in favor of relocating the complexity a bit (those templates in the talk are absolutely gnarly and I'm sure very bug prone) and reducing the experience. Also, the API boundaries are blurred should you ever need one for non-UI usage. Ignoring Hacker News' taste for simpler websites, some complexity of UIs can and should only be captured with client-side JavaScript.

With that said, after a few years of things shifting around, I'm really happy with the TypeScript (lang) + Parcel (packaging) + MobX (state management) + Vercel (when server side rendering is needed) stack. They've been really stable, no fuss, always work. Especially MobX for state; absolutely game changing for the dev experience.

As for the data binding problem, it still sucks, and my 4th attempt at solving it internally is still meh. This feels like it comes with the territory.

As much as I loved working with Django/Jinja2/etc, I think there's a light at the end of this tunnel.


I've solved your data binding problem with: GraphQL (server) + UrQL (JS graph client with the graphcache plugin) + GraphQL Code Generator (creates TS types for your API).

The data syncs perfectly, keeps up-to-date. GraphQL subscriptions allow for real time updates. Oh, and React-Hook-Form for forms, which feeds quite nicely into my GraphQL mutations. It's a real neat solution.

As for server side, I've started using Python-Strawberry (was using Graphene but it stopped receiving updates) with Django.

It's a super solid stack, with typings flowing all the way from the server API through to my react components.


Htmx is great for developers who need client side interactivity, but would rather not write any js. If you don't mind writing a bit of js however, you can't go wrong with Preact. Same api as React, but at a fraction of react-dom's bundle size. It looks like the minified source is even smaller than htmx (which is already very minimal) [1][2]. They've also packaged Preact in such a way that you don't need to add a build step to use it [3].

[1] https://unpkg.com/browse/htmx.org@1.8.2/dist/

[2] https://unpkg.com/browse/preact@10.11.2/dist/

[3] https://preactjs.com/guide/v10/getting-started#no-build-tool...


The programming model is totally different when working with HTMX. The server returns HTML instead of a JSON API which is the usual solution when building apps with React/Preact and similar.


Yea htmx definitely has a different programming model than an SPA style framework like React/Preact. I actually like the JSON api programming model where the backend is as simple as possible, and the frontend js/ts code is completely responsible for handling the UI, but I guess it's a matter of personal preference.

The linked article is referring to a team switching from React to htmx though, so for them I'd imagine it would've been a much easier transition if they'd just added a Webpack alias[1] that replaced React with preact/compat, rather than switching to a completely different UI paradigm.

[1] https://preactjs.com/guide/v10/getting-started#aliasing-in-w...


The difference in programming model IS THE advantage of HTMX. Changing from react to preact is a perf optimization.

If you are doing the SPA thing correctly you must duplicate business logic (data validation). Also when your API just sends "dumb data" (JSON) to the client, you are forced to make changes to the backend and frontend in lockstep.


According to the article we're commenting on, the programming model is not the sole reason for switching from React to Htmx. Look at the executive summary's bullet points. Four of them are related to performance (build time, time to interactive, data set size, and memory usage). Performance might not be the reason you personally use Htmx, but it's certainly put forward as an advantage in the article which we're commenting on (which is from the htmx team by the way).

I've often seen this meme repeated in debates that js frameworks require you to duplicate logic between the server and client, but most of the logic isn't being duplicated between the client and server, it's being moved from the server to the client. It's relocation, not duplication. Instead of your rails/django controllers deciding what html to render, that decision happens on the client. In this model your server is mostly just an authorization layer, and an interface between the user and the database. Hasura, Firebase, Firestore, Mongodb Realm, and several other products have been successfully built around this premise. You might not like the thick-client thin-server model, which is completely fine, but it's a somewhat subjective preference. The only objective criteria you might use to decide which approach to use is performance.


The article we're commenting on dedicates an entire section to talking about the dev team makeup and how it completely changed and unified the team approach to being fullstack. That's how thoroughly htmx changed the programming model.


You haven't disagreed with anything I said. My original comment says "Htmx is great for developers who need client side interactivity, but would rather not write any js". I'm sure a team of Python developers is enjoying not writing Javascript. My point is that you don't have to switch to a completely different paradigm to reap the performance benefits that the article lists.


I didn’t read the comment as disagreeing either. Sometimes people are commenting not to disagree ;)


I read it as a straw man argument, but that might not have been the author's intention at all. It's hard to judge tone through text alone.


Wouldn’t that also work the other way around?

Move to a JS backend and now every dev on the team can write client and server side code?


"We are fond of talking about the HOWL stack: Hypermedia On Whatever you'd Like. The idea is that, by returning to a (more powerful) Hypermedia Architecture, you can use whatever backend language you'd like: python, lisp, haskell, go, java, c#, whatever. Even javascript, if you like.

Since you are using hypermedia & HTML for your server interactions, you don't feel that pressure to adopt javascript on the backend that a huge javascript front end produces. You can still use javascript, of course, (perhaps in the form of alpine.js) but you use it in the manner it was originally intended: as a light, front end scripting language for enhancing your application. Or, if you are brave, perhaps you can try hyperscript for these needs.

This is a world we would prefer to live in: many programming language options, each with their own strengths, technical cultures and thriving communities, all able to participate in the web development world through the magic of more powerful hypermedia, rather than a monolith of SPAs-talking-to-Node-in-JSON.

Diversity, after all, is our strength."

https://htmx.org/essays/a-response-to-rich-harris/


People certainly unify their stack on JS that way. For Contexte, their team was one React/JS dev, two Django/Python devs, and one full-stack dev. So they had two and a half people writing Python and one and a half writing JavaScript before, and three Python developers after.

Switching to JS would require a full backend rewrite. The backend devs might not do well with the switch, so they might have had to let two developers go and hire a new JS dev. The presenter might have been let go with that approach, which is clearly not ideal for the person driving the project.

You'd also need to address the perf issues. htmx clearly sped things up. Is the alternative JS SSR?


Validating data on the client and not doing it again on the server will lead to security vulnerabilities. At the very least you will end up duplicating that validation code.


If I were using htmx I would still want to validate data on the client and the server. If you don't validate data on the client, then you're effectively allowing clients to DDOS your server with invalid data.


Client side validation does not protect against that. You can't prevent people from making requests to your server without your client, i.e. `curl https://example.com/api --data 'dkfjrgoegvjergv'`.


Well yeah obviously you can bypass the client code and directly connect to a server. That's not my point.

Client side validation doesn't prevent a malicious user from sending invalid requests, but it can prevent legitimate users from sending invalid data to your server accidentally. In fact, if I see validation failures showing up in my server logs for something I know should have been filtered out via client side validation, I can mark that ip address as being potentially malicious and rate-limit their future requests.

And as a user I would rather find out about validation issues immediately instead of waiting for a network round trip to the server. If I'm typing in a password for example and it doesn't meet the website's length/complexity requirements, I'd rather know as I'm typing instead of waiting for an HTTP request to complete. That extra HTTP request is wasting the user's bandwidth and the server's resources.
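
To make that concrete, a fair amount of this can be done declaratively with built-in HTML validation attributes, before any JS or network request is involved (a minimal sketch; the action URL and pattern are made up, and the server must of course still re-validate):

    <form action="/signup" method="post">
      <label>
        Password (at least 8 characters, including a digit):
        <input type="password" name="password" required
               minlength="8" pattern="(?=.*\d).{8,}"
               title="At least 8 characters, including a digit">
      </label>
      <button type="submit">Sign up</button>
    </form>
    <!-- The browser refuses to submit a too-short or non-matching password,
         so no request is wasted on obviously invalid data. -->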


+1

SPAs were a workaround for slow servers serving millions of requests in the mid-2000s. Client computers were faster, so it made sense to push UX logic there.

We've flipped things around. Servers are fast as hell for rendering HTML. We can leverage that and re-focus the client on UI code that only it can do.


They were also a response to native mobile applications and the desire to have a single API tier for all applications.

That and the panacea of being able to run the same JavaScript on server and client.


How do you implement, e.g. a sortable table if the server sends HTML rather than JSON? Have we gone full circle and gone back to jQuery?


Here's the sortable example: https://htmx.org/examples/sortable/

IMHO something slightly more complex than this example (more client-side state) will require using something like React/Vue/Svelte. But there's no need to rewrite the whole FE. Just create a component that performs the rich client-side interaction and use HTMX for the rest of your app. Win-Win.


> The server returns HTML

So back to ASP.NET WebForms update panels?


I was just picking technology for my new landing page and was looking into Preact first.

I wanted basic SSR and things like that. Preact has a library called preact-cli for this. The last commit is from Aug 17.

I ended up with SvelteKit. It just felt much more alive. I don't love learning a new technology just for a landing page, but that was kinda fun.


> The last commit is from Aug 17. I ended up with SvelteKit. It just felt much more alive.

I don’t understand - are you saying you think a project whose most recent commit was Aug 17, 2022 is not alive?


Was the worry Aug 17 less than two months ago a sign the project was abandoned? Daily changes would worry me more.


Especially given the team has been actively working on a major version for several months and released an impressive improvement to their state model in the last month. They’re active and thoughtful about what they ship.


Svelte is another great option with a much smaller footprint than React. If you don't feel like setting up SSR by hand, you can also just use Next.js which works fine with Preact [1].

[1] https://github.com/vercel/next.js/tree/canary/examples/using...


Preact is not a good comparison. If it "has the same API as React", then it's not solving any of the problems that the post talks about.

Bundle size is 1 problem of JS-heavy SPAs but the post goes into many other drawbacks not addressed by something like Preact.


Preact would solve most (if not all) of the performance problems listed in the executive summary of the post (build time, time to interactive, memory usage, and maximum data set size). It doesn't seem like a terrible comparison to mention a framework with the exact same api as the one the team in the post was using for 2 years before adopting htmx.

It's a pretty short post, but a lot of it boils down to the fact that the team is using a single language throughout the whole stack (Python), which allows everyone on the team to be a full stack developer. Like I said, if you're against writing js then htmx is probably a better solution for your team.


If you want a non-JavaScript client, ditch the browser all together and build a native client.


I think you have it backwards.

If you want to build a client using HTML, then targeting the browser is exactly the right choice.

Javascript-heavy SPAs are much closer to traditional native apps - and lose some of the benefits of running on the web/ in the browser as a result.

There's a lot more reason to consider native app if you are building an SPA than if you are building an MPA/SSR app.


I'm suggesting not using HTML for the client. Anything running in a browser is heavy and slow compared to well written native code.


I think there's an argument for HTML-focused clients - the whole web (including this site) benefits from hypermedia.

I'm less convinced about Javascript SPAs. Perhaps the biggest advantage is that you don’t have to "install" these apps. The browser just downloads the source and runs it automatically in a (somewhat) sand-boxed environment. Also, they work on "most" systems (though mobile vs. desktop is still a thing).


People forget that React can be used for just the complex components that require it. Your entire page need not be a React app just because you use it for one component.

To me it's bonkers that all web sites seem to be moving towards SPA when they don't benefit from it at all and require so much frontend work when a little bit of simple HTML and Javascript could do most of the work. It feels like the whole industry is Reactifying everything because they're scared that if they don't they won't look professional.


React has a self-perpetuating marketing machine. From boot camp courses to YouTube how-tos and frequency of appearance in job description requirements, people end up hiring React developers who want to build sites in React.


What is your point exactly? Please explain why is that bad and how the whole problem can be solved better - which lib or approach etc?


People think React or similar SPA libraries are the way to do frontend development, so all frontend development seems to be React at this point.

"When all you have is a hammer, everything looks like a nail" is the problem. Another new library / framework will not solve this.

Frontend devs should go for the bare minimum when designing web sites, only introducing libraries as they're needed -- and really thinking about whether they're really needed or not. They need to ask some questions like:

- Maybe you can roll your own specialized solution that'll be smaller and faster than a general solution?

- Does the thing you're working on need to be an SPA, or would a good old multi-page web site be enough for your use case?

- If you need parts of your website to be interactive, can you update only that part (the "islands" approach) by hand instead of introducing a dependency to do that?

There is a lot of complexity going on in our field right now and not enough people seem to care about that. The complexity is needlessly inflating our application / web page sizes and reducing performance.


React isn't an SPA library.


The point is that React has become the IBM of web app frameworks, in the "nobody got fired for using IBM" sense.

Your CTO, PM, frontend dev and DevOps people likely all have React experience under their belt. So it isn't surprising if React ends up becoming the framework of choice, simply because it will likely be faster to start with.


This is an excellent point. It's a philosophy that I think Astro gets right.

https://docs.astro.build/en/concepts/islands/

Combined with Preact instead of React, I think it allows for server content first, and sprinkling in interactive components as needed without much extra page size.

At the end of the day a good component based js framework makes it a lot easier to implement a lot of things client side.


Resourcing for frontend is basically hiring for the superset of frontend tech. Once you have that team in place, of course they'll use the tool that has the most options. It's a safe bet even if it's not the best performance, and it's important on the resume.


What's the issue with using Next/React to build static and SSR web pages?

React doesn't only work for SPA.


If that requires full client side re-hydration in order for those SSR pages to become interactive, then the user receives a worse experience.


It's also a bit shocking how much javascript and json get loaded client side to facilitate this hydration for Next.js.


I am fortunate to have gone 10 years without having to use react in production ever.

I've used rails+turbolinks and included Vue or Stimulus or even jQuery to get on page interactivity done.

The current state of the React / Jamstack world strikes me as an HR-led solution where you can get humans who only know JS to be more productive without having to worry about servers, databases or infrastructure.


It’s not the most popular framework in the world for web apps because it pleases front end devs who are scared of the backend.


It is a cliche but probably true: a second rewrite will be better by those sorts of metrics even without a tech change. Lessons learned get applied. The domain is fully known.

I definitely believe in fewer LOC with any not-react due to all the hooks, memo, etc. boilerplate though!


This wasn't a complete rewrite, it was a feature-for-feature rewrite of the front-end, moving from react to htmx, mainly as a proof of concept at first. Note that the amount of python actually increased, as logic was moved back to the server.

While I'm sure there were a few places where better decisions were made, I think it's reasonable to assume that the majority of the reduction in code is due to the different approach to front end development.


We have experienced similar results moving to htmx and a hypermedia approach for building dynamic web apps. It simplifies development greatly and has resulted in better productivity.

Like the results cited here, we too are extremely pleased that the majority of our coding now falls into our preferred language - Clojure (using Hiccup). Very pleasant experience!


As an aside, the author of this project is also the author of grugbrain.dev, the most wonderful and accurate software development site I've ever encountered.


Wow!! This site made me laugh a few times while collecting these great lessons. Very well written!


I am in a remote village in India, Assam, Barpeta, Maranadir par. htmx or hotwire or pjax works great here on my 4g mobile network (Jio, Airtel) and surprisingly 3g BSNL (govt. supported isp). I think at least in India internet speed problem is a solved problem.


Link to the talk mentioned if you were looking for it as I was: https://youtu.be/3GObi93tjZI


A note I’ve internalized lately: there is a middle ground.

You can write excellent, lightweight React apps by simply writing semantic HTML and taking advantage of the platform (declarative form validations, a simple CSS pipeline, etc). It doesn’t have to be madness in the front-end.

NextJS, in particular, keeps the build pipeline quite simple.


In the React ecosystem, Remix.run embraces a much simpler approach than Next.js.


This is a really beautiful approach IF you don't need mobile. If you do need mobile, then you may be better served (pointlessly) running React on web and re-using all of the logic/fetchers, etc for React-Native on mobile IMO.

Your other option would be to do something like 37 Signals and render web with htmx (or in their case Hotwire) and server-render mobile too, but wrap the web views with a thin native mobile navigation layer to make it feel more native.

Very jealous if your project is web-only - this can work wonderfully and be incredibly efficient! I've been thinking about this a lot recently and also just blogged about it (https://nikodunk.com/2022-05-10-the-tech-stack-for-maximum-e...).


If the hypermedia approach really grabs you, you could pair an htmx-based web application with a HyperView-based mobile application:

https://hyperview.org/


I can't decide if this is awesome or terrifying. Will try it out :)


:) why not both?


This is super interesting - thanks for sharing!


Efficient for dev time, not so sure about mobile app size, RAM usage, or response time.


Definitely! Wasn't considering the other 3 in this discussion. The most efficient for all of these is most likely a native app everywhere, and all the complexity/duplication that brings.


I'm just so burnt out with new things. How do I combat such burnout? I really don't care about 'Htmx', I think barely any apps need much beyond what was available in ~2006 in terms of web tech. I just feel tired knowing that moving from React to Htmx is an option. Yet another option which will very possibly bring zero commercial value (although it might be a nicer dev experience I guess) to any project I ever work on. Am I wrong?


Don't look at it. There's a million new things being invented and ignored every day but for some reason if it touches web dev it 1) gets a thread 2) gets upvoted and 3) you people are near the top of every comment thread.

Whatever led to you posting that you don't care about this rather than just not even noticing it like all the new things you see every single day that don't even penetrate your consciousness. Find and kill that.


I really like this comment.

It pointed my attention to how my mind anthropomorphizes Hacker News comments and perceives them as an amorphous, invisible, ultra-smart conversation partner.

I wish I could have a meat space equivalent for that but given the amount of possible topics it’s hard to imagine a human equivalent.


That's understandable, the front end world is extremely churny.

If you don't want to deep dive on htmx-as-a-tool due to burnout, I completely understand. But, at some point, it might make sense to read up on the philosophy behind it (hypermedia as an architecture) because that is an area where it is different from most front end frameworks today.

I have a collection of essays here:

https://htmx.org/essays


Thanks, I'll definitely take a look given your (and others') replies here.


Well in this particular instance, their codebase size was reduced 67% and JS dependencies went down from 255 to 9 with TTI cut in half.

I'm not one to always chase the new shiny, but that's interesting enough to explore & I think does bring commercial value.

Last note: HTMX really feels much more like 2006-style web tech in philosophy which is a refreshing counter to all the current js complexity.


Just use vanilla html/js/css. For most stuff that's fine. React is good for web applications that require reusable components and have a lot of state to manage, but if you are building a personal site just use vanilla.


Vanilla JS definitely needs some discipline. I just made an app in vanilla JS for the first time in a while, and OMG it’s a rat’s nest XD


Have you ever tried writing a native client? I think the native client path is something that a lot of developers don't even consider these days. I think it's an area of great opportunity.


What do you mean? Like writing one's own framework, such as devoutsalsa.js?


I have no idea what that is, but since it's javascript I assume it's still running in the browser.

I'm suggesting getting out of the browser all together and writing a native client. Before Evernote moved to Electron, it's what they did. Their Windows client was written in C++ and their Mac client was Objective C.


I’m a fan of building with vanilla JS and I agree 100%.

It can be simpler, but it can get out of hand just as easily as any front end framework.


Why not just limit your perusal of the new stuff? It's interesting enough if you're in the game to keep apprised, from a 10,000 ft pov, of what's around. A delimited once-a-day/week/whatever browse fulfils that. But stop there. Let all your further dives into new tech be driven purely by actual needs/uses (career or business or hobby etc).

The solution to the Paradox of Choice is to opt out. Human minds aren't soul-stuff magic - they're evolved systems whose history hasn't equipped them to deal fluently with unlimited choices. That's just physical reality, so we need to comport with it, not with the blandishments of the virtual business/tech world.

In many domains of contemporary life, one way to be free (and reduce anxiety) is to use volitional attention to restrict the range of choices we're presented with.


HTMX is much closer to 2006 web than React.


that's very true, it's an extension of HTML as a hypermedia, and you can achieve useful patterns with as few as one or two additional attributes (that are extremely symmetric with "normal" HTML).

a good example is lazy loading:

https://htmx.org/examples/lazy-load/

two plain HTML attributes that give you a nice tool for deferring expensive calculations, so that the rest of the page gets to interactive more quickly for users
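
For reference, the linked example boils down to roughly this (the /graph endpoint and spinner image are placeholders):

    <!-- The div is rendered empty, then htmx fetches the expensive
         fragment once the page has loaded and swaps it in. -->
    <div hx-get="/graph" hx-trigger="load">
      <img class="htmx-indicator" alt="Loading..." src="/spinner.gif"/>
    </div>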


Just stick with web standards and life will be a lot simpler. All frameworks, including React, will eventually go by the wayside, but the standards will still be there. I've been building on standard Web Components exclusively since 2015 at multiple jobs and it has sustained a very successful career so far.


You can get by mostly ignoring the churn. I still haven't switched to hooks with React; I'll figure it out when I run into a codebase using them


> How do I combat such burnout?

What works for me: don't waste time with frameworks.

The web standards are simple to learn and are also wonderfully not opinionated.


I dunno, I took this advice and ended up rolling my own thing that looked an awful lot like a framework, but wasn’t as good because the framework authors are better at JS than I am.

I’ve landed on Svelte+Typescript lately, and It’s Really Doing It For Me (tm). I think the trick is to find a framework or library that gets you, and just run with it.


Not using frameworks means more than the tools you pick (or make).


For me, this feeling came from trying to stay on top of all the innovation happening out at the edge of problem domains. The thing is, none of those problems at the edges applied to me. I was chasing them for the sake of chasing them.

Once I stopped chasing the outer edge of what everyone else was trying to solve, and instead focused on the problem directly in front of me, the anxiety of being on the technical treadmill went away. When I have a problem to solve, I research it. As the outer edges get figured out, they start forming boring tech. I try to keep things boring when doing research - and avoid running up the treadmill unless necessary.


I am totally with you on this. I've been a React dev for 5 years. When they release new stuff (like hooks), I just wait until it becomes unavoidable, or I need to take interviews for a new job.

I was hesitant to switch to Hooks, or Context. I only studied them when I was preparing for a new job and interviews for it.

I still feel queasy when I encounter obscure Typescript features (e.g. Omit/Pick, Generic Type Templates <T>) used in a codebase. Frontend simply does not need this much complexity... But I just google and find out. I don't personally go out of my way to bring new features into my codebases. Perhaps the only time they become unavoidable is when you're interfacing with a library or building a library to be interfaced with.

I have done seriously complex computer vision stuff, 3D game development, embedded electronics. I have seen complex code where it had to be. People make frontend development more complicated than it needs to be most of the time. They just end up plastering everything with newest/coolest tech.


Unless you need to hire. Then 2006 tech is not viable.

2006 tech in frontend = COBOL level stuff


This is not a correct interpretation. I'd argue the basics - HTML, CSS and Javascript - haven't changed that much since 2006. jQuery came out in 2006 and we were manipulating the DOM back then. Now we have frameworks like React, where we are manipulating the DOM. The only differences are new browser APIs and much better code maintainability.


Speeding up the build could be done in other ways. Were they using esbuild or something fast previously? Probably not.

The memory usage saving is really not a big deal, load up your favourite news web page and look at the memory it chews up in comparison.

As for the preference implied for python over js, that's fine, but the choice is not binary, typescript is my preference for a React project, for example.

Reduction in LOC is slightly misleading, JSX creates a lot of lines.

It would be interesting to see a proper write up on the data React couldn't handle.


Agreed. More details are necessary to understand the potential benefit of such rewrites.

And to be honest, 21K LOC is not a big project. I can imagine that as the project grows even larger, there will be problems that htmx cannot handle or handles only with difficulty, and I'd be curious how that would work out.


I consider this an alternative development approach, neither better nor worse than what it is trying to replace. The winner among recent approaches, imo, is Remix Run.


For literally every developer on the planet who is not an expert JavaScript developer—or who doesn't wish to be—this is undoubtedly a better development approach. Saying "Remix is the winner" doesn't really move the needle. It's still JavaScript, still React—just with some work pushed to the server instead of the client.

The contract of the web is that while the client-side language in use may not be up to the discretion of the developer (recent evolutions in WASM notwithstanding), the server-side language/framework/OS/etc. can be literally anything which speaks the protocol of HTTP/HTML/etc. JS frameworks for years have repeatedly broken that contract with the claim that we must accept it for…well, "reasons". If it turns out most of those reasons are bupkis—more a matter of preferences than requirements—then we must stop breaking the contract. JS as a server-side language should _get in line_ alongside all the other languages out there.

Why privilege it if we don't need to?


> It's still JavaScript, still React—just with some work pushed to the server instead of the client.

This is being too reductive of the design of Remix. It is closer to traditional web applications in handling form submissions. There's no client-side state management needed for those use cases.

Whether it's the "winner" or not, really depends on use case, but it is decently different than a traditional SPA.


Reducing a code base by 2/3rds isn't better than what it replaced?

At least in some sense?


IMHO this approach is better for most apps. Remix is bending the curve here. They make it extremely easy to progressively enhance a small part of your UI with all the power of React.

That said, using React on the server side and forcing you to run JS on the server is not my cup of tea...


If your React SPA can be replaced by htmx, I would say you've chosen the wrong technology from the start. Certain types of applications require an SPA architecture, but most simply don't.


Listing template dependencies as event names that can change underlying data seems very brittle (hx-trigger for favorite articles). You would need at least integration tests to get some confidence! Can't we infer the state used in a template and set up the triggers for the developer? LiveView gives a better DX in this case.


Deep inside the comments is a wise question. What happened to the user experience over high latency links? Across countries? Over high latency mobile links? Starlink latency anyone??

Not every situation has high latency, but it's worth asking what happens when moving from React to Htmx in those situations.


Conversely, a poorly optimized SPA may not even load in a reasonable time span.


That's great, I still remember how productive teams were in the jQuery times.

Angular and react slowed down development massively.

Nowadays teams are often busy fixing random stuff for days


I've used this approach a lot, although I prefer Unpoly to htmx. On my last project I paired it up with Alpinejs for the 100% client side interactions, leaving Unpoly for the use cases where I would have needed an API call in the SPA world.

I'm convinced 90% of the stuff we build out there should be built this way. It's so much easier and there's that nice feeling of not standing on tons and tons of overengineered tech. The last 10% on which I'd use React or similar are offline-first apps or things such as Figma, etc.


server-side rendering is ideal, but at the end of the day what matters these days is developer productivity. a better question is how long it takes to make something like gmail with htmx vs. say react. you might say most apps are not gmail, and you'd be correct. so then you say, ok what about something like wikipedia? easy enough. then you start adding all of this javascript and it becomes a mess.


Interesting you mention Gmail, that's one of the examples given in the Layers Tutorial for Unpoly, which is a batteries-included take on the basic ideas behind HTMX.

https://unpoly.com/tutorial


Use the right tool for the job. Obviously Gmail is a much better fit for React/Vue/etc than HTMX. Gmail is the perfect use case for an SPA but most web apps don't need to be an SPA.


> Gmail is the perfect use case for an SPA

Is it really, though?

It's a paginated list of emails, some drag-and-drop functionality, buttons for some ajax functionality.

It works reasonably well, but it doesn't need to be as JavaScript-heavy as Google Docs or Maps.


Viewing your emails offline is pretty important.


And being able to use Gmail (search emails etc) while writing an email too


I may have missed it in the video, but where exactly do they store all the "client-side" state? Postgres? Redis? Sessions?


There is no client-side state, that’s the whole point.


There is always client-side state. The dropdowns, for example, have selection that affects the entire app. That selection must be stored in a state which will affect the reloads triggered by changes in the dropdowns.


Well, sure, the value of an input is in its value property, but you build your stuff in such a way that you don’t access it unless you absolutely have to.


Did you actually watch the video? The things are built in such a way that there is global state shared among components.


I'm the one who gave this talk, and I can assure you there is no such thing in our code. htmx just enables us to fire some JS events and react to them by triggering AJAX calls then replacing some <div> with some HTML fragment. No state management, just a hook system.
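
A minimal sketch of that kind of hook (the event name and endpoint below are hypothetical, not the actual code from the talk):

    <!-- Somewhere in the app, plain JS fires a custom event on the body. -->
    <script>
      document.body.dispatchEvent(new CustomEvent('filtersChanged'));
    </script>

    <!-- This div listens for that event, fetches a fresh fragment from the
         server and replaces its own content with the response. -->
    <div id="results"
         hx-get="/results"
         hx-trigger="filtersChanged from:body"
         hx-swap="innerHTML">
      ...server-rendered results...
    </div>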


Ok, then I've explained myself poorly. I see that there are both facet filters and favorites on the page, both of which affect what the rest of the page shows. In my mind, that's client-side state. It doesn't have to mean that its managed with JavaScript, but the state does exist; its changed any time the user makes changes into any inputs in the browser. Furthermore, those changes together seem to affect the rest of the page, if I'm not mistaken?

My question was where is the favorites (and facet) state stored. Is it in "html inputs", in which case, I suppose they are included in the requests somehow later? (perhaps via `hx-include`). The answer could also be that e.g. favorites are permanently stored on the backend...

Additionally I was also wondering what htmx can do in more complex cases, like e.g. a "sort direction" button, where you need to set the sort column(s) and direction(s) of columns. It feels like it's really easy to exit the htmx comfort zone, after which you have to resort to things like jQuery (which is a nightmare). Or perhaps web components, which would actually be a nice combination...


I don't see facet filters and favorites as "client-side state": to me it's "application state", changed by a user interaction. And you're right, it's related to how the state is stored.

As you anticipated, favorites are stored in a database on server-side, so that makes "show me my favorite items" or "show me items related to my favorite articles" the exact same feature as selecting an option in a facet filter.

The state of "I have selected options 1 and 2 in this facet filter, and option B in that other filter" is simply stored in... the URL. And this is why I think it's "application state" rather than "client-side state", and this is why the hypermedia is great IMO: this whole search+facets+favorites+sorting feature becomes nothing more than a <form> with hidden inputs, generating GET requests which URLs are put in the browser history (keywords search, selected options from facet filters and sorting are put into querystring parameters). And that's great, because it happens that one of our features is to send our users custom e-mails with deep links to this the UI, with facet filters pre-selected. All we have to do is generate links with querystring parameters pre-configured, and the user directly gets to a screen with pre-selected facet options, sorting, etc. To me, such behavior cannot be called "client-side state management".


I did watch the video, iirc he literally says “is this client-side state I have to worry about? No.” multiple times. When the user changes something that affects the UI in multiple places, all the necessary fragments are fetched from the server and swapped in by htmx.


You still have to consult the state of multiple facet dropdowns, as well as the user's personal list of favorites at the same time to get the correct response, don't you?


As I said in my other answer: our facet filters are nothing more than hidden inputs in a form. So nobody consults "the state of multiple facet dropdowns", except htmx when it generates the URL of its XHR call. Everything else (filtering items according to querystring parameters, fetching user favorites, etc.) is done on server-side.


I was expecting something that would blow me away... but honestly that UI is not super sophisticated. You could solve that with vanilla or jQuery. Obviously React (or any of the modern libs/frameworks) would be overkill for that.

IMO this video reinforces the idea that for simple stuff I'd personally rather just use jQuery (or rather CashJS) or even vanilla.


That's exactly why HTMX was a good fit for this: it covers the kind of interaction patterns you'd find yourself writing in vanilla JS where React is overkill, except without having to write all that JS.

Though I don't know why you'd bother with jQuery these days unless you had to deal with ancient browsers: venerable as it is, it's obsoleted itself as just about everything it did is now covered by browser APIs.


> is now covered by browser APIs

I've never understood that argument. You could always do what jQuery does with vanilla JS. How could it be otherwise?

The point of jQuery is productivity. Even today, a jQuery-style API saves you from writing a lot of code.


The biggest boon of jQuery back in the day was that it abstracted over all the browser variations, which were far more numerous at the time.

Also, many APIs simply didn't exist (e.g. querySelector).

Now that browsers are much more standards-compliant and there are much better APIs, it feels like the rationale for jQuery has shrunk to pretty much zero.


No it doesn't. Just about everything jQuery does is now a browser API these days, so you no longer save time using jQuery vs vanilla JS.

The only argument for continuing to use it is that you're used to it and not used to the modern APIs that have absorbed jQuery's functionality. It's just a slower way of doing the same thing.


htmx is deceptively powerful for how easy it is to use.


As I was reading https://htmx.org/, I kept thinking "Oh crap, yet another framework ripping off intercooler/alpine. Why?". Then, when I got to the bottom, I read that it's a rebrand/rename of intercooler, which gives me a lot more confidence/trust in the project. I would recommend putting the "htmx 1.0 = intercooler 2.0" at the top of the htmx website, similar to the intercooler website.

Note: while the htmx homepage is short, I spent about 30 seconds reading/thinking about each "motivation" bullet point, and my monkey brain is very content jumping to conclusions about the project at a much faster pace, :).


I believe HTMX is also a rewrite (at least to some degree) of Intercooler, as it uses vanilla JS instead of jQuery.


Loved the presentation, and it prodded me to revisit the latest state of htmx and Django.


One reason I prefer doing React is that retrofitting existing React stuff for React Native is better/easier than embarking on React Native from scratch.


I would argue that most “apps” built with React Native don’t need to be apps at all, they are better as websites.

The majority of these apps are about placing the existing product into the App Store for discoverability, and encouraging users to be “locked” into the app at the expense of a worse customer experience.

A web app, in a browser, automatically has support for opening multiple pages as multiple tabs, bookmarks, sharing pages. You can’t do that with 99.9% of apps.

I will also argue strongly that, again, React Native is unnecessary even if you do want to package a native app. WebView-based apps perform as well as a React Native app, and most of the time with significantly less additional code. It also forces you to up your "mobile website" UX. In general, the minute you build a mobile "app", the mobile web experience gets neglected.


How would we even know what most React Native apps are like to… know this?


htmx is a great library. I can see a back-end-for-front-end pattern where a thin 'UI' layer emits the html/js/css based on orchestrating calls to REST/GraphQL APIs - where those already exist, or where having an API makes sense for native apps on mobile for example.

To serve html to htmx sites closer to the caller/user, a developer can run the backend on something like https://controlplane.com (disclosure - I run Control Plane) where a request from Australia is served from Sydney and a request from New York is served from aws-us-east-2 for example.

In short, htmx looks great because of the simplicity. I am sure it will be used by a lot of developers. Nice work!


Doesn't server rendering increase load on the server? I like to off-load as much processing to the user's browser as possible. We have ~1 million users and I like to think that when we off-load a lot of processing to the browser, we basically have 1 million servers for free.


Consider this: you're doing server side rendering whether it's JSON or HTML fragments you're sending across. They're just different serialisations of the same data. The problem that SPAs were meant to fix was that each update of the page meant a full rerender under the classic model rather than just rerendering those bits that had actually changed.

You're still going to be able to take advantage of those client machines to do rendering, but the difference is that with HTMX, you skip serialising JSON, so your clients aren't stuck doing unnecessary rendering. The load on the server is barely different, but the client has less to do.

Now, if you're doing an awful lot of data manipulation on the client, that's a different story.


Rendering HTML doesn't consume a lot more than rendering JSON, especially since with htmx you'll render small fragments... Anyway, the bottleneck is most of the time the database.


Hi! Sorry for the delay. I'm the one who gave this talk.

Yes, rendering HTML on server-side increases server load, for the simple reason that the server-side templates contain some display logic. But:

- Most SPAs I've seen generate useless server load: either by fetching too much information, or by fetching information too often. And that's not because SPAs are a bad idea per se, it's because many small companies have very small teams, and very small teams just don't have time to build a custom-tailored API ("backend-for-frontend"). We chose jsonapi with Django-Rest-Framework, which was crazy-easy to implement from the back-end perspective (as we had many other challenges), but which made the front-end developer implement twisted stuff on the client side, like prefetch-on-hover, or plugging react-query with crazy refresh settings generating hundreds of API calls when only 2 or 3 would have been enough. At the end of the day our server load is not higher now. Each request costs a little bit more, but there are a lot fewer of them.

- Another thing is: the idea of delegating template processing to the client may seem good from a wallet perspective. But if you also think of the environmental impact of what we do as developers, you might notice that many people get a new laptop on a regular basis just because some applications are more and more CPU- and memory-greedy. And when you consider that about 80% of the environmental impact of the digital industry is generated by building and shipping new terminals, it might make you realize that being part of the solution implies reducing the amount of load you ask of your users' machines. And yes, this implies that your company accepts reducing gross margin to take a very small action in the battle for a cleaner industry.


It is quite nice to render in the browser, but as the number and variety of clients increases, the application may become slow for some of them. And that's where the main problems start.


Expected this to be more interesting than a maybe fancier version of turbolinks. Am I missing something?


Hi there ; I'm the author of the talk linked in the article.

Technically you're not missing anything: htmx is nothing more than another turbolinks, maybe more flexible and easier to learn.

What you might be missing is the non-technical implications of these new tools. The idea behind the talk (and behind htmx, and behind turbolinks, or unpoly) is to prove that the usual arguments for Javascript application frameworks are just not valid for 90% of use cases. And *that's* a complete game-changer, even an industry-changer.

Because since 2016, every small-and-not-super-rich company that wants to create a rich UX on the web is told to hire at least 2 developers: one "front-end" (i.e. "JS"), and one "backend-end" (i.e. everything else, from API to hosting through domain stuff and user data). Or one superman with both back-end and React skills, which is, IMO, almost impossible.

From what I've seen, what businesses need is, indeed, 2 devs: one "back-end" (i.e. workers, databases, user data, domain stuff, and even hosting), and one "front-end" (i.e. "the website", from DB queries to CSS). One person should be enough to address this second scope, even with complex UIs and rich UX. And as this is almost impossible with Javascript application frameworks (because they require a lot of work), it becomes possible again, like in 2008, with htmx/hotwired/unpoly (and without the spaghetti code we had in 2008).

One more thing: of course the idea was *never* to do JS-bashing, only people who are too tied to Javascript and not caring about tech cost-effectiveness would see htmx as a thing for JS-haters. In my talk I actually show some Javascript code, because it's useful for handling client-side-only stuff like a custom dropdown, a modal, etc. The whole idea is to put Javascript back at its place: pure client-side advanced interactions.


The htmx guys self promote here all the time. Their selling point is "no js" for slightly interactive demo apps.

10/10 times it's just them openly bashing the rest of us for not using their tech.


He seems like a reasonable fellow, and I've never seen him bash anybody, even people openly hostile to his library.


I don't know the guy but every htmx thread is filled with js bashing. It's like their main selling point.

This technology is not great for complex GUI. You can downvote even more now.


I don't have enough karma to downvote, but thanks for your permission.


Is this and Turbo etc. inspired by what ASP.NET MVC did with Partial Views 10+ years ago?


What is the best framework to build a high performance web application on? Specifically for building a UI to work with large sets of tabular data similar to a spreadsheet, e.g thousands of rows. Is Htmx up for this task?


A spreadsheet is probably not amenable to the hypermedia approach because of the complicated inter-dependencies between arbitrary parts of the screen. You don't want to gate those updates on a hypermedia exchange.

I would look at writing either a custom solution or one of the big reactive libraries like vue.js for something like this.


Totally unfashionable answer:

I use Rails 6.x and datatables.net.

The system manages call-logs for a call centre with WFH staff. It's doing thousands of calls a day.

The tables are 'endless' pagination reporting tables.

It's hosted in Sydney, AU. And used by staff around the world. It has the advantage of being extremely robust on the dodgiest of old PCs with poor internet connections.

So not fashionable, but it's super reliable and has a well documented implementation path!


Your solution looks like the right way to do it to me. At least in your case.


this repo keeps track of various benchmarks on a data grid type app: https://github.com/krausest/js-framework-benchmark. Eventually you may need to implement "virtualized scrolling" (e.g. only rendering a subset of the total table to the screen at a time), which can be hard to make seamless. possibly the benchmark is "too much info" to make a good choice, but it is an interesting resource.


Jane Street did something like that, incr-dom IIRC. They have very high performance needs since they do HFT.


If these rows are even mildly interactive, I’d use MobX + React.


Check AG Grid


Can anyone explain how htmx (or similar) would work for an electron app?


Wouldn't it be nice to have an actual link to an example of the same site written in React and Htmx?


back in the days, I really liked https://malsup.com/jquery/taconite/, which had a similar approach (but using xml snippets sent from server over ajax)


I can replace all the iframes in my legacy apps without changing any logic!


I recommend doing another pass and reducing it all to plain HTML.


I don’t understand why this effort was undertaken vs migrating to NextJS and accomplishing the same thing in half the time without needing to ramp up more Python.


Because migrating to NextJS would have meant doing a rewrite of the entire backend which was already written in Python/Django and running in production. Migrating to htmx was much less risky.


NextJS can use an external API? It doesn't need to be fully full-stack.


So then add Next.js when we're already using another backend service. The idea is to reduce complexity, not the other way round.


I don't think the point here was to show how quickly they could migrate to another framework, but rather to show that HTMX is powerful enough to use for things like this.

The point seems to have been to show how you don't need to use React to build applications like this, so migrating to NextJS wouldn't have made any sense from that perspective.


That would make sense if this was a hobby project but it’s not.


Perhaps they are better equipped to write more Python, less JavaScript.


The hoops people will jump through to avoid learning HTML/CSS/JS. I say this as a Python/Django dev who resisted learning web tech for 20 years: It's time to just learn this stuff. You can slap on a mound of band-aids and watch the wound bleed forever. Or, you can go in, get three stitches, suffer a little pain, and let the healing process begin.


I think this take does yourself a disservice: htmx is an extension of HTML and, in general, of the hypermedia model, and it is this model that should be contrasted with JSON data APIs.

I think that you should learn JavaScript, and I certainly think you should learn HTML(!!!) and CSS. But I also think you should consider how much more can be achieved in a pure hypermedia model with the (relatively small) extensions to HTML that htmx gives you.

I have a collection of essays on this topic here:

https://htmx.org/essays/

Including an essay on how I feel scripting should be done in Hypermedia Driven Application:

https://htmx.org/essays/hypermedia-driven-applications/

There is more to all this than simply avoiding JavaScript.


I’ve used htmx.


I'm sure you have.

Regardless of that, the idea that you learn htmx because you don't want to learn JavaScript is simply incorrect.

Rather, it's how you use JavaScript that should be considered.


JSON APIs are usable beyond the browser and using htmx feels like using Django+jQuery 14 years ago. I'd rather not go back to that.



Passive aggressively posting links to overlong tracts won't change my mind. I've read all of it and I fundamentally disagree with it. I'm happy to be done with this.. Unless, of course, you feel that you need the last word? Or, can we let this go and move on? Yeah?


All of it? ;)


Have I read what you’ve posted in this thread? Yes. And more when I was learning to use htmx. I gave it a good-faith effort and I’m completely unconvinced. If we were having this conversation ten years ago, I might be excited. Today, not so much.


The problem is that you don't "learn Javascript". The problem is that you need a whole ecosystem to do anything useful and that ecosystem is highly fractured and never seems to converge.

One of the bullet points was "They reduced their total JS dependencies by 96% (255 to 9)". That's an enormous support burden that simply vaporized.


'The problem is that you don't "learn (Python|Rust|Ruby|C++|Go)". The problem is that you need a whole ecosystem to do anything useful and that ecosystem is highly fractured and never seems to converge.'

My virtualenv has close to a hundred deps. Tried packaging anything in Python lately? How about type checking? It's the same everywhere, the grass just looks patchier when it's not a park you're used to going to.

There are over seven billion people on this planet, convergence of opinion is the exception.


If you are deploying to Django, you've likely converged to a specific set of Python dependencies that have been pretty static for close to a decade.

I'd be surprised if my Django virtualenv has ever had more than 20 dependencies.


The whole of the Django community has not converged on the same deps. Same with JavaScript. With Python/Django there's uWSGI vs. Gunicorn vs. etc, etc, etc. Then there's ASGI with Uvicorn vs. Daphne vs. Hypercorn. That's just running your code. Nothing is one-size fits all.


TLDR:

- Took 2 months (21K LoC, mostly JavaScript)

- No reduction in user experience

- Reduced LoC by 67% (21,500 LoC to 7200 LoC)

- They increased python by 140% (500 LoC to 1200 LoC), good if you prefer python to JS

- Reduced JS dependencies by 96% (255 to 9)

- Reduced web build time by 88% (40s to 5s)

- First load time-to-interactive was reduced by 50-60% (from 2-6 seconds to 1-2 seconds)

- Much larger data sets were possible than react could handle

- Memory usage was reduced by 46% (75MB to 45MB)

These are spectacular numbers that reflect that the application in question is highly amenable to the hypermedia approach.

I wouldn't expect everyone to see this level of improvement, but at least some web apps would.


I think the value of markup-based templating approaches such as HTMX and SGML comes from enabling content authors to create dynamic document-oriented sites without having to deal with dangerous tools such as JS and the endless ways to shoot yourself and your visitors in the foot (through e.g. injections and other security issues). The difference between HTMX and SGML being that HTMX integrates templating as a markup vocabulary extension, while SGML brings syntactical templating at the markup declaration level. A developer, OTOH, might choose JS-heavy approaches to avoid limitations and learning new syntax/concepts, and to re-use existing knowledge from/to other projects. I don't know that speed of development is of primary concern.


> Much larger data sets were possible than react could handle

I find this one difficult to believe. I'm not calling BS necessarily, but I doubt it applies in the general case.

I have a Django app that's server-side rendered, the largest page is very large (~200kb of content). Django on a low-tier VPS took about 500ms just to render that page. Django templates aren't faster than React. And Python isn't faster than JS.

If they're seeing performance increases I'd guess it's either that they're being more judicious about their queries (pulling only what they need from the database instead of filtering client-side), or the React app had a lot of complexity and they simplified the UI when re-writing.


I’d be a little surprised if rendering that same Django page as a server-rendered React app wasn’t even slower.

Historically, server-rendered React has been painfully slow compared to Django (I’ve been doing SSR with React since 2014, using Django since 2006, experienced the pain first hand multiple times). Usually at least an order of magnitude slower. That said, I haven’t benchmarked in a while, perhaps the worst of it has since been addressed.


Speed is the big one for me. 2-6+ seconds is insanity for anything.


Speed is definitely a big one.

I also am glad to see that everyone on the dev team became full stack developers, because I think the back-end/front-end split is often detrimental to development velocity. It's often better when a developer can fully realize an entire feature, with no front-end/back-end friction.


Oh for sure. For me programming is a hobby, so I can only get something made if I do it all.


I hear you, but YouTube takes 6+ seconds for me to load and it does not seem to hold them back. For most, though not all, optimizing page load time is probably time best spent elsewhere. This is in no way to impugn htmx, because with htmx you seem to kill many birds with one stone.


Youtube is kind of unique in that nothing (currently) even comes close to replacing it for the average youtube user.


Details matter, as they learned when making things faster made their metrics slower - https://blog.chriszacharias.com/page-weight-matters


I would argue that 1-2 seconds is not impressive either.

It's like seeing people breaking rocks with hand tools being impressed with a bigger mallet.

I used to aim for 15 millisecond cold load times, which is apparently unheard of these days even for front pages with entirely static content.


I'm with you on what's impressive and what isn't. I've done considerable work on page load performance engineering in the past, getting times down to the low single-digit milliseconds as you would like while maintaining high traffic levels. I know how to make every part of the system work together to minimise response and rendering latency, which is nice for people with suitably low-latency connections, and for APIs that respond nicely in client-side applications.

Unfortunately, for myself I can't even get a ping response from the ISP upstream router in 15ms, let alone a static page over HTTPS.

None of my internet connections has sufficiently low latency - neither home nor office.

HN takes 400-600ms to load, but that's understandable due to physics. Wrong country.

I just loaded news.bbc.co.uk, which is in my country and is also well connected, and saw that DNS resolution took up to 400ms, and TLS setup took up to 650ms (though not both at the same time in a single request). Total page load time was about 2s.

Those numbers seem unnecessarily high on this connection. But 15ms is too optimistic: the network latency isn't low enough, even for a small static page.

There are a lot of people in a similar situation, living with connection latencies you would consider high, but it's all we can get.


15ms to who? I’ve never had that kind of latency on a cold connection. My pages have an LCP of around 600ms, and it’s hard to push it much lower because even static pages on a CDN end up taking 400ms to connect and download.


15 ms to anyone in the same city or on the same local network in an office setting.

50 ms to anyone in the same country, ideally lower.

Global reach is a different problem because of physics.

However, front pages of web sites tend to be largely static and can be staged in various geo-distributed regions. In other words, distributed via a CDN.

> even static pages on a CDN end up taking 400ms to connect and download.

Only if you stuff them full of megabytes of Javascript and pull down megabytes of JSON in order to display that static content.

The fact that my comment -- a factual statement about real-world performance I've achieved regularly -- is voted down and your off-by-an-order-of-magnitude reply is voted up speaks volumes about the state of the industry.

It's like a bunch of fat people being flabbergasted about the mere concept of mountain climbing. With what... your legs!? Up there!? Madness!


My site is a static page hosted on a CDN with less than 80KB of JavaScript.

If you’re testing local network times, you’re just fooling yourself. None of your visitors are seeing that time, so it’s irrelevant.

What’s your URL? I’d love to throw it into webpagetest.org and take a look.


Well yeah, but start trimming with the biggest wins first.


This again? From the first page of the documentation:

    With that in mind, consider the following bit of HTML:

    <button hx-post="/clicked"
      hx-trigger="click"
      hx-target="#parent-div"
      hx-swap="outerHTML"
    >
      Click Me!
    </button>

We've been through this back in the Angular days (and even before that: remember Knockout?). Implementing your template language as DOM attributes does not make it HTML. Not to mention the examples using "hyperscript", aka a "human readable" DSL for DOM manipulation and event handling also stuck in HTML attributes.
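
(The exchange that snippet drives, for reference: clicking the button POSTs to /clicked, and whatever HTML the server returns replaces the element matched by hx-target, per hx-swap. A minimal sketch of a matching endpoint, assuming a Node/Express backend and made-up response markup; any server stack that can return an HTML fragment works the same way:)

    // Sketch only: a hypothetical /clicked endpoint for the snippet above.
    // htmx issues a plain POST when the button is clicked; the response is
    // an HTML fragment (not JSON) that htmx swaps in for #parent-div.
    import express from "express";

    const app = express();

    app.post("/clicked", (_req, res) => {
      // No client-side templating or state: the server returns the new markup.
      res.send('<div id="parent-div">You clicked the button.</div>');
    });

    app.listen(3000);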

If you are a Python team, sure, avoiding JS is a good idea, and tools like htmx, which let you pretend you're not working in multiple languages at the cost of having to memorize a massive API implemented in DSLs bolted onto DOM elements, are fine. I've worked with these kinds of libraries in the past, and once you get over the knowledge gap they can be amazing 80% solutions and solve most of your problems.

If you are either a "full-stack" JS (i.e. server + client written in JS) team or have more than one person specifically working on the frontend, this can just as likely be a massive footgun because you're trading a widely used, well-understood tool with a massive community for a somewhat niche one with a much smaller (but hopefully more dedicated and helpful) community.

Sorry if my assessment of htmx comes across as sarcastic but TINSTAAFL and my criticism would equally apply if the talk was about moving from Angular or Vue. If having to write JS is a drain on your team's resources and you can't change that, yes, try moving away from JS even if it means going down a less travelled path.

EDIT: To be clear: one of the early selling points of React was "it's just JavaScript", referring to how most comparable tools would require you to learn their DSLs to do basic things like a loop. React allowing you to actually use a JavaScript `forEach` or build an array in your render function instead of having a `data-react-foreach` attribute or something like that on an element was a breath of fresh air as tools like AngularJS or Knockout required you to use those and were extremely inflexible because of it (e.g. if they wanted to support filtering they'd have to bake it into the DSL whereas in React you could just do an `if` or `.filter`).
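
(As a toy sketch of that point, with made-up component and data names: filtering and looping in React are ordinary JavaScript expressions inside the render function, not features a template DSL has to provide.)

    import * as React from "react";

    type Item = { id: number; name: string; hidden: boolean };

    // Plain .filter/.map in the render path; no DSL attribute needed.
    const VisibleItems = ({ items }: { items: Item[] }) => (
      <ul>
        {items
          .filter((item) => !item.hidden)
          .map((item) => (
            <li key={item.id}>{item.name}</li>
          ))}
      </ul>
    );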


Htmx is not simply Angular-style directives with client-side templates again. It is, rather, an extension of HTML as hypermedia. That's the big difference between htmx and most other front-end libraries.

There are a number of essays on this difference and its ramifications here:

https://htmx.org/essays


Frankly, that page in itself is the reason my comment's tone is so hostile.

There are fewer "essays" on that page than "memes" which just seem to exist to ridicule frontend frameworks or smear JS in general (which is ironic given that JS itself is a "twenty years old technology", which at least one of the memes frames as superior). I put "essays" in scare quotes as the first "essay" is literally the OP article, which is primarily a video of a conference talk. The rest of the collection mostly makes philosophical arguments that wax poetical about web architecture and mostly don't reference htmx by name so I can't be bothered to dig through all of them to figure out what you're trying to tell me.

"An extension of HTML as hypermedia" is not a description of what it does, it's a grandiose sales pitch. I'm familiar with the original REST paper and a lot of the philosophizing around HATEOAS and hypermedia.

It's not an "extension of HTML", it's a browser runtime implemented as a JavaScript library (assuming they didn't use WASM) that executes a declarative DSL in the form of non-standard HTML attributes (like Knockout, AngularJS 1, etc) and performs XHR requests to substitute fragments (like PJAX).

I'm genuinely trying to understand what makes this novel or interesting but it's hard to tell between all the obnoxious dismissiveness towards frontend frameworks (and JS) and the grandiose use of academic buzzwords.

EDIT: Thank you for pointing to that page, though, because at least it demonstrates that HTMX is the successor to Intercooler.js, a JS framework, which has a much less grandiose description of itself that doesn't need to appeal to philosophical blog posts or shit on JavaScript to make its point: https://intercoolerjs.org/docs.html#philosophy

In fact, this Intercooler essay explicitly names PJAX as one of the examples of the tradition it tries to build on (while trying to stay true to the principles of HATEOAS and REST).


it's always a tough balance: the reality is that htmx is hostile (or at least passive-aggressive :) towards JavaScript, as it is being used today.

in general, I try to keep things positive and emphasize hypermedia over htmx, since it's the conceptual idea that I think is more important than htmx's implementation of it (libraries like unpoly or hotwire are also great options.) I do think htmx is more of an "extension of HTML" than those two, in that it is lower level and requires more work (or, if you prefer, has less magic) than them.

the memes are what they are: it's the internet and I've gotten plenty of shit over the last decade from SPA partisans, so, at times, they are going to be a little punchy. I try not to take things personally and laugh about it all: the situation is hopeless, but not serious.

the philosophical turn after I created htmx (really, intercooler 2.0) reflects that I have, over the last decade, developed a better understanding of hypermedia/REST/HATEOAS and why intercooler.js/htmx is different than SPA-talking-to-JSON-APIs. The essays certainly aren't for everyone, but I enjoy writing them and giving an alternative (and, I would argue, originalist) view of how web development can be done.

tone is always hard on the internet and, broadly, I like Churchill's take: "I like a man who grins when he fights."


The other reply (I think by the creator of HTMX) is right: HTMX is not a DSL for DOM template manipulation. But I want to address your main point about DSLs for DOM templating.

I completely see the appeal of the "it's just JS" approach that JSX and React take. But there is a massive advantage to a DSL: the compiler can statically analyse the template and compile it with an understanding of what can change and what can't. This allows the compiler to make massive optimisations that JSX templates cannot easily match. To achieve the same level of optimisation with JSX, you generally have to code it in yourself.

Whether you go for an Angular/Vue-style attribute-based DSL or a Handlebars-esque DSL as with Svelte, I don't really mind; they both have advantages and disadvantages. But what they allow you to do over JSX is enormously misunderstood by the majority of React devs.

There is a good overview of this in the Vue docs here: https://vuejs.org/guide/extras/rendering-mechanism.html#comp...
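
(As a toy illustration of that difference, not actual Vue or Svelte compiler output, just the shape of the idea: a compiler that has seen the template knows that only the dynamic binding can change, so it can skip rebuilding and diffing the static structure on every update.)

    // JSX-style: every update rebuilds the full description of the view,
    // and the framework diffs it to find out what changed.
    const renderJsxStyle = (count: number) => ({
      tag: "div",
      children: [
        { tag: "h1", children: ["Counter"] },        // static, but rebuilt and diffed anyway
        { tag: "span", children: [String(count)] },  // the only dynamic part
      ],
    });

    // Compiled-template style: the static structure is created once and
    // each update patches only the node the compiler knows can change.
    function mountCompiled(root: HTMLElement) {
      root.innerHTML = "<h1>Counter</h1><span></span>"; // static part, built once
      const span = root.querySelector("span")!;
      return (count: number) => {
        span.textContent = String(count); // the only work done per update
      };
    }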


I know, this isn't a novel concept. There was a good talk at React Europe a couple of years ago about the spectrum of abstractions and the tradeoffs of template languages vs React's approach (in fact, it covers how this means some optimizations are simply unavailable to React): https://www.youtube.com/watch?v=mVVNJKv9esE

The question isn't whether DSLs or "just JS" are more powerful; the question is which tradeoffs they represent and which tradeoffs you prefer. It seems that HTMX is primarily based in the Python ecosystem, and as a former Pythonista I see the advantage of having a DSL over having to write JS, especially since the former is much narrower, which is what you want in this situation.


You are right about trading broad support for something niche, but...

if the tool is easier to understand, it is probably also less massive and easier to interact with or find workarounds for when needed than if you work on top of several layers plus huge dependencies. At least that is my experience, and it is why, given the choice and when it makes sense, I would choose htmx over React.

It is easy to understand and a thin enough layer that you can even figure out what it is doing under the hood. I cannot say the same about React, which is much more complicated. Sure, it has good use cases. But it is more difficult to understand and potentially drags a lot of deps in with it.

I see every dependency in an app as introducing some risk: the more of them, the worse. They can break, they can go unmaintained, and you can have a transitive dep that is compatible with one package but incompatible with another (this happened a lot to me with Node), and that propagates recursively. It can be hell, to the point that just making things work gets difficult.

So in the end, if you can do something with a couple of deps, some htmx and standard web stuff, and a bit of hyperscript or similar here and there, the chances that things will keep working for years to come are higher. Also, if something gets outdated, there is only a tiny layer to replace, so you can do it more easily.


Yes, it's a tradeoff. Picking up a DSL is probably faster and easier than learning an entire new language. But knowledge of that DSL is also not as readily transferable, and it's much harder to hire someone already familiar with a DSL than with one of the most widely used languages. Of course, in turn you probably want a much higher level of expertise in that language than would be necessary in a DSL you can learn (and, to some degree, master) in a fraction of the time.



