
I'm curious if anyone else is advocating for HTMX as a mechanism to simplify bloated front-end code bases.


I’ve been using it to great effect for the last year or so. It really is a breath of fresh air. In my case, I’m using a Django backend, but there are lots of successful folks using other backend stacks as well, so choose what you know best.

The book is a really quick and easy read. It’s pretty eye-opening how productive and effective htmx can be with so little magic.

https://hypermedia.systems/
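
To give a flavor, here's a minimal sketch of what the Django side can look like (the view and data are made up for illustration). An element like <button hx-get="/tasks/" hx-target="#tasks"> swaps the returned fragment straight into the page:

    from django.http import HttpResponse

    # Returns an HTML fragment, not a full page. HTMX swaps it into
    # whatever element hx-target points at, with no full reload.
    def task_list(request):
        tasks = ["write docs", "ship it"]  # stand-in for a real query
        items = "".join(f"<li>{t}</li>" for t in tasks)
        return HttpResponse(f"<ul>{items}</ul>")

In a real app you'd render a partial template rather than build strings inline, but the shape is the same: the server returns HTML, not JSON.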


I won't try to guess the industry trajectory as a whole. But I can speak from personal experience. I've been on teams that jumped on the single-page application bandwagon, React in particular.

It's been 5 years since we initiated that investment. Reflecting on it, I'm frankly amazed at the amount of technology we don't need. The standard type of UI we build really hasn't changed, but the time, code, and overall cost continue upward.

Looking back to 2018, we might have been collectively beguiled by the self-perpetuating marketing machine of single-page application technology. We've decided that unless there's a very clear and specific justification for writing a thick client on a web page, we're avoiding it going forward.

Tools such as HTMX are perfectly effective substitutes at far lower cost, at least for what we need.


In more cases than not, I've noticed the choice of a single-page app is itself pure overhead.

SPA technology brings some key advantages but also a whole new realm of cost and complexity. It's my experience that SPA popularity has convinced many folks to use it when they really don't have a practical reason to justify it.


Svelte is so insanely lightweight, I think it is a great counterargument to a lot of the SPA hate.

And honestly, most of the weight in modern websites comes from analytics and tracking tools. I've made insanely performant SPAs that work well on low-budget hardware. My personal phone is 5 years old; if code I write doesn't perform flawlessly on it, I'm not going to ship it to customers! Heck, my personal dev laptop is a 4+ year old $999 Costco special.

Well made React sites can run just fine on devices like that, and Svelte runs even better.

Also, SPAs scale better. I remember the days of website backends being slow, with page loads taking forever and massive amounts of HTML being generated server-side using slow scripting languages.

Sending a JSON blob of some fields to the browser and having the browser update some nodes makes a lot more sense than the old-school way of reloading the entire bloody page to change a couple of fields.


By choosing an SPA, you must choose dedicated static site hosting which is separate from your web application. You may already have this, but you may not. In most cases you must choose a framework for routing, and another for state management. You also commit to duplicating validation and security trimming logic on both the client side and the server side.

More often than not you will find yourself including hundreds of NPM packages as dependencies, which you must continually update and maintain. Unit testing on the front end also becomes a common requirement, which brings in tools like Jest and Enzyme. This complexity inevitably trickles into your build and deploy processes.

Perhaps in larger teams this is a burden you can absorb. In smaller teams, however, you start to see division of responsibilities: one person only knows the front end but not the back end, and vice versa. Common knowledge of the application as a whole can become fragmented.

Perhaps the day comes when you want to take a partial of a user interface hosted in a peripheral application and place it inside your web page. Because you have a virtual DOM, this is now a security risk. You must build a component which duplicates the user interface that already exists. If the user interface needs to be shared among many applications, you must build a common code base to host your components. You start shouldering the burden of maintaining component libraries instead of just HTML and CSS.

Again, this is all very general and hypothetical, but it feels worth pointing out some of the common implications that simply choosing an SPA can have in the longer run.

Plus, this is not an all-or-nothing sort of choice. For decades we have used Ajax to perform partial updates on a web page. Consider alternatives like HTMX as a comparison.


> By choosing an SPA, you must choose dedicated static site hosting which is separate from your web application.

No you don’t.

> In most cases you must choose a framework for routing, and another for state management.

I don’t understand this argument. React gives the developer this freedom by design. If you want a framework that has all of these decisions made for you, they exist.

> You also commit to duplicating validation and security trimming logic on both the client side and the server side.

I’ve been validating on the frontend for 15 years, long before I worked on an SPA. It has never been necessary, but it provides a better experience. If you don’t like this, you can still let the server do all the validation. There is nothing about an SPA that enforces client-side validation. And you’re wasting your time if you’re doing security filtering on the frontend.

> Unit testing on the front end also becomes a common requirement, which brings in tools like Jest and Enzyme.

“Grr, this paradigm allows me to test my code, I hate it!” Seriously, we’re now able to write unit tests which were previously impossible. How is this a bad thing? Also, Enzyme hasn’t worked since React 17; I now use RTL, which asserts on user behaviour. Super nice.

> Because you have a virtual DOM this is now a security risk.

What?

> You must build a component which duplicates the user interface that already exists.

How is this any different to a non-SPA? Regardless of technology you can’t just arbitrarily lift interfaces from unrelated applications and inject them into your application without a bit of work.

> If the user interface needs to be shared among many applications, you must build a common code base to host your components.

Again, how is this any different from a non-SPA? Your UI isn’t going to magically share itself between applications just because you don’t have an SPA.

I’ve worked on all types of applications and I don’t think SPAs should be the de facto approach, but I really feel you’re clutching at straws with all of your arguments.


Recently we went through an exercise where we built a simple to-do app using React and rewrote it using HTMX.

The functionality was identical between the two apps. The amount of tooling, code, and duplicative logic was massively higher in the SPA version because of all the fundamental things it demands.

Now if you really need an SPA for your requirements because you have an intrinsically complex front end, and you've mastered the hoops to jump through, good for you! There's nothing wrong with that. But there is something seriously wrong with building the same user interfaces we've needed for decades while the time, code, and complexity drastically increase for no justifiable reason.


> Recently we went through an exercise where we built a simple to-do app using React and rewrote it using HTMX.

React is boilerplate madness.

Do the same in Svelte.

I did a form heavy app in Svelte, literally took 1/5th the time it would have taken in React.

SPA fundamentally means that instead of refreshing the page, just the data needed to update what is on screen is sent down to the user.

Ideally, "send data about products on next page of search results" is less than "send all HTML needed to render the next page of search results."

Also, the backend ends up simpler: instead of trying to template strings together, the code can just worry about fetching and returning the needed data.

I am legit confused why people think generating HTML in some other language (Python, Ruby, etc.) is a good idea.

Keep HTML in the browser (easier to develop and debug!) and keep backend business logic someplace else.
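
As a toy sketch of what I mean (Flask, with made-up data), the back end stays this dumb:

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Stand-in for a real database query.
    PRODUCTS = [
        {"id": 1, "name": "Widget", "price": 9.99},
        {"id": 2, "name": "Gadget", "price": 19.99},
    ]

    # The back end only fetches and returns data; the browser owns
    # the HTML and updates the relevant DOM nodes itself.
    @app.route("/api/products")
    def products():
        return jsonify(PRODUCTS)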


When you have a team with predominantly back-end expertise, using a templating language they are familiar with plays to their strengths. MVC applications were written that way for over a decade. Perhaps it is a subjective thing, because I don't see any logical difficulty in a web page that exchanges partials instead of JSON. I programmed that way for over 10 years.

Svelte really sounds compelling from what you're telling me. I'll check it out. But unless it is a drastic simplification, it brings with it the fundamentals of effectively writing a thick client in JavaScript or TypeScript and all the things that come with it. React and Angular have left a very bad taste in my mouth. The time and code cost for building basic user interfaces should go down, not up. We should be spending less time talking about how to do something and more time talking about what to do.


> But unless it is a drastic simplification

95% of what you write in Svelte is just HTML. You then databind whatever you need using an obscenely lightweight syntax.

Svelte also has an optional SSR framework called SvelteKit that auto-creates REST endpoints for you and auto-binds stuff to them, but all that is optional and not needed.

My issue with backend HTML templates is that essentially you always have to know HTML + CSS anyway because browsers suck and they still have differences, so I always end up spending a ton of time fixing CSS and HTML issues. Having to fix HTML issues by way of the backend that then generates HTML feels like an unneeded abstraction.

Instead I can just... write HTML and CSS.


I won't touch upon most of the points (because those are highly situational), but I'll offer my opinion on the following:

>> By choosing an SPA, you must choose dedicated static site hosting which is separate from your web application.

> No you don’t.

It is true that you typically don't have to do that: you can just package the built assets of your SPA into whatever serves the rest of it, provided that you don't mind a monolithic deployment. I've seen many applications like that in Java (Spring Boot), PHP (Laravel), Ruby (on Rails), and so on; it generally works okay.

However, I'll also say that it's pretty great to have two separate folders in your project, "back-end" and "front-end" (or separate repos altogether), and to be able to open those in different IDEs, as well as to deploy them separately, especially if you want to create a new front end (say, migrating from OLD_TECH to NEW_TECH while maintaining the old one in parallel) or something like that. Otherwise you have to mess around with excluded folders for your IDEs, or, if you open everything in a single one, everything kind of mushes together, which can be annoying. Your build also cannot be parallelized as nicely, unless you separate the back-end and front-end compile steps from the package step, where you shove everything together.

Personally, having a container (or a different type of package) with Nginx/Apache/Caddy/... with all of the static assets and a separate one with the back end API feels like a really nice approach to me. In addition, your back end framework doesn't need a whole bunch of configuration to successfully serve the static assets and your back end access logs won't be as full of boring stuff. If you want to throw your SPA assets into an S3 bucket, or use one of the CDNs you can find online, that's fine, too. Whatever feels comfortable and manageable.

Now, whether an SPA is a good fit for what you're trying to build in the first place, that's another story. Personally, I like the simplicity of server-side rendering, but I also like the self-contained nature and the tooling/architectures behind SPAs (I'm more of a Vue + Pinia person than a React/Angular user, though those are passable too).


What backend tech do you like to pair HTMX with?


That's the thing. It doesn't really matter. It's sort of like asking what backend tech you pair with jQuery.


I wish there was a "works for Pentium III" label that would help indicate that the app's usability hits necessary minimums on a 1 GHz Pentium III computer. IMO that would be a good optimization floor for avoiding the hidden monstrosity of Electron apps and that type of stuff.

If your McCrud app can't be responsive on a baseline 1 GHz PIII with 1 GB of RAM, then there needs to be some sort of shame pushback. Moore's law is effectively coming to a close; there will need to be more optimization in the future.


Why Pentium III? That's nearly 25 years old. You couldn't run Windows 10 on such a processor, let alone a modern browser, and a $200 mobile phone would beat it in benchmarks. Surely you can have a higher floor than that.


The Pentium III was around half a gigahertz, and we were starting to get into multi-hundreds of gigabytes.

... that sounds small compared to today's specs, but IMO this is when PCs had plenty of horsepower to run "real" operating systems (32-bit preemptive multitasking), "real" browsers; 3D gaming was into its fifth year or so, etc.

So this wouldn't be a badge where you say "wow, we fit it into this impossibly limited device". The dirty secret of the PC business is that this hardware spec is more than enough for practically all productivity and browsing (and video, with hardware acceleration). Now, high-polygon, high-res, heavily antialiased games are another matter... but those have actual hardware horsepower needs you can quantify.

The amount of wasted resources from the year 2000 to now is stupefying. Intel and AMD love it! DRAM makers love it! But as an industry we have squandered the last two decades (and the last two decades of CPU improvement), right as gigahertz scaling disappeared; Moore's law is probably going to collapse under its economic weight, and Amdahl's law says parallelism won't save us forever.

So if I look at some software and wonder why this relatively straightforward app is chugging along on a PC that is effectively 10-50x faster than a Pentium III 500 MHz (8x-10x in clock speed, then massive improvements to cache, branch prediction, multiple ALUs, speculative execution)... something is wrong.


Google Maps ran like a dream on Pentium M systems back in 2005. Gmail was also smooth as butter.

Pentium M was a derivative of the Pentium III design.

Ignoring high-resolution image assets, there is no reason a website shouldn't degrade and be able to run on any machine faster than 300 MHz.


A "works on a KaiOS feature phone" label would be a relevant metric today, with goals similar to the ones you mention. They explicitly state in their docs that React will be too heavy for your app.


Is there some VM that allows limiting the CPU to performance similar to a P4? (A PIII is way too slow for my tastes.)

I imagine Linux would have a bad time running in it.


Pretty sure PIIIs beat P4s at a lot of benchmarks. :-D That's why AMD is around today.


Well, I was using AMD at the time.

AFAIK, the P4 fared badly on jump-happy code, but this was not common enough to be a problem in the real world when compared to a PIII. It was also a power hog that could barely outrun a snail if you didn't have proper thermal management, but that also doesn't mean the processor is slow.


I should have specified perf/watt. Pentium Ms came along and cleaned up compared to P4s; there was a fair bit of time when, excluding massive power-hungry desktop monsters, a beefy laptop with a Pentium M could easily beat an average desktop with a P4.

And IIRC pipeline stalls on the P4 hurt, badly.

Oh and RAMBUS, I had forgotten about RAMBUS. That also hindered the platform.


It's actually the opposite: MPAs are pure overhead. In theory SPAs are faster because they require a minimum of one user-blocking network request, while MPAs need at least one for each page. Everything else is up to the implementation. So if you are doing heavy performance optimizations, SPAs will always end up faster. However, that's not the full picture, and in practice there is a lot of nuance, but SPAs definitely have a higher performance ceiling.


Not for large DOMs. And for websites which don't require support for low internet bandwidth, this is optimizing for the wrong problem.


Network latency is your No. 1 bottleneck on every modern device; everything else is a distant second. Also, you can optimize everything else, but you can't make MPAs navigate without a network round trip.


MPAs still are faster, so SPAs must have another type of bottleneck.


This is very true. It's also why we have SvelteKit, Remix, Astro, and other frameworks that take a transitional app approach. They are server-rendered where it makes sense and client-rendered where necessary. Before, developers had to choose between a server-rendered website and a full-on single-page application; now there are great options that blend the two depending on needs.


Totally agree. Definitely a reason I avoided any SPA on findthca.com. I've never had a user complain that it needed one.


How would this work from a fault tolerance perspective? For example, the listening application happens to be offline while the database is inserting records. How would the application catch up?


If the listener is offline, Postgres will see that a subscriber is behind and mark more of its internal data as needing to be retained. This will keep certain maintenance tasks (e.g. VACUUM) from running. If VACUUM is not run for long enough, it can cause a catastrophic failure in your DB.

The application can catch up when restarted, if it retains the last WAL position. When it restarts, it can ask Postgres to start replaying from that point.
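
Roughly, with psycopg2 (the slot name is made up, and this assumes a logical replication slot has already been created):

    import psycopg2
    import psycopg2.extras

    conn = psycopg2.connect(
        "dbname=app",  # hypothetical DSN
        connection_factory=psycopg2.extras.LogicalReplicationConnection,
    )
    cur = conn.cursor()

    # The slot tracks how far this consumer has read, so Postgres
    # retains WAL from that point and replays it when we reconnect.
    cur.start_replication(slot_name="my_app_slot", decode=True)

    def consume(msg):
        print(msg.payload)  # stand-in for real processing
        # Acknowledge, so Postgres can recycle WAL up to this point.
        msg.cursor.send_feedback(flush_lsn=msg.data_start)

    cur.consume_stream(consume)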


It would catch up by reading the WAL files when it restarts; using this method, the WAL files remain on disk until they've been read.


Beware of subscribers being down: WAL files will fill up and then you'll lose messages.


Won't it just... write more WAL files?


Yes, until you run out of disk; it's happy to write as much as you can handle.

But disk isn't usually infinite.


Well of course, but I don't feel like this is a noteworthy limitation here; it applies to any form of queue that persists messages.

I had imagined the GP might have been insinuating something like: a configurable number of WAL files would be written, and then they'd be overwritten once full.


I mean, running out of disk is a danger for any persistence; that's not specific to using WAL.


Select N+1 is a fundamental anti-pattern of relational database usage. Reducing the latency per round trip doesn't change this fact.


I disagree. I think it makes a big difference.

https://sqlite.org/np1queryprob.html#the_need_for_over_200_s... talks about this in the context of Fossil, which uses hundreds of queries per page and loads extremely fast.

It turns out the N+1 thing really is only an anti-pattern if you're dealing with significant overhead per query. If you don't need to worry about that, you can end up with code that's much easier to write and maintain just by putting a few SQL queries in a loop!
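
For example, something like this (schema made up) would be a red flag against a networked database server but is perfectly fast against SQLite:

    import sqlite3

    conn = sqlite3.connect("blog.db")  # hypothetical database

    # Classic N+1: one query for the parents, then one per parent for
    # the children. With SQLite's near-zero per-query overhead, fine.
    for post_id, title in conn.execute("SELECT id, title FROM posts"):
        comments = conn.execute(
            "SELECT author, body FROM comments WHERE post_id = ?",
            (post_id,),
        ).fetchall()
        print(title, len(comments))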

Related: the N+1 problem is notorious in GraphQL world as one of the reasons building a high performance GraphQL API is really difficult.

With SQLite you don't have to worry about that! I built https://datasette.io/plugins/datasette-graphql on SQLite and was delighted at how well it can handle deeply nested queries.


Perhaps the overhead of chatty queries is diminished in this special use case.

But it doesn't change the fact that standalone database server processes are designed to support specific queries at lower frequencies. This is one of the main points of the SQL language: to load precisely the data that is needed in a single statement.

Relying on this as a design pattern would only scale in specific use cases and would hit hard walls as scenarios change.
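
To illustrate with the same made-up schema as above: a client/server database wants the loop collapsed into a single statement and a single round trip, e.g.:

    import sqlite3

    conn = sqlite3.connect("blog.db")  # same hypothetical schema

    # One statement, one round trip: the server plans the whole retrieval.
    rows = conn.execute(
        """
        SELECT p.title, COUNT(c.id)
        FROM posts p
        LEFT JOIN comments c ON c.post_id = p.id
        GROUP BY p.id
        """
    ).fetchall()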


What's the point of this article? An attention-getting headline followed by some bombastic claims, followed by walking them back in the next paragraph. It's like generating debate for the sake of debate.

I get the impression these things exist just to keep engagement numbers up.


Same reaction. This stuff is catnip for curmudgeons.


Author has 3 years in the industry, probably at the peak of the Dunning-Kruger curve.


I look at it differently. If vim is a useful tool in your daily workflow then it's probably worthwhile to invest time into customizing it and enhancing it with plugins.

The open source landscape of plug-in development is under constant evolution and it can be quite a time sink staying on top of the latest and best tooling.

I see SpaceVim as something similar to a Linux distribution. The maintainers do the work of selecting various packages, in version combinations that harmonize into a consistent system.

And just like any Linux distribution you can agree or disagree with the packages they choose or the default configurations it comes with.

I end up using a somewhat heavily customized SpaceVim that I personally like, but in my opinion the distribution is a very productive foundation.


I used to think the same thing until I considered that single page app development is really just a reinvention of thick clients.

When you look at an SPA as a thick client, state management is a natural thing, as it was in Java Swing, WPF, Windows Forms, and other stacks beyond my knowledge.


Agreed! I'm surprised at the comprehensive work going into researching the history of state management in this article while completely missing MobX.

What problem does MobX not already solve?


> What problem does MobX not already solve?

I’d say the two biggest hazards with the reactive/declarative style are cyclic dependencies in the data model and remembering history.

Tools like MobX let you write quite elegant code in the right circumstances but they are less helpful if, for example, you have a complicated set of constraints and the effects of changing one value in your state can propagate in different directions depending on what first changed to start the cascade.

This style also tends to emphasise observing the current state of the system, so if you need a history as well (for undo, syncing with other user actions via a server, etc.) then you probably have to write a whole extra layer on top, which is more work with this kind of architecture than for example if you are using a persistent data structure and reducer-style updates (as popularised by Redux within the React ecosystem).
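
For contrast, a toy sketch of the reducer style (in Python for brevity; the action shape is made up), where undo falls out of never mutating state:

    def reducer(state, action):
        # Every action returns a *new* dict; old states are never mutated.
        if action["type"] == "set_title":
            return {**state, "title": action["value"]}
        return state

    history = [{"title": ""}]  # initial state
    history.append(reducer(history[-1],
                           {"type": "set_title", "value": "Draft"}))

    undo_state = history[-2]  # undo is just an index into the history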


For a project that was basically an After Effects-like application I needed history but also very fast updates; I settled on mobx-state-tree, which gave me a way to preserve history. Worked great for that application, no regrets.

So there are some solutions available if plain MobX is not enough.


This is a really great explanation - I love MobX in many ways but the “observing the current state of the system” definitely has its downsides, which you’ve expressed very clearly!


All the cited frameworks solve these problems; the article is more of a survey. It’s a pity to miss MobX while discussing the proxy/mutation API in Valtio.


Came here to say the same thing.


If your team is comfortable writing pure Python and you're familiar with the concept of a makefile, you might find Luigi a much lighter and less opinionated alternative for workflows.

Luigi doesn't force you into using a central orchestrator for executing and tracking the workflows. Tracking and updating task state are open functions left to the programmer to fill in.

It's probably geared toward more expert programmers who work close to the metal and don't care about GUIs as much as about a high degree of control and flexibility.

It's one of those frameworks where the code that is not written is sort of a killer feature in itself. But definitely not for everyone.
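
To show what I mean by the makefile feel, a minimal sketch (the file paths are made up). A task runs only when its output target is missing, and only after its requirements exist:

    import luigi

    class Extract(luigi.Task):
        def output(self):
            return luigi.LocalTarget("data/raw.csv")  # hypothetical path

        def run(self):
            with self.output().open("w") as f:
                f.write("id,value\n1,42\n")

    class Transform(luigi.Task):
        # Like a makefile rule: skipped if data/clean.csv already exists.
        def requires(self):
            return Extract()

        def output(self):
            return luigi.LocalTarget("data/clean.csv")

        def run(self):
            with self.input().open() as src, self.output().open("w") as dst:
                dst.write(src.read().upper())

    # Run with: luigi --module <your_module> Transform --local-scheduler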


It’s worth noting that Luigi is no longer actively maintained and hasn’t had a major release in a year.


Toil is pure Python, but I'm not sure how the feature set compares: https://github.com/DataBiosphere/toil


Really interesting to see a bioinformatics tool proposed. I've worked in bioinformatics for over 20 years, written several workflow systems for execution on compute clusters, used several other people's, and been underwhelmed by most. I was hoping that Airflow might be better, since it was written by real software engineers rather than people who do systems design as a means to their ends, but Airflow was completely underwhelming.

The other orchestrator besides Toil to check out is Cromwell, but that uses WDL instead of Python for defining the DAG, and it's not a super powerful language, even if it hits exactly the needs for 99% of uses and does exactly the right sort of environment containment.

I'm also hugely underwhelmed by k8s and Mesos and all those "cloud" allocation schemes. I think that a big, dynamically sized Slurm cluster would probably serve a lot of people far better.


I did a proof of concept in Luigi pretty early on and really liked it. Our main concern was that we would have needed to bolt on a lot of extra functionality to make it easy to re-run workflows, or specific steps in the workflows, when necessary (manual intervention is unavoidable IME). The fact that Airflow also had a functional UI out of the box made it hard to justify Luigi when we were just getting off the ground.

