If Not SPAs, What? (macwright.com)
364 points by todsacerdoti on Oct 28, 2020 | 447 comments



I've felt this problem since the first time I touched Angular. It was just so much more complex and fragile without much actual benefit, unless you wanted to build a really interactive async application like Google Docs or Facebook Chat.

When SPAs became the norm and even static web pages needed to be built with React, development became more and more inefficient. I saw whole teams struggle to build simple applications, wasting months, while these used to be delivered within a couple of weeks by just one or two developers on proven, opinionated server-side frameworks. But that was no longer according to best practices and industry standards. Everything needed to be SPAs, microservices, distributed databases, Kubernetes, etc. These components and layers needed to be glued together by trial and error.

I am really happy that common sense is starting to return and more developers are starting to realize that integrated end-to-end frameworks are very useful for a lot of real-life application development scenarios.


> When SPAs became the norm and even static web pages needed to be built with React

I'm in a weird situation where I'm contracting into one organisation and they've contracted me out to another. The first organisation know me as a senior dev/architect with 15 years' experience in a niche domain. The second organisation see me as brand new and, despite paying an embarrassing day rate, are giving me noddy UI tweaks to do. Extracting myself is proving to be slow somehow.

Anyway, they wanted a webapp with a couple of APIs and nothing on the page but a button, the authenticated username, and a line of text. Clicking the button opts you in or out of a service, and the text changes depending on the state. The sort of thing people go to once, maybe twice.

I used a mustache template on the server side to populate the values and didn't even bother with any JavaScript: just an old-school form submission to the API when the button was clicked and a redirect back to the page.
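A minimal sketch of what that looks like, for anyone who hasn't done it this way in a while. The template, field names, and values here are all made up for illustration, and the `render` function is a toy stand-in for the mustache library (which also HTML-escapes values):

```javascript
// Server-rendered page: template filled in on the server, plain form
// submission, no client-side JavaScript at all. Illustrative only.

const template = `
<p>Signed in as {{username}}</p>
<p>{{statusText}}</p>
<form method="POST" action="/api/opt">
  <button type="submit">{{buttonLabel}}</button>
</form>`;

// Tiny stand-in for mustache.render(): replace each {{key}} with its value.
function render(tmpl, view) {
  return tmpl.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in view ? String(view[key]) : ''
  );
}

const optedIn = true; // would come from the backing service
const html = render(template, {
  username: 'jane.doe',
  statusText: optedIn ? 'You are opted in.' : 'You are opted out.',
  buttonLabel: optedIn ? 'Opt out' : 'Opt in',
});

console.log(html);
```

The POST handler then flips the opt-in state and responds with a redirect back to this page, so the browser's refresh/back behavior works for free.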

It was tiny, but obviously it was decided "we should be using a more modern framework" - code for React. It was the word "more" that got to me, as if there were an equivalent, dated framework I'd used. I didn't put up a fight, partly because I was new to the team and figured they were hot on React and I wasn't. Somehow they made a complete hash of it; they couldn't even figure out how to get all their inline styles (the only styles they used) working without help.

I guess it's just those classics: people want to learn the hot new thing as they see it, their managers are happy that they've heard a buzzword they recognise, and then everything becomes a nail for their new hammer.


It's interesting to contrast this kind of organizational behavior with the type where "management won't give us time to deal with technical debt". Though arguably using an over-complicated framework creates more technical debt, from a certain perspective this is the other end of the same scale.

It seems to me what we want is some kind of "Platonic ideal" where the extremes are bad for development:

  Management won't give us time to deal with technical debt/incorporate best practices.
  |
  |
  THE IDEAL
  |
  |
  Management wants the Hot New Thing all the time. Microservices are the state of the art in best practices, so microservices it is!
The best advice (IMO) in dealing with the top end of this spectrum is to frame technical debt and best practices in terms of whatever economic metric the manager cares about (e.g., "TDD will lead to fewer bugs, ergo happier customers, ergo greater retention"). But I wonder if the same framing can be used to encourage temperance in managers who dwell on the bottom of my spectrum.

I somehow suspect it won't work. The problem with those at the top is that they see the dev's proposition as an investment that will either not bear fruit or, worse, slow the team down; so they are incentivized to keep the status quo. Those at the bottom, however, start from the perspective that their idea will add value to the team, so they keep pushing for it no matter what. And telling them "Let's not do what Google does; we're not Google" is definitely seen as a devaluation of the team.

This has been a weird headspace to explore. I'd love to hear from others' experience on dealing with this.


IMHO the problem is that, reusing your scale, I typically see:

  Management that doesn't really understand tech and is afraid to try things
  |
  |
  THE IDEAL
  |
  |
  Management that doesn't really understand business (and sometimes tech too) and is focused on tech fashion
Surprise: the ideal is hard because it requires both tech and business experience to recognize how best practices and new tech could bring value in a specific context. The job is to make clients happy by solving their problems, with apps that are nice to use, bug-free, performant, maintainable, and evolvable, with a usually short time to market, and obviously at the best cost. Sometimes that equation is solved with a complex stack, architecture, and practices with hundreds of engineers, and sometimes with a few web pages, inlined CSS, a bit of vanilla JS, and a solo dev.


Every time that sort of thing has happened to me, it's been because there's some grand plan to build out more features that the people on the front line don't know about. The plan rarely materializes, but the idea that the foundation should be built in a way that supports it isn't completely stupid.


It’s not stupid, no, but a “supporting foundation” is largely just a seductive metaphor. It says, “Clearly software is like a building. Every building needs a solid foundation.” It doesn’t inspire engagement with other metaphors, like considering software to be a tree that must be grown incrementally and as a product of dynamic forces. It doesn’t map knowledge from the building domain to knowledge in the software domain.


Or that, with software, you can always rip out the foundation and replace it. And you're working on it as you work on the rest of the "building" anyway.

The difficulty of working on lower abstraction layers doesn't scale with the number of higher layers. Unlike with buildings or bridges, there's no gravity in software, no loads and stresses that need to be collected and routed through foundations and into the ground, or balanced out at the core. In software, you can just redo the foundation, and usually it only affects things immediately connected to it.

A set of analogies for software that are better than civil engineering:

- Assembling puzzles.

- Painting.

- Working on a car that's been cut in half through the middle along its symmetry plane.

- Working on buildings and bridges as a Matrix Lord who lives in the 4th dimension.

All these examples share a crucial characteristic also shared by software: your view into it and construction work is being done in a dimension orthogonal to the dimension along which the artifact does its work. You can see and access (and modify, and replace) any part of it at any time.


The real "foundations" of a software system are probably its data structures rather than the infrastructure/backend. It's still an iffy metaphor though for the reasons you've given.


I love this insight, I just recently learned about the hidden HN feature to favorite comments and used it for the first time to favorite your comment. It's always a pleasure to read your comments on HN, I noticed your handle popping up here and there and would like to thank you for your contributions. If you had a collection of all your comments on HN printed in a book I think I would buy it:)


Biggest problem with templating libraries like mustache is that they aren't context-aware, so it is up to the programmer to remember the proper way to escape based on where a variable is used.
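To make the context problem concrete, here is a hedged sketch (the escaper functions are illustrative, not any library's API): mustache's default `{{var}}` escaping is roughly HTML-body escaping, but the same value dropped into a `<script>` block needs a different escaper, and only the programmer knows which context the variable lands in:

```javascript
// HTML-body escaping (roughly what mustache's {{var}} does by default).
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// JS-string escaping, needed if the same value lands inside <script>.
function escapeJsString(s) {
  return String(s)
    .replace(/\\/g, '\\\\')
    .replace(/'/g, "\\'")
    .replace(/</g, '\\x3C'); // also prevents closing the script tag early
}

const userInput = "');alert('xss";

// Safe in HTML body context:
const inHtml = `<p>${escapeHtml(userInput)}</p>`;

// The HTML escaper would be the WRONG tool inside a script block --
// the template engine can't know that, only the programmer can:
const inScript = `<script>var name = '${escapeJsString(userInput)}';</script>`;

console.log(inHtml);
console.log(inScript);
```

Context-aware template engines track whether a variable is emitted into body, attribute, URL, or script context and pick the escaper automatically; mustache doesn't.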


Honestly when they come with decisions like that, I'd like them to spend some time on a formal writeup - have them prove they understand the problem, the existing solution, the issues with it, and why React or technology X would solve it. Have them explain why they / their employer should spend thousands on changing it.

I mean it'll only take them an hour or two to write it up, better to spend that than the 20 hours it would take you (for example) to spin up a new stack.


> developing became more and more inefficient

Anecdotally, I find the opposite to be true. I've been writing frontend code for over a decade, but I've never moved faster or written less buggy code than now. Is that because I've become a better developer? Sure, a little bit. But by and large, I don't believe that is ultimately the reason. I think it's the maturity of the technology. My growth as a programmer is hardly linear, and the past 5 years have not matched the growth I achieved in my first 5. Frontend tooling has never been better than it is today.

What I believe is that the bar to building web applications has been lowered, and there are more programmers today than ever before. You have people who are not experts in frontend development and JavaScript trying to build complex UIs and applications. So you take this person who doesn't have the requisite experience and put them to work on a paradigm with a lot of depth (frontend), using frameworks that are really simple and easy to get started with but that compound problems as they are misused.

Another factor is that since SPAs are stateful, complexity mounts aggressively. Instead of a stateless page that resets with every refresh, a single page accumulates state, so bugs rear their heads for the duration of the session. These inexperienced people are put in charge of designing codebases that don't scale and become spaghetti. But when designed properly, these problems are largely negated.

I'm not advocating that SPAs are the solution to all problems. I think there's gross overuse of SPAs across the industry, but that is not an indictment of SPAs themselves. That is someone choosing the wrong technology to solve the active problem.

With respect to Angular (1; I never touched 2) specifically, I always found it extremely over-engineered and poorly designed, with terrible APIs. But that's a problem with that specific framework and says nothing about SPAs in general.


> Frontend tooling has never been better than it is today.

What's the library or design pattern to consume a REST API in React or any of the mainstream front-end frameworks? The only thing I'm aware of is Ember Data but Ember is apparently not cool anymore, and I couldn't find a suitable replacement.

I'm asking because in all the projects I've been involved with, consuming the backend API always felt like a mess, with lots of reinventing the wheel (poorly) and duplication of code. I can't believe that in 2020 there's not some kind of library I can call that will give me my backend resources as JSON and transparently handle all the caching, pagination, error handling (translating error responses to exceptions), etc., and people have to do all this by hand when calling something like Axios.

In contrast, Django REST Framework handles all that boilerplate for me and allows me to jump right into writing the business logic. It's insane that ~30 lines of code with DRF (https://www.django-rest-framework.org/#example) gives me a way to expose RESTful endpoints for a database model to the web with authentication, pagination, validation, filtering, etc in a reusable way (these are just Python classes after all) but the modern front-end doesn't have the client equivalent of this.
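For what it's worth, the kind of client helper being wished for here isn't hard to sketch, which makes its absence as a standard tool all the stranger. Everything below is made up for illustration (the `ApiClient`/`ApiError` names, the caching policy), not a real library:

```javascript
// Hedged sketch of a client-side counterpart to DRF's boilerplate
// handling: translate error responses to exceptions, cache GETs.

class ApiError extends Error {
  constructor(status, body) {
    super(`API request failed with status ${status}`);
    this.status = status;
    this.body = body;
  }
}

class ApiClient {
  // fetchFn is injectable so the client can be tested without a network.
  constructor(baseUrl, fetchFn = globalThis.fetch) {
    this.baseUrl = baseUrl;
    this.fetchFn = fetchFn;
    this.cache = new Map(); // naive GET cache, keyed by path
  }

  async get(path) {
    if (this.cache.has(path)) return this.cache.get(path);
    const res = await this.fetchFn(this.baseUrl + path);
    if (!res.ok) throw new ApiError(res.status, await res.text());
    const data = await res.json();
    this.cache.set(path, data);
    return data;
  }
}

// Demo with a stubbed fetch so the sketch is self-contained:
let calls = 0;
const fakeFetch = async () => {
  calls++;
  return { ok: true, json: async () => ({ id: 1, name: 'widget' }) };
};

(async () => {
  const api = new ApiClient('https://example.test', fakeFetch);
  const a = await api.get('/widgets/1');
  await api.get('/widgets/1'); // second call served from cache
  console.log(a.name, calls); // widget 1
})();
```

The hard part, as replies below note, isn't this skeleton; it's that every backend paginates, sorts, and reports errors differently, so the generic layer can't assume much.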


> I'm asking because in all the projects I've been involved with, consuming the backend API always felt like a mess with lots of reinventing the wheel (poorly) and duplication of code. I can't believe in 2020 there's not some kind of library I can call that will give me my backend resources as JSON and transparently handle all the caching, pagination, error handling (translate error responses to exceptions), etc and people have to do all this by hand when calling something like Axios.

If you look at 20 REST APIs you'll probably see 30 different patterns for pagination, search/sort, error responses, etc. There have been a couple of attempts to standardize REST, such as OData, but I think it's safe to say that they haven't been very successful. It's hard to build standard reusable frontend tools when everyone builds backends differently.


Ember has somewhat solved that problem though.

You have the concept of data adapters which would be clients for your API (you can make a custom one if extending the existing ones isn't an option) and the rest of the application just interacts with the equivalent of database models without ever having to worry about fetching the data. You could swap the data adapter without having to change the rest of the code.

We seem to have lost this with the move to React though, and even the hodgepodge of libraries doesn't provide a comparable replacement.
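The adapter idea doesn't actually require Ember; it can be sketched framework-free. The names below are made up for illustration (Ember Data's real API differs), but the shape is the same: the application talks to a store, the store delegates fetching to an adapter, and swapping the adapter touches nothing else:

```javascript
// Framework-free sketch of the data-adapter pattern described above.

class Store {
  constructor(adapter) {
    this.adapter = adapter;
  }
  findRecord(type, id) {
    // Application code only ever calls this; it never knows
    // where the data actually comes from.
    return this.adapter.findRecord(type, id);
  }
}

// One adapter hits a REST API...
class RestAdapter {
  constructor(fetchFn = globalThis.fetch) {
    this.fetchFn = fetchFn;
  }
  async findRecord(type, id) {
    const res = await this.fetchFn(`/api/${type}s/${id}`);
    return res.json();
  }
}

// ...another serves fixtures, e.g. for tests or offline development.
class FixtureAdapter {
  constructor(fixtures) {
    this.fixtures = fixtures;
  }
  async findRecord(type, id) {
    return this.fixtures[type][id];
  }
}

// Swapping the adapter changes nothing in the calling code:
const store = new Store(
  new FixtureAdapter({ user: { 1: { id: 1, name: 'Ada' } } })
);

store.findRecord('user', 1).then((u) => console.log(u.name)); // Ada
```

The point is the seam: application code depends only on the store's interface, so the REST adapter can be replaced per-environment or per-backend.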


Haven’t used it, but aren’t there things that can connect to a Swagger API spec and do some of the heavy lifting for you? I agree that the network layer in frontend is tedious to implement; things like GraphQL and Apollo attempt to raise the abstraction level. What I would really like to see is something even more abstracted, e.g. a wrapper around IndexedDB you can write to that syncs periodically over websockets to your server, more similar to the patterns we use on mobile.


It seems that you are describing pouchdb: https://pouchdb.com/


You're right, Pouch completely slipped my mind; it's a great solution. But what about something more generic on the backend that isn't database-specific - some sync engine you could put in front of whatever database you wanted? Can you do something like this with Pouch?


> What's the library or design pattern to consume a REST API in React or any of the mainstream front-end frameworks?

For React, it's out of scope; anything you can use in JS for this can be used. If you are using a state management library, that's probably more relevant to your selection here than React is.

REST is also too open-ended for a complete low-friction solution, but, e.g., if it's Swagger/OpenAPI, there are tools that will do almost the entire thing for you with little more than the spec.

> The only thing I'm aware of is Ember Data but Ember is apparently not cool anymore, and I couldn't find a suitable replacement.

Ember Data is definitely a valid choice. It may not be hyped right now, but that has little to do with utility or use in the real world.


The GraphQL frameworks - like Apollo - give you that (I haven't used them myself). For basic caching the state management frameworks work pretty well, but it is a lot of layers when you add Redux or Vuex to your stack. It works well for us though, and I find it much easier to reason about than the old jQuery spaghetti-code style.


I hear you. I find myself needing to reinvent the wheel far too often to traverse the boundary between the client and server. I also feel that it shouldn’t be this hard. Apollo client and relay solve this problem for GraphQL APIs (quite nicely IMO). What’s missing is an Apollo client for non-GraphQL APIs.


For what it's worth GRPC-Web is a pretty nice solution here.

My team generates backend stubs from our GRPC spec which allows us to jump right to implementing our business logic.

Frontend projects make use of the GRPC-Web client codegen to make calling the API simple and type safe (we use typescript).

We mostly use all the official GRPC tooling for this. We write backends in golang and dotnet core so GRPC-Web is supported quite well out of the box.

I wrote a slightly modified Typescript codegenerator to make client code simpler as well: https://github.com/Place1/protoc-gen-grpc-ts-web


Yeah, after experiencing type safe APIs + editor integration with TypeScript I don’t think I can go back.

There are, of course, other solutions besides GRPC.


React Query.


React-query.


> These inexperienced people are put in charge of designing codebases that don't scale and become spaghetti.

I think this is one area where frontend tooling can be painful for the average dev. The bar to writing idiomatic JS for a given framework can get pretty high quickly, especially when you look at some of the really popular tools out there (e.g., Redux).

Front end work has become so much harder to grok because the patterns around things like state management still have a lot of warts. The terminology of redux drives me crazy because it’s really difficult to explain things like reducers.


What most people have in mind as "idiomatic JS" isn't that. It's usually meant to refer to some patterns that appeared and started getting popular around 8 years ago. And often, code written in this not-idiomatic way works _against_ the language and/or the underpinnings of the Web in general. It's just that the circles promoting the pseudo-idioms have outsized and seemingly inescapable influence.


This is very vague. Can you give some examples?


The question asking for clarification is itself vague. Examples of which part?

Look at JS that's written for serious applications today, identify the stuff that you'd label as "idiomatic", and then look at code that was written 10 years ago for serious applications, and see if it matches what your conception of "idiomatic JS" is. Good references for the way JS was written for high-quality applications without the negative influence of the new idioms (because they didn't exist yet): the JS implementing Firefox and the JS implementing the Safari Web Inspector.

Examples of how "idiomatic JS" is often written by people who are working against the language instead of with it:

- insistence on overusing triple equals despite the problems that come with it

- similarly, the lengths people go to to treat null and undefined as if they're synonymous

- config parameter hacks and hacks to approximate multiple return values

- `require`, NodeJS modules, and every bundler (a la webpack) written, ever

- `let self = this` and all the effort people go through not to understand `this` in general (and on that note, not strictly pure JS, but notice how often the `self` hack is used for DOM event handlers because people refuse to understand the DOM EventListener interface)

- every time people end up with bloated GC graphs with thousands of unique objects, because they're creating bespoke methods tightly coupled via closure to the objects that they're meant for because lol what are prototypes

These "idioms" essentially all follow the same "maturation" period: 1. A problem is encountered by someone who doesn't have a solid foundation. 2. They cross-check notes with other people in the same boat, and the problem is deemed to have occurred because of a problem in the language. 3. A pattern is adopted that "solves" this "problem". 4. Now you have N problems.
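The GC-graph bullet above is the easiest one to make concrete. A sketch (toy example, not from any particular codebase): creating bespoke methods via closure allocates a fresh set of function objects per instance, while prototype (or class) methods are shared once across all instances:

```javascript
// The "bespoke methods via closure" pattern: every call to makeCounter
// allocates brand-new function objects, one set per instance.
function makeCounter() {
  let count = 0;
  return {
    increment() { count += 1; },
    value() { return count; },
  };
}

// Working WITH the language: methods live once on the prototype and
// are shared by every instance.
class Counter {
  constructor() { this.count = 0; }
  increment() { this.count += 1; }
  value() { return this.count; }
}

const a = makeCounter();
const b = makeCounter();
console.log(a.increment === b.increment); // false: two function objects

const c = new Counter();
const d = new Counter();
console.log(c.increment === d.increment); // true: one shared method

c.increment();
console.log(c.value(), d.value()); // 1 0
```

With thousands of instances, the closure version means thousands of unique function objects in the heap; the prototype version means two, total.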

People think of this stuff as "idiomatic JS" because pretty much any package that ends up on NPM is written this way, since they're all created by people who were at the time trying to write code like someone else who was trying to write code like the NodeJS influencers who are considered heroes within that particular cultural bubble, so it ends up being monkey-see-monkey-do almost all the way down.


Hi, I'm also a new JS coder, but I'd like to avoid becoming one of "those people" you're talking about. I've been struggling with exactly what you mention - how to find out the "correct" way to apply patterns/do relatively complex things, but all I get on search results are Medium articles written by bootcamp grads.

Can you recommend any sources of truth/books that can guide down the right path? Of course I'll be going through all the things you mention but I'm just curious if there's somewhere I can get the right information besides just reading through Firefox code, for example.

Thanks!!!


I'd say Eloquent Javascript (available for free online, I think) is a good book to read. "You Don't Know JS" is also a good one!

Basically, go for anything that teaches you non-JS-specific approaches as well as a solid understanding of the fundamentals.


Thank you! I'll check both of those out


And Crockford’s JavaScript: The Good Parts. Although it's an older book, JavaScript fundamentals never change, and it describes a lot of those forgotten foundations.


A good starting point for explaining reducers is that you are reducing two things into one.

A Redux reducer: the action and the current state reduce into the new state. And of course it doesn't matter how many reducers or combined reducers your state uses - they're all ultimately just doing this.

This also works for Array.prototype.reduce(). You're reducing two things (the accumulator and the next element) into one.
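Both shapes side by side (hand-rolled, no Redux library involved):

```javascript
// Array.prototype.reduce: fold many values into one, two at a time.
const sum = [1, 2, 3, 4].reduce((acc, n) => acc + n, 0);
console.log(sum); // 10

// A Redux-style reducer is the same shape: (state, action) -> newState.
function counterReducer(state = { count: 0 }, action) {
  switch (action.type) {
    case 'increment':
      return { count: state.count + 1 };
    case 'decrement':
      return { count: state.count - 1 };
    default:
      return state;
  }
}

// Replaying a list of actions through the reducer is itself a reduce:
const actions = [
  { type: 'increment' },
  { type: 'increment' },
  { type: 'decrement' },
];
const finalState = actions.reduce(counterReducer, undefined);
console.log(finalState); // { count: 1 }
```

The last line is the whole trick: a store is just the running accumulator of folding actions into state.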


The concept of a reducer isn’t the hard part of Redux... it’s designing your state, organizing actions/reducers/selectors, reducing (no pun intended...) boilerplate, dealing with side effects and asynchrony, etc.


Redux is not idiomatic JavaScript though. It's trying to make JavaScript immutable and have ADTs, which it doesn't. If you use Elm, ReasonML, ReScript, etc., this pattern is a lot easier to implement than with JavaScript.


I wasn't readily familiar with the acronym ADT. It's "Algebraic Data Types" for anyone else in the same boat.

https://en.wikipedia.org/wiki/Algebraic_data_type


In CS it also refers to Abstract Data Types


Ah, good call. Thank you! Here's a reference I found that explains the differences between those two concepts, as applied to Scala:

https://stackoverflow.com/questions/42833270/what-is-the-dif...


Yup, it’s beautiful in Elm and makes no sense in JavaScript, which doesn’t need it.


I definitely agree with you: frontend tooling has gotten a lot more mature, and we have a lot further to go as well.

I'm primarily a backend developer, and I think that in general backend developers make for "poor frontend devs". I'm talking about those "occasional" times the backend dev needs to do some frontend work. It's simply that they don't know the tech as well, don't know the best practices, and don't spend as much time with it as a dedicated frontend dev. jQuery code written by the "occasional frontend dev" is kinda horrific in many cases.

Now please, internet, hear me: I'm not saying you can't write bad code in a JS framework. I'm saying it's usually less often and less bad - especially for non-dedicated frontend devs.

Like crossing a street: just looking left and right won't guarantee a safe crossing, but it sure makes an accident less probable.

If you are a shop with mostly backend devs and don't want to invest in a frontend dev, you should definitely look into a JS framework.

Svelte is always a good start: very small and bare-bones.


> If you are a shop with mostly backend-devs and don't want to invest in a F.E dev, you definitely should look into a js-framework.

That matches in my experience.

I worked in a shop with only backend developers and the frontend was an absolute buggy mess of jQuery on top of bootstrap. After migrating most of it to Vue I taught it to the team and all the experienced-but-frontend-shy developers started producing great frontend code by themselves.


> Svelte is always good start very small and bare bones

I second this

The barrier to entry is lower than for React, and the results are great


> Frontend tooling has never been better than it is today.

Eh. Swing in its golden age ran circles around what we have now. Granted, it's old tech now that we've settled for in-browser delivery, but still:

- look and feels could do theming you can only dream of with CSS variables/SCSS

- serialize and restore GUI states, partially or whole, including listeners

- value-binding systems Vue can only dream of

- native scrollers everywhere that you could style and listen to without the renderer complaining about passive handlers

- layout that didn't throw a fit about forced reflows

- unhappy with the standard layouts? build your own one as needed

- debug everything, including the actual render call that put an image where it is

- works the same on every supported OS

Browsers are an insufferable environment to work within compared to that. CSS is powerful and all, but you get a system you can only work with by inference, and where everything interferes with everything else by default - which works great for styling a document and is a tragedy in an app with many reusable nested components.


Not to be mean, but I worked with Swing for ten years and it was absolute crap. Constantly dealing with resizing “collapsars”, GBLs, poor visual compatibility with the host OS, a fragile threading model and piles and piles of unfixed bugs and glitches was a nightmare. It might have worked if you had a specific end user environment but it was a PITA for anything else, and deployment was even harder.

There are a few things I definitely miss from my 20 years as a Java dev, but the half assed and under funded Swing UI is not among them.

Give me HTML+CSS+JS any time.


Well, we're talking about the tooling, but point taken - Swing wasn't perfect (see the Totally GridBag short from 2007: https://www.youtube.com/watch?v=UuLaxbFKAcc)

But it's not like it was that much worse than Flexbox bugs (https://codepen.io/sandrosc/pen/YWQAQO still renders differently in Chrome than in Firefox - which one is wrong is immaterial).


> Give me HTML+CSS+JS any time.

Sure! And Chrome is superbly tested.

But HTML/CSS/JS aren't anywhere near good enough to build GUIs of any complexity by themselves, so everyone layers tons of stuff on top. And then people have plenty of complaints about those layers too. But if they didn't use them, those complaints would just migrate to the underlying framework.

I mean, Swing may have had a fragile threading model (not sure what you mean by that really), but HTML doesn't have one at all. Not great!


I agree with you 100%, and was really just addressing the “swing is awesome” statement in the GP. I’ll take HTML etc over Swing any day, but I’m sure there are nicer alternatives if your deployment environment is native, e.g. SwiftUI (which I have no experience with)

There are plenty of things wrong with HTML & friends, await/async and webpack being my personal hair removers, but if we set that aside and just talk about the DOM as the API for the UI, it’s very robust, well documented and widely available. I don’t love it, but it works.


"swing tooling" thank you, don't misrepresent my argument.


HTML was never meant for building apps. We took a square peg (document markup language) and jammed it into a round hole (app development.) Most of the problems and frustrations with web development go back to this.

We've been using the wrong tool for the job for over 2 decades. Now it's everywhere and nobody knows any better. It's probably too late now.


>What I believe, is that the bar to build web applications has been lowered,

Yes, the bar to building web applications has been lowered. We can all build something on the level of Gmail now.

The ability to build websites has been crippled, because you are often forced to build sites using tools suited to applications - as you and the parent comment seem to agree.


Yeah, there is some overuse of SPAs, I agree, but hasn't anyone in this thread worked in older Java monoliths with JSP or even the good old Struts framework? THEN you can see what inefficient development looks like.


> Everything needed to be SPA, micro services, distributed databases, Kubernetes etc. These components and layers needed to be glued together by trial and error.

This is a major problem with our industry. Unfortunately, the people with the power to curb this trend have their paycheck depend on it continuing.

As a company, you are incentivised to have a large tech team to appear credible and raise funding, so you hire a CTO and maybe some engineering managers. Their careers in turn benefit from managing large numbers of people and solving complex technical problems (even if self-inflicted), so they’ll hire 10x the number of engineers the task at hand truly requires, organise them in separate teams, and build an engineering playground that guarantees their employment and gives them talking points (for conferences, the seemingly-mandatory engineering blog, or interviews for their next role) about how they solve complex problems (self-inflicted, as a side effect of an extremely complex stack with lots of moving parts). Developers themselves need to constantly keep up to date, so they won’t usually push back on having to use the latest frontend framework, and even if they do, that decision is out of their hands and they’ll just get replaced or not hired to begin with.

In the end, AWS and the cloud providers are laughing all the way to the bank to collect their (already generous) profits, now even more inflated by having their clients use 10x the amount of compute power that the business problem would normally require.

Maybe the issue is the seemingly-infinite amounts of money being invested into tech companies of dubious value, and the solution would be to get back to Earth as to have some financial pressure coming from up top that incentivises using the simplest solution to the problem at hand?


This is the main reason I refuse to entertain going perm in the tech sector. The amount of superfluous infrastructure and unquestioned use of SPAs is just an overwhelming time sink. I would honestly rather work with some two-bit company's legacy PHP than this mountain of crap.


For what it's worth, once you are proficient in the full end-to-end, navigating it is pretty easy, IMHO.

It just takes years and lots of room to do basically nothing, and if something meaningfully shifts, you need a while to get back up to speed.

I'm not saying it's efficient, or that you should dive in, but I did want to throw out there that there is a light at the end of the tunnel. People using React.js aren't flailing about in the dark the whole time.


> It just takes years and lots of room to do basically nothing, and if something meaningfully shifts, you need a while to get back up to speed.

If true, that's a damning indictment of the industry and the whole SPA pattern.


Food for thought: the tech sector is much larger than the trendy dumpster fire of web development. You don't have to work at some startup on some website. There is still lots of real programming to be done.


What are the real growth areas? I'd welcome an exit from web development but as a freelancer it seems to be all there is.


Kubernetes is the biggest joke. I remember working with a sysadmin who had worked for The Guardian, provisioning servers remotely as demand spiked. This was pre-AWS. He used Puppet and remarked that what he was using would only ever be needed for managing massive fleets of servers. Then Kubernetes and Docker arrived, intended for even bigger deployments in data centres. Before you knew it, just as with SPAs, Kubernetes and Docker became the new requirements for web devs working on simple apps.


Also never underestimate the power of a single bare-metal server. Today everyone seems to be in the clouds (pun intended) and has seemingly accepted the performance of terrible, underprovisioned VMs as the new normal.


Stack Overflow -- the website that probably every developer uses all the time -- is an example of a site running efficiently on a very small number of machines.

I'd rather have their architecture than hundreds of VMs.


It’s remarkably efficient and simple: https://stackexchange.com/performance

For those who are discouraged by the massive complexity of Kubernetes/Terraform and the various daunting system-design examples of big sites, remember you can scale to ridiculous levels (barring video or heavy-processing apps) with just vertical scaling.

Before you need fancy Instagram scale frameworks, you’ll have other things to worry about like appearing in front of congress for a testimony :-)


This is indeed the standard example I refer to to prove my point, and all my personal projects follow this model whenever possible. The huge advantage in addition to performance is that the entire stack is simple enough to fit in your mind, unlike Kubernetes and its infinite amount of moving parts and failure modes.


Wow. Stack Exchange is a curious case study.

I share the general HN sentiment over microservices complexity but just to play devil's advocate...

I suspect that server cost in this case depends heavily on scale. If the (monetary) cost of SE's architecture is F(n) and that of your typical K8s cluster is G(n), where n is the number of users or requests per second, then F(n) < G(n) only for very large values of n. As in very large.

In essence, the devil's advocate point I'm making is that maybe development converges towards microservices because cloud providers make this option cheaper than traditional servers. We would gladly stay with our monoliths otherwise.

I tried to contrive a usage scenario to illustrate this but you know the problem with hypotheticals. And without even a concrete problem domain to theorize on, I can't even ballpark estimate compute requirements. Would love to see someone else's analysis, if anyone can come up with one.


Microservices will add latency because network calls are much slower than in-process calls.

Microservices, as an architectural choice, are most properly chosen to manage complexity - product and organizational - almost by brute force, since you really have to work to violate abstraction boundaries when you only have some kind of RPC to work with. To the degree that they can improve performance, it's by removing confounding factors; one service won't slow down another by competing for limited CPU or database bandwidth if they've got their own stack. If you're paying attention, you'll notice that this is going to cost more, not less, because you're allocating excess capacity to prevent noisy neighbour effects.

Breaking up a monolith into parts which can scale independently can be done in a way that doesn't require a microservice architecture. For example, use some kind of sharding for the data layer (I'm a fan of Vitess), plus two scaling groups: one for processing external API requests (your web server layer), and another for asynchronous background job processing (a job queue, workers pulling from a message queue, or both, depending on the type of app), with dynamic allocation of compute when load increases; this is where k8s autoscaling, possibly combined with cluster autoscaling, shines. This kind of split doesn't do much for product complexity, though, or for giving different teams the ability to release parts of the product on their own schedule, use heterogeneous technology, have the flexibility to choose their own tech stack for their corner of the big picture, etc.


Not to mention, you need an infra team to manage all this complexity, a much larger team than one maintaining a few vertically scaled servers.

The combined salary of 3 infra engineers is maybe $300k per year; the cost to company is probably $450k.

For $450k a year, you can get about 500 servers, each one with 128 GB RAM and 32 vCPUs.

Has anyone done this type of ROI analysis?


I'm not sure if we're on the same page here. When I said "cloud providers make this option cheaper than traditional servers" I meant it as in the pricing structure/plans of cloud providers. That's why I tried to contrive a scenario to make a better point. Meanwhile your definition of cost seems to center on performance and org overheads a team might incur.

You say that serverless will cost more "to prevent noisy neighbor effects"... but that is an abstraction most cloud providers already give you, something you already pay your provider for. So my DA point now is: is it cheaper to pay them to handle this, or is it cheaper to roll your own and manage it manually?


> You say that serverless will cost more "to prevent noisy neighbor effects"...but that is an abstraction most cloud providers will already give you

I actually wasn't talking about serverless at any point - I understand that term to mostly mean FaaS and don't map it to things like k8s without extra stuff on top, which is closer to where I'd position microservices - a service is a combo of data + compute, not a stateless serverless function. But I agree we're not quite talking about the same things. And unfortunately I don't care enough to figure out how to line it up. :)

Org factors rather than cloud compute costs are why you go microservice rather than monolith; that was my main point, I think.


I can't recall reading much on how going for 'the cloud' or 'serverless' saved anyone money. On the other hand, I've read my fair share of horror stories about how costs ballooned and going for the old-fashioned server/VPS ended up being much, much cheaper.

The main argument in favor of the 'cloud' is that it's easier to manage (and even that is often questioned).



I haven't looked for a while but Plenty Of Fish (POF) also ran on the same infrastructure and the same framework - ASP.Net. Maybe ASP.Net is particularly suited to this approach?


What about interpreted languages? I was taught that a Python web server can handle $NUMCPUS+1 concurrent requests, and that therefore 32 one-CPU VMs will perform as well as a single 32-CPU VM.


You still have the overhead of the OS. In the first case you’re running 32 instances, each with its own OS; in the latter you have a single OS to run.

Unless high availability is the concern, I’d always recommend one big machine with lots of CPUs over lots of small ones.


Kubernetes is overkill for most applications that's true, but Docker is awesome because it solves almost all of the "it works on my machine, doesn't work in prod" and "it worked yesterday, doesn't work today" issues and isn't that hard to adopt.


Kubernetes has been great for us and is much easier to manage over time than servers. There’s an adoption cliff, but I’d take kube over spinning up servers with Puppet any day.

Hell I might even run kube if I was running bare metal. Declarative workloads are amazing.


> I've always felt this problem from the first time I touched Angular. It was just so much more complex and fragile without actually a lot of benefit unless you wanted to make a really interactive async application like Google Docs or Facebook Chat.

It sounds crazy to say that now, but Angular became big because it was actually quite lightweight compared to other JS frameworks of that era: declarative two-way data binding was cool, it was compatible with jQuery (and thus its widget ecosystem), and it was developed with testing in mind. So it was easy to move jQuery projects to Angular ones, and developers cared about this aspect; it helped organize code quite a bit. Angular 2, on the other hand, never made sense and was a solution looking for a problem.

React and JSX came along and allowed developers to use JS classes when a lot of browsers didn't support them. And unidirectional data flow was all the rage. It was always the right solution of course, but I had never heard about DOM diffing before that, which to me is the main appeal of React. To this day, the HTML APIs still do not have a native (thus efficient) DOM diffing API, which is a shame.
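For anyone who hasn't seen DOM diffing spelled out, the core idea fits in a few lines. This is a toy by-index diff over plain objects, not React's actual reconciliation algorithm:

```javascript
// Toy virtual-DOM diff -- NOT React's reconciliation, just the core idea.
// A vnode is { tag, children } or a plain string for a text node;
// props handling is omitted to keep it short.
function diff(oldNode, newNode, path = []) {
  if (oldNode === undefined) return [{ op: 'create', path, node: newNode }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  // text nodes: emit a patch only when the text actually changed
  if (typeof oldNode === 'string' || typeof newNode === 'string') {
    return oldNode === newNode ? [] : [{ op: 'replace', path, node: newNode }];
  }
  // different element type: replace the whole subtree
  if (oldNode.tag !== newNode.tag) return [{ op: 'replace', path, node: newNode }];
  // same tag: recurse into children by index (real implementations use keys
  // so that reordered lists don't degenerate into full replaces)
  const patches = [];
  const len = Math.max(oldNode.children.length, newNode.children.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldNode.children[i], newNode.children[i], path.concat(i)));
  }
  return patches;
}
```

So `diff({ tag: 'ul', children: ['a', 'b'] }, { tag: 'ul', children: ['a', 'c'] })` produces a single replace patch at path `[1]`, and only that patch needs to touch the real DOM.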

> When SPA's became the norm and even static web pages needed to be build with React, developing became more and more inefficient. I saw whole teams struggling to build simple applications and wasting months of time, while these used to be developed within a couple of weeks with just 1 or 2 developers, on proven server side opinionated frameworks. But that was no longer according to best practices and industry standards. Everything needed to be SPA, micro services, distributed databases, Kubernetes etc. These components and layers needed to be glued together by trial and error.

IMHO the problem isn't React and co, or even SPAs. In fact, writing a REST/Web API should be easier than writing a server-side generated HTML website (no need for a templating language, ugly form frameworks,...). The problem is the horrible and complex NodeJS/NPM-backed asset compilation pipelines and build tools that these frameworks often require in a professional setting, which incur a lot of complexity for very little gain.


In fact writing a REST/Web API should be easier than writing a server-side generated HTML website (no need for a templating language, ugly form frameworks,...).

Why is that easier? It's more work: you are now rendering two views instead of one, a JSON one (server) and an HTML one (in the client), with all the JSON encoding/decoding that entails. You are still using a templating language, and dealing with forms in React is more cumbersome than doing it server-side.


To be fair, they only said writing an API should be easier than writing a server-rendered HTML form (1:1).


Separation of concerns, easier testability, easier mocking... The thing is, especially in more complex applications, the code that generates/validates the data and the code that displays it are usually written by two different people.

Nowadays, once the JSON schema design is settled, they can work in parallel. Each of them can test their part without needing the other, and the merges can be simpler, because the parts work more or less stand-alone.


> React and JSX came along and allowed developers to use JS classes

Nitpick, but I doubt that was the reason developers flocked to React back then. In the beginning browsers didn't support JavaScript classes and neither did React; you faked them by using a function known as React.createClass instead. There was also no transpilation required, as JSX was optional. In fact React was always about unidirectional data flow, and reasoning about state -> DOM elements rather than reasoning about changes to the DOM.


> The problem is the horrible and complex NodeJS/NPM backed asset compilation pipelines and build tools

Would like to hear more.


I don't think it's developers tbh. Or rather, it's another set of perverse incentives in the industry.

To get a job, devs need experience in relevant tech. No company is willing to train their devs; they all have to hit the ground running. So devs have to have demonstrable experience in the tech that lots of companies use. Companies need to hire devs, and don't really care what tech is used, but using what everyone else uses makes hiring easier because it's easier to find devs who want to work on that tech. So they advertise for devs with experience in a hot tech. The devs see this and try to move their internal projects to use the hot tech so that if/when they look for their next job they'll have experience in it.

The devs are just trying to stay relevant in a rapidly changing tech scene so they can get their next job.

The companies who employ them don't care what tech is used, but find recruiting devs to be easier if they're working in the latest hot tech.

The key point that could change all this is if companies were willing to train their devs in the tech stack they're using.


Everyone looks to the bigger companies for tech trends, not realising that they have none of the problems ultra scale companies are trying to solve.


I also wonder how much of the SPA trend by mega corps was about shifting compute “client side” to save money on infrastructure. It’s kinda like modern data warehouses where storage/compute is now so cheap you do ELT and not ETL anymore. I probably wouldn’t do an SPA today unless I really had to.


The problem is that nobody knows whether something will become the next Google Docs. Transitioning to an SPA from something like jQuery is basically a complete rewrite.

To be willing to not use an SPA, you need to be willing to exclude certain options from day 1. Find me a product manager willing to do that.


> nobody knows whether something will become the next Google Docs

How many times has it actually happened that some scrappy startup has 1) become the next big thing and 2) been killed, or had its revenue significantly impacted, by not being at the edge of over-engineering? This just feels like wishful thinking.

Also keep in mind that even if you were on track to become the next Google Docs, this means your current product is usually good enough as-is and gives you time (and $$$) to improve it.


I'm not sure that using React or another JS framework counts as 'being at the edge of over-engineering'.

I agree with the rest of your point - the value of the product to end-users has little to no correlation with the underlying technology choices, which is a pretty controversial statement, but one that I think is true. A customer doesn't care if you built it in React, in one Perl file, or if you're sacrificing goats to retain the minimum requisite levels of dark magic to keep the system running. If it solves their problem they'll keep giving you money for goats.


It depends on what the objective is. I've seen plenty of projects where React was used just to have it as a buzzword, but otherwise provided no functionality, actually slowed development down, and ended up being less reliable (we had to, poorly, reimplement behaviors like validation, pagination, etc. that our backend framework already provided for free).


Another view of this problem: evolving an SPA CRUD app into Google Docs may also be a complete rewrite.

IMO, when the time comes and your product is being used heavily, you may be ready financially and technically to do a complete rewrite. Otherwise, if the rewrite isn't justified, maybe the current functioning application is better.


React isn't so bad. It's fairly straightforward and the components are contained within the page. And it's more of a library than a framework. The core is small and easy to learn

Angular is a giant confusing pile of magic. It's so complex you've gotta be a core developer to even understand how an app comes together. Stay the hell away if you can


I think Phoenix Live View is maybe the most compelling story around this (https://github.com/phoenixframework/phoenix_live_view). I'm moving a side-project from React/SPA to Phoenix live view and it's kind of amazing to get the dev ergonomics of a server-rendered page with the UX benefits of a SPA.

This course is a pretty great intro: https://pragmaticstudio.com/phoenix-liveview


I've been playing with LiveView myself for personal projects and it is very nice.

One interesting thing is that it makes you consider memory issues again since all your rendered data structures are held in memory in the LiveView process unless they are explicitly cleared after the render with "temporary_assigns". For apps that would have to hold and transfer a lot of data up and down the channel I've ended up using a hybrid of LiveView with divs set to phx-update="ignore" in which I mount React components.


I'd say that's the main (potential) 'problem' with LiveView, alongside latency issues.

In practice, I've usually found that the advantages outweigh the disadvantages when it comes to the former. Beefing up my servers seems like a worthwhile trade-off.

When it comes to latency, or related issues, I find that I'm still so much better off using LV as a basis and 'dropping down' into plain JS or (p)react when I need it.

We recently delivered a project that involved a whole bunch of stuff that LV was a perfect solution for, but also a core bit of functionality that required various kinds of animations. We ended up using LV wherever we could, and piggy-backed on the LV channel (websockets) to handle synchronizing the animation stuff. The actual code to animate things was just plain old JS. Worked like a charm!


I'm still using LiveView in a lot of the other parts of the app, in places like signup/signin where having the form validation seamlessly done server side using changesets is very, very nice. I'm also using it in places where I would normally have to expose an API endpoint for CRUD; instead I'm using events and handling them in the LiveView. Again, very nice.


Agree completely. My only concern is that so many imitations are attempting to pop up in other languages and you just can’t do it as effectively.

There are so many tradeoffs present that happen to result in the necessary set of functionality to do this efficiently that aren’t easily present outside of the BEAM.


What about this can't be done with async/await?


Everything can be done in everything else. The only question is how well it fits, how contorted does it have to be, what comes naturally.

With BEAM, robustness is a special feature. In the BEAM you can kill, restart and replace processes all over the place and everything stays working pretty well, because its structure means everything written for it is designed with that in mind.

In a typical async/await server side application sharing state across clients, killing, restarting and replacing usually means the whole single process containing all the async/await coroutines, and the fancy per-client state you were maintaining is lost.

You can of course serialise state, as well as coordinating it among multiple processes, but that takes more effort than just using async/await in a web app, and often people don't bother. Doing that right can get tricky and it requires more testing for something that doesn't happen often compared with normal requests. So instead, they let the occasional client session see a glitch or need reloading, and deem that no big deal.


It can be done, but you’ll see a lot of weak points. It’s probably worth a blog post to explain it. There are several layers in the language that combine to make this work.

At a high level, is the combination of process isolation, memory isolation, template delivery, websocket capability and resilience on top of all the standard web bits.

It will be really difficult to pull off with a good developer experience and minus several deficiencies outside of the BEAM. Anything’s possible though.



It's still not a silver bullet. There's lots of things in a LV driven app where you're still wondering what front-end library to use with it.

Typically you wouldn't use LV to handle:

Menu dropdowns, tooltips, popovers, modals (in some cases), tabs (when you don't want to load the content from the server) or things that change the client side state of something but don't adjust server state. That could be things like a "select all" checkbox toggle where that doesn't do anything on its own other than select or de-select client side check boxes or toggling the visibility of something. There's also things like wanting to copy to the clipboard or initiating stuff to happen on drag / drop (like animations).

Basically you'll still find yourself wanting to use JS with LV. Whether that's Stimulus, Alpine, Vue, jQuery, vanilla JS or something else that's up to you. But I do find most of the above necessary in a lot of web apps I develop.


> It's still not a silver bullet. There's lots of things in a LV driven app where you're still wondering what front-end library to use with it.

> Typically you wouldn't use LV to handle:

> Menu dropdowns, tooltips, popovers, modals (in some cases), tabs (when you don't want to load the content from the server) or things that change the client side state of something but don't adjust server state.

My experience is that LiveView is fine for all but the last use case. And while in practice I often don't really need to keep things client-side-only, when I do it's often pretty easy to just write a bit of js and, if necessary, use hooks and events to communicate with the surrounding LiveView(s).

In fact, I vaguely recall that in the early days of LV, the creator himself argued that it should be used for just 'smaller interactive stuff'. Over time, we all discovered that LV does surprisingly well for SPA-type use cases (and as a result we now have stuff like router-level LiveViews (that take over the whole page), live_redirects, url updating, and so on).

> That could be things like a "select all" checkbox toggle where that doesn't do anything on its own other than select or de-select client side check boxes.

Why wouldn't you just keep that within the server-side state LV paradigm? I've done just that in a project I'm working on.

> There's also things like wanting to copy to the clipboard or initiating stuff to happen on drag / drop (like animations).

For those things you write js, yes.

> Basically you'll still find yourself wanting to use JS with LV. Whether that's Stimulus, Alpine, jQuery, vanilla JS or something else that's up to you. But I do find most of the above necessary in a lot of web apps I develop.

Absolutely, but I'm continuously surprised how little of it I need, and how often I /think/ I do just because I haven't quite wrapped my head around the different paradigm.


> My experience is that LiveView is fine for all but the last use case.

I wouldn't want to impose a 50-500ms+ delay on someone to show a menu drop down or a tooltip or most of the other things listed out.

With LV everything involves a server round trip. That's great for when you need to make a round trip no matter what (which is often the case, such as updating your database based on a user interaction), but it creates for very unnaturally sluggish feeling UIs when you use LV for things that you expect to be instant.

Even a 100ms delay on a menu feels off and with a good internet connection if you have a server in NY, you'll get 80-100ms ping times to the west coast of the US or the west coast of Europe.

LV feels amazing on localhost but the internet is global. I still think it's worth minimizing round trips to the server when you can, not because Phoenix and LV can't handle it but because I want my users to have a good experience using the sites I develop.


>> My experience is that LiveView is fine for all but the last use case.

> I wouldn't want to impose a 50-500ms+ delay on someone to show a menu drop down or a tooltip or most of the other things listed out.

It's basically a UX standard that a tooltip only shows up after a few seconds, so that strikes me as a particularly bad example. That said, sure, if instant tooltips are important, a tiny bit of js and a specific class name in your markup would solve it.

> With LV everything involves a server round trip. That's great for when you need to make a round trip no matter what (which is often the case, such as updating your database based on a user interaction), but it creates for very unnaturally sluggish feeling UIs when you use LV for things that you expect to be instant.

Yeah, I do agree on that. While I feel using a tooltip is a bad example, in practice I wouldn't implement tooltips or menus in LiveView. Those would just be solved via some CSS trick or some plain old JavaScript.

> LV feels amazing on localhost but the internet is global. I still think it's worth minimizing round trips to the server when you can, not because Phoenix and LV can't handle it but because I want my users to have a good experience using the sites I develop.

I'll give you that generally I wouldn't use LV for tooltips and popups. But in part because those are really easy to solve without it.

But for /so/ much of the stuff involved in a SPA the latency has not been a problem in practice.

Consider tabbed content. Sure, I could make it all 'instant' by preloading the various bits of content and writing js to switch between these bits. But I can avoid that entirely by preloading those bits in my templates and using LV to switch/update classes. The tiny latency downside is worth the upsides: being able to update the content in those tabs live with no extra code (no API calls, no client-side frameworks, and server-side rendered as a nice bonus!).

My general approach is that I use LV as a default, and then use the 'Hook' system and some custom JS when latency is a concern. In practice that doesn't amount to much. So it's not a silver bullet, but it simplifies so much of what a typical SPA does.


If you preload, where does LV introduce latency in that tab example?


The click would be sent as an event to the server, where the state is changed (setting "active_tab" or something like that). Then the view would be re-rendered (probably only changing a few class names) and the diff sent back down to the client.


Gosh, that feels so inefficient (as a JS dev here). Then again... React had its naysayers for some time because 1. JSX, 2. nobody saw DOM diffing as truly fast enough. But DOM diffing is absolutely faster than asking the server to update a class name.


True, in this particular case it does seem inefficient. And of course there's nothing stopping one from just doing this with a bit of js.

But in the bigger picture, the advantages of this approach are huge:

1. no need to maintain state, routing, and so on on the front- and backend, which removes a huge source of complexity. It's all in one place. And if something in the DB is updated, it's trivial to make it live-update the client state. And because of websockets, such an update is almost instant.

2. being able to use the same language (and templating) on server and client (for the most part).

3. the ability to just use regular function calls to retrieve data, and selectively display what you want by using it in templates. No need to set up endpoints for the client, and no need to worry that perhaps accidentally the endpoint might send data down the wire that shouldn't be there (and that you might not notice because the JSX doesn't display it). I think in just the past year I've read about a number of serious data leaks that were basically a result of this.

4. no need (or not as much need) to keep an eye on the JS payload. Want to format dates in a particular way? Just add the dependency and use it however you like. It's only the diff in the output that gets sent to the client!

5. little to no need to deal with a complicated build process.

6. server-side rendering out of the box, and in a simple manner!

7. less taxing on the client. No need for processing templates and a lot of code. Of course, the downside is that the server has to do more work.

Now obviously latency can be a downside, as is (potentially) increased memory and processor usage on the server. It's not a magic solution to everything :). Hell, my last project still needed quite a bit of javascript for some heavy interactivity where latency had to be avoided. But it's still astounding to me how many projects have become drastically simpler with the LiveView-approach!


We’ve gone all in and are using it a lot in production. There is one big trade-off to consider though: point of presence and global availability. We’re OK, as we’re a UK-based website hosted in GCP europe-west2, but we had devs in NZ for a while, and they said that, understandably, the site was really slow for them due to the latency. BEAM has the ability to link instances and have them cluster, so you could do something like that across multiple regions globally, but there is a trade-off to consider if you run a global service: you’re trading an over-engineering and sync problem for a potential distributed-systems problem if you try to make the same LiveView site available performantly in multiple global regions.


As a user, this is the kind of experience I want on the web. Any SPA-like page should always degrade back to web standards. And it should be lightning fast, and not spin my CPU fan or warm my desk.


Yes, and please no more loading screens for a page of text and a few photos (looking at you, Blogger). Just send the text and links to the photos.


Blazor Server (https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor) is similar. It's a very productive environment.


Can't believe this didn't get mentioned in the article.


I started this course and the animations/presentation are amazing; some really clear explanations. But the fake, cutesy dialogue and the extra fluff about how wonderful and amazing and great and FUN LiveView is meant I didn't get past the first video.


I liked their Tetris videos; I watched the first several of them.

That said, if you want a different approach then check out my series at https://alchemist.camp/tagged/Reactor

I covered the whole process of building the site for my podcast https://reactor.am in the series.

Note that LiveView still isn't at v1.0 and breaking updates have been common. You'll have a much better time with my LV tutorial, or anyone else's, if you use the exact same library versions we do and upgrade at the same point the tutorial does.


Having deployed liveview for an admin dashboard, I gotta say, it really is great FUN, and no-fuss, even if you're slinging together a system that customers never see so you don't care if the code gets a bit knotty, and your datastructures are abjectly awful.


What is Live View not good at?


It's not suitable for using the client's computer to mine crypto currencies, doing graphics processing on the client, doing numerical processing on the client or doing anything purely on the client without server interaction.

For things where you generally need a trip to the server anyway, like validations, it's great.


Yup, exactly. If you’re mostly building a client application (or a p2p client-server application!) that just happens to live on the web platform, PLV does not seem like a great fit... at least not this year. Honestly, BEAM seems like it ought to be great for that generally!

Maybe if you ran the server portion of PLV in the browser...but then you’re just back at React anyway I suppose.


You wouldn't use it for an offline app, but that's not what it's being marketed towards.


Is this similar to Vaadin?


IMO, TurboLinks + service workers are the way to go.

Not many people know this, but a service worker (previously called "local server") allows you to run a little web server in the user's browser that intercepts requests to your own web site. (There's no open IP port.)

The service-worker web server can proxy requests to the remote server, and even build/store entire pages on the client side, enabling offline support. Service workers also have access to a local database, IndexedDB, running in the user's browser.

You can build a very fast web experience this way. You can easily cache individual pieces of a web page and glue them all together.

For example, you can easily implement server-side includes https://en.wikipedia.org/wiki/Server_Side_Includes or edge-side includes https://en.wikipedia.org/wiki/Edge_Side_Includes but running on a server that's running on the client.
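For a flavor of what that looks like, here's a minimal cache-first service worker sketch. The asset list is an assumed placeholder for whatever your build produces, and the cache-vs-network choice is pulled out into a plain function so it can be tested outside a worker:

```javascript
// Minimal cache-first service worker sketch. ASSET_URLS is illustrative;
// a build step would normally generate it.
const ASSET_URLS = ['/', '/index.html'];

function pickResponse(cached, fetchFromNetwork) {
  // cache-first: fall back to the network only on a cache miss
  return cached || fetchFromNetwork();
}

// Guarded so the file can also be loaded outside a worker context
if (typeof self !== 'undefined' && 'caches' in self) {
  self.addEventListener('install', (event) => {
    event.waitUntil(caches.open('v1').then((cache) => cache.addAll(ASSET_URLS)));
  });
  self.addEventListener('fetch', (event) => {
    event.respondWith(
      caches.match(event.request).then((cached) =>
        pickResponse(cached, () => fetch(event.request)))
    );
  });
}
```

From here it's a short step to assembling pages out of cached fragments (the SSI/ESI-style tricks mentioned above), since the fetch handler can build any Response it likes.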


I have literally never had a worse experience developing than with service workers.

Accidentally cache your index.js file? You are now stuck with your service worker forever (unless you do some Chrome voodoo).

The promise is there, but it’s just a gigantic footgun.


If you build your SPA to an index.js file and serve it without a timestamp you're screwed when it gets cached too.


Not nearly as much since every browser has a variety of ways to perform a cache refresh.

Any issue with service workers is almost by necessity opaque.


Strongly agreed that service workers have a ton of potential here. But I’m still waiting for there to be some kind of killer framework that crosses the bridge between worker and window, saving lots of main thread processing, etc... MessageChannels are quite low level so I imagine it would need some abstraction. But still, very powerful. I’m imagining some kind of Svelte-like thing that creates the whole page worker-side, then generates minimal window-side JS to hydrate the components that’ll actually change. Of course I’d make it myself, but... oh, look over there...

(IMO they should still be called local servers, or server workers perhaps. Service worker is too vague)


I'm working on something like this. Not really a framework though. More of a hodgepodge of JS that I've written and the front end logic would be something like HTMX/Behavior.js style of coding. It will be a progressively enhanced approach to writing a SPA so you could have no JS and it would still work! Or, if you have a modern browser it would work offline. We'll see if I ever finish it :-)

I'm doing a bit of a rewrite right now so this flexibility will come soon.

https://github.com/jon49/MealPlanner

Right now it only works offline.


Is ESI in Service Workers anything you've tried? A bit mind-blown, never thought of that. Been putting off SW due to all the horror stories of people bricking their sites basically


Is this the same model meteor.js uses? I seem to recall their system maintains a mini-database on the client side and syncs it with the main server through a pub-sub model.


Yes. At least it can use service workers. Meteor uses MongoDB on the backend, and minimongo client side. Those two are synced over their DDP protocol IIRC.

I miss Meteor. It was such a great framework and promise. Not for big sites really, but for mock-ups, internal sites, etc. A while ago I started to look at it again. The drivers behind the project were essentially asking for input on what was preventing people from using the framework. The current state of the project is/was OK, except for the fact that it was held back by all the guides and howtos referring to previous versions. It was not well documented which current best practices to follow, what to use as replacements for deprecated dependencies, etc.


Not quite. It's just a client-side cache of query results matching a MongoDB query on the server (pub/sub).


That sounds incredibly complicated just to avoid using react


Any good tutorials you’d recommend?


Author seems to think the goal of SPAs was to simplify web dev, but it’s actually to allow you to build fully featured, highly interactive, apps in a browser.

What the author is really getting at, I would guess, is that front end dev is awful, due to this weird combination of the Blub issue and a historical trajectory that has caused many problems.

The Blub issue is mostly simple enough to pin down. Experienced programmers know that JS is an awful language. But there's also a tooling or SDK "blub problem". For example, compare npm and webpack vs gradle and javac (not to defend the Java ecosystem, but it does get some things right, or more right than others).

More idiosyncratically, there’s this historical arc of encountering fundamental problems, and trying to solve them within the current constraints of the web, rather than perhaps waiting for the web to standardise and evolve. This seems to be a mixture of lack of experience outside of this ecosystem (a bit like Blub) and, for this and other reasons, fixing problems “in your app” that should be fixed in the fundamental infrastructure of the web. It feels like technical solutions to social problems or, to use another metaphor, we are patching downstream what should be fixed upstream... if you build a house on sand, it will never be robust, no matter how many layers of infrastructure you add. That’s where the complexity arises.

It’s an enlightening exercise to step back and ask how you would build SPA infrastructure if starting anew. You certainly wouldn’t use a language like JS, you certainly would want to provide visual design tools as far as possible, APIs would be replaced with standard protocols, and probably you’d use a relatively small XML for layout. So perhaps only the HTML is anything like what you’d use. There’d be no transpilation, no webpack, no polyfills, no CSS, no JS.

In fact, what you’d end up with would look remarkably similar to the dev process for a Java applet!


> You certainly wouldn’t use a language like JS,

I disagree with this. You might not want to use JS but a language “like” JS such as TS or Lua would definitely be on the table. Or just JS without the biggest warts.

> you certainly would want to provide visual design tools as far as possible,

I feel that the promise of visual design tools fell quite short. Issues with version control and general traceability of changes and the ultimate non feature parity with code make me think that code first interfaces are the future.

> APIs would be replaced with standard protocols,

Could you elaborate on what you mean? I assume REST APIs, but that is basically just HTTP.

> and probably you’d use a relatively small XML for layout.

As long as the language is robust enough to not move stuff around when the UI slightly changes. I feel that XML (and to an extent html) is too lax to express a programmatically created interface.

> So perhaps only the HTML is anything like what you’d use. There’d be no transpilation, no webpack, no polyfills, no CSS, no JS.

In a different world you could replace these steps by compilation, compiler, backwards compatibility libraries, a styling framework and a language of choice.

I think you have shown that JS ecosystem has grown very organically. I think this is because the nature of web developers was to put stuff out rather than really think about how do make it the correct way. I believe this is because of constraints, on a native platform you had the option to go down to assembly or create a new language or paradigm. On web only the browser vendor has this power, all the dev had was JavaScript.


> You might not want to use JS but a language “like” JS such as TS or Lua would definitely be on the table. Or just JS without the biggest warts.

When you remove the warts from JS there's not much left. And I'd be pretty skeptical of someone starting a new project in Lua today. I think the mainstream choice for a "blank slate" language today would look something like Swift or Kotlin; TypeScript gets close, but it still has a lot of JavaScript baggage you'd want to strip out.

> I feel that the promise of visual design tools fell quite short. Issues with version control and general traceability of changes and the ultimate non feature parity with code make me think that code first interfaces are the future.

All the big UI libraries end up offering some kind of markup/constraint-based interface - which is ultimately data rather than code. And for editing that, a visual form designer makes a lot of sense. I like Qt's approach - you visually edit a markup form that's compiled into a class you can subclass, so you don't have to deal with the problems of code generation, and the markup is relatively version-control-friendly.

> Could you elaborate what do you mean? I assume REST apis but that is basically just HTTP.

I'm not the person you replied to, but thrift/gRPC are a lot nicer to work with than REST APIs. Standardised protocol definitions that let you understand what kind of changes are or aren't forward/backward compatible, and no need to write a bunch of boilerplate by hand.

> I think you have shown that JS ecosystem has grown very organically. I think this is because the nature of web developers was to put stuff out rather than really think about how do make it the correct way. I believe this is because of constraints, on a native platform you had the option to go down to assembly or create a new language or paradigm. On web only the browser vendor has this power, all the dev had was JavaScript.

It's the same story as "no-code" tools: IT departments won't let anyone install an application runtime, but they're happy to install a "document browser" and let it run arbitrary code. It's understandable, but depressing.


> When you remove the warts from JS there's not much left

Modern JavaScript is pretty sweet to write compared to pre 2015. It sure is fun to join in on the "JavaScript bad" circle jerk though.


When the bar is set by languages like F#, Haskell, Elm, Reason... how can JS be considered good? All languages have warts, but JS is a language full of them: ambiguity is the name of the game, mutating everything is encouraged, and global mutable state is everywhere. Very little has changed for JS outside of syntax; the core is still rotten.


> Modern JavaScript is pretty sweet to write compared to pre 2015.

That's damning with faint praise if I ever heard it. JavaScript has more or less caught up with the lowest common denominator of other languages. But I've yet to hear anyone make a good case for actively choosing to use it.


I'm a long-time Javascript hater. I recently did some vanilla ES6, now that browser support is finally at a point where you don't need to transpile to ES5 - I admit, ES6 is much nicer than things used to be, and not having to transpile is wonderful.

But realistically you still want a JS build pipeline of some kind, for minification, compiling SASS to CSS, and other things. And of course if you're doing "modern" frontend work, you're not using vanilla JS like I was - you're using a complicated framework like Angular, React etc, and having to deal with the likes of webpack.

And then there's the anaemic standard library - honestly, barely any better than it was 10 years ago. And then, largely as a result, you've got NPM-dependency hell, with hundreds or thousands of deps for just about anything. Want to trim a string - there's a package for that.

And then there are other languages and ecosystems - when you compare JavaScript to those... well, then you really have to admit it's a turd.

Hating on JavaScript isn't a bandwagon - there are many genuine reasons why people dislike it and the ecosystem. IMO, comparing it to a "circle jerk" is like that meme where some character is surrounded by fire saying "this is fine".


There is a lot to be unhappy about but minification, sass, webpack, these are all just compilers for the language.

People sometimes complain about CMake and g++, but it seems that for JS people criticise even the mere existence of these tools.

Since web "apps" have become a thing, JavaScript and CSS are basically a more readable form of assembly.


I agree that plenty has been done to improve JS in recent years. That does not make it a good language.


> And I'd be pretty skeptical of someone starting a new project in Lua today.

Well, there is the upcoming play.date SDK for example and a lot of jam games.

The biggest reason why these languages are used is because they are fun.

Don’t get me wrong, I really like swift and SwiftUI. But there is something quite liberating when throwing away the type system completely and just hacking at the code. This brings in new people who get motivated by seeing a thing shaping up rather than staring at weird error codes.

> but thrift/gRPC are a lot nicer to work with than REST APIs.

I agree with this, protobufs are nicer to push around rather than JSON. Ultimately though one can do pretty arbitrary stuff with either.

For your last point I think that is a separate issue. The ubiquity of web is certainly reason for its popularity. But I was mostly talking about why people hacked around issues with upstream rather than trying to find more sound solutions.


> In fact, what you’d end up with would look remarkably similar to the dev process for a Java applet!

Yeah, it's increasingly clear to me that Java was just 20 years ahead of its time. Java really would make a great front end language.

People lament the complexity and size of the JVM... But these days V8 is just as bad. The complexity is a trade-off for runtime performance.

It's compiled into a compact, easy-to-parse bytecode format similar to WASM. It's faster than JS and shares many basic design decisions. The packaging system is, and always was, basically a better version of NPM. It has pretty good cross-platform UI, probably the best there is outside of Qt.

WASM apps are eventually going to be built identically to Java cross-platform desktop apps. Probably using Java, Go, C#


> Probably using Java, Go, C#

I hope not. The world is finally waking up to the need for sum types. A big part of the reason doing simple things in Java is so hated is that, lacking sum types, it had to implement a horrible "checked exception" system.


Not even Java applets, but Silverlight.

Hoo boy it was awesome to work with. But. It came out at the wrong time. Linux was coming up as a desktop, MS was still evil. Mono was mostly a hobby project.

The backlash of trying to use "proprietary M$ crap" for web was too big of a hurdle to cross, even if the technology behind Silverlight was lightyears ahead of the drek that was Java Applets.


I feel like the mistake Java made was ceding the DOM to Javascript. It turned out that users like the browser and didn't particularly want either native widgets or a new system. The browser is familiar and good enough for a vast majority of tasks.

If the JVM had access to the DOM, you could write SPAs in Java and everybody would be happy. Instead Java sealed itself off separately from the browser, and Javascript went from little toys to full-blown UI applications. And then developers wanted to use the same code on both client and server so they made Node, working in the space where Java is so much clearly better.

So we end up with the worse language running the world. Fortunately, JS has finally become a mediocre language (or even a decent one with TS), and here we are.


I think it was just that shipping native apps on a web page was too slow back then for the internet and the machines. For the web to grow, it had to be simple. It took almost 20 years before we looped back to running full applications in the browser


> Experienced programmers know that JS is an awful language

BS blanket statement.

Some - many, maybe - programmers who in all likelihood exceed your experience and talent by orders of magnitude believe it's a great language.

Not everyone working with JS is doing so with a gun to their heads.


I don't know any polyglot programmers who would consider javascript better than at least one of the other languages they use, and would ditch js for those if they had the option.

TypeScript, maybe.


Note how I didn't say that many polyglots may think it's the best language, just that it's great as opposed to awful.

I know as a fact from watching tech conferences and talking to people that some elite-level polyglot programmers make it their main working language out of pure choice. Surely one can't assume that everyone doing bleeding-edge Javascript at places like Google is programming every day in absolute misery, or that it's the only thing that they know.

Me personally, I've possibly enjoyed programming in C, OCaml, Swift or Python more at times, but I think ES6+ Javascript is great and I'm happy to use it.

Javascript appears as the 11th most loved language in the SO dev surveys, with 66% of devs working with it reporting to love it, while it appears quite far down in the dreaded list. Typescript indeed fares much better. [1]

Of course, there is the question of whether respondents are True Programmers™ or posers, but I think that's another debate.

[1] https://insights.stackoverflow.com/survey/2019#most-loved-dr...


> Note how I didn't say that many polyglots may think it's the best language, just that it's great as opposed to awful.

Fair enough :). I think 'great' is a bit of a vague concept, but I definitely would agree that it's far from awful.

I remember when I wrote a lot of JS back in the day ("Javascript: The Good Parts"), I did feel that, stripped of the bad parts, I often preferred the elegance of the basic good parts over, say, Ruby and its "blocks, procs and lambdas" (for example).


I’m a polyglot programmer and I find quite a lot to like about JS. The concurrency model, for one, and also its multi-paradigm nature.

I’d choose JS over Java for back-end glue code, and JS+HTML+CSS over Java+Swing for front-end UI and app distribution, any day of the week.


Agreed, but to be fair I suspect a polyglot programmer will still choose many other languages, outside of Java, over JavaScript.


You said, "I don't know any polyglot programmers who would consider javascript better than at least one of the other languages they use, and would [not] ditch js for those if they had the option."

Now you do.


Ha, true. Nice to meet you!


> not to defend the Java ecosystem, but it does get some things right, or more right than others

Out of curiosity, as a Java enthusiast, I was wondering if you could give examples of what you feel is wrong?

In my HN browsing I find Java is rarely actually discussed here, though often dismissed. I don't know why.


My impression is that the design of Java the language, plus its runtime, are appreciated even by the harshest critics.

However the culture around complex frameworks and over-engineering is what most people really dislike about it.

IMO pretty much all the advantages touted by Java proponents (such as good language design, ease of use by heterogeneous teams, speed, etc.) are correct, but are negated by a large part of the culture and ecosystem. The memes about humongous class names and 200-method stack traces are true when you use the popular frameworks and techniques.

There are exceptions to this, of course, and it can creep into other languages too.


> However the culture around complex frameworks and over-engineering is what most people really dislike about it.

I think this is it. I remember back in the day being absolutely floored when I started learning J2EE by the, it seemed, unnecessary complexity (for most use cases) of EJB. It was incredibly offputting: if you were starting a project from scratch it felt like you had to do a ridiculous amount of work just to get to "hello world". I'm sure it wasn't that bad but the memory has slightly scarred me.

I haven't worked in Java for ages, mostly working with .NET for the last 16 years and, unfortunately, the same problem has to some extent bled into the .NET ecosystem too.

A few years ago I contracted at a place where the "micro"-service team I was assigned to had this codebase where they'd clearly taken the OSI 7 layer reference model to heart and applied it to a domain where customer details were collected and recorded. I've nothing against layered architectures, and have made use of them many times in appropriate circumstances, but this was awful: one of the most needlessly complex codebases I've ever worked with, and incredibly discouraging to work on because it was so hard to actually achieve anything. There were fully three or four layers in the middle that did nothing but call the next layer down. The quantity of boilerplate was extraordinary. To add one method that did anything of substance you'd actually have to add between five and seven methods, most of which did nothing but call the next layer. Ridiculous.

Still, that doesn't change the fact that the .NET languages, runtime, and base framework are excellent, and that sadly being excellent is no antidote to misuse. Same applies to Java.


That's true. I also used to be a .NET guy in the past, but I started doing more games (and then frontend) when the movement from Rails-ish to Java-ish MVC started.

The thing about the multiple "layers" that don't do anything really bothers me too, because they are a misconception of how those complex architectures (Clean/Hexagonal/Onion) really work...

Instead of having mandatory layers, those should be pluggable. Just having a layer calling the next one is unnecessary, and some people implement it by having the next layer as a transitive dependency, which makes testing harder and has zero benefits!


> The thing about the multiple "layers" that don't do anything really bothers me too, because they are a misconception of how those complex architectures (Clean/Hexagonal/Onion) really work...

> Instead of having mandatory layers...

C# guy here.

I don't think things were ever as bad in the dotnet world as they are in Java, but I do still come across a lot of what you're describing here. Thankfully though, a lot of devs do seem to have "awakened" - it feels like there is a lot less cargo-culting of "best practices" such as layers, interfaces and abstract classes for everything, tests so full of mocks you can't see anything being tested etc.

C# is a fantastic language, but as with any OO language there are lots of abstraction-related traps to fall into.


For me, Java’s ties with Oracle and the nightmare stories about complicated `MetaAbstractBaseClassFactoryClassFactory` are why I seek alternatives, or would be dismissive.


Just noting that the abstraction stuff is mostly a consequence of the CORBA-derived, over-engineered "Enterprise Java" space, and provided you stay out of that tar-pit, and choose your libraries/dependencies wisely, Java is really nice to work with.

Even if you need to implement some kind of "Enterprise Java" app, you can do so with much better libraries and tools than back then, that do not suffer from the excessive abstraction problem.


Hey you leave CORBA out of this!

My first programming job was a pilot study for porting a platform from old and busted CORBA to the new hotness, J2EE. It was embarrassing how much worse than CORBA J2EE was.


I still see factories on a daily basis. They are a useful design pattern that is utilized in Java.

My anecdotal evidence is that I have never seen the over-engineered "Enterprise Java horrors" OP is talking about despite working in the Java EE (now Jakarta EE) space.

I suspect it's a story from the times of J2EE, or something similar.


> I still see factories on a daily basis. They are a useful design pattern that is utilized in Java.

A separate factory type means you have to write twice as much code for no real benefit. In most languages you'd just use a first-class function (and in post-8 Java you can do the same: rather than a FooFactory you accept a Supplier&lt;Foo&gt; and people can pass Foo::new. It's still more cumbersome than in most languages though). Or, in a lot of other cases, the factory is just a clunky way to achieve named arguments.

> My anecdotal evidence is that I have never seen the over-engineered "Enterprise Java horrors" OP is talking about despite working in the Java EE (now Jakarta EE) space.

Have you worked on a reputable codebase in a low-overhead language like Python or Ruby? If you don't recognise factories as bloat then you may well miss the other cases (famously, the majority of the Gang of Four patterns can just be replaced by passing a function).


> A separate factory type means you have to write twice as much code for no real benefit.

Ah! There's the confusion. What I meant was I see factory methods in code we consume on a daily basis, not that we write the full factory objects. A number of Java projects have static factory methods that provide the interface implementation instance based on your configuration.

Would you still object to this kind of design?


If you're actually using that configurability (i.e. your method actually instantiates different implementations in different cases) then no - that's the same thing you'd do in any language. If you're pre-emptively defining factory methods that actually just call the normal constructor then yes (a lot of Java bloat is like that - see also getters and setters on every field for the sake of the 0.01% where you actually want to do something other than just read/write the field).


It's not Java's fault it got bought by Oracle (and I don't agree that it is bad, as Oracle has advanced it quite a bit).

And I do like 'MetaAbstractBaseClassFactoryClassFactory' type names because that allows me in an application with hundreds if not thousands of classes to find the class I'm looking for very fast by just typing a few keywords into my IDE.


> Out of curiosity, as a Java enthusiast, I was wondering if you could give examples of what you feel is wrong?

I think there's a lot of criticism for Java's language design. It involves an awful lot of boilerplate, and is generally very verbose. It's also a language that forces you to use OO, and OO has received a lot of pushback over recent years - so that approach has become very unpopular.

Personally, I also think the use of so many design patterns is an attempt to compensate for what the language lacks, its reflection capabilities are flawed etc.

I don't hate on Java. For many years it was my main language. It has awesome tooling and the JVM is incredible. But I do agree with most of the criticism.

I've been learning Clojure recently and Rich Hickey's talks often begin with some motivation including criticism of Java and OO more generally, here's one such video: https://www.youtube.com/watch?v=VSdnJDO-xdg


> In my HN browsing I find Java is rarely actually discussed here, though often dismissed. I don't know why.

Personally, I feel like Java is not really as hated as some people make it out to be. It's a stable language that very few people choose for their "cool side project". At the same time, it has excellent tooling, mature and prod-ready open source frameworks, and backing of some giant companies.

This is not to say that Java is perfect. I think most dissatisfaction comes from students or junior people who are baffled by the complexity of Maven/Gradle configurations, strict project structure (where a class can be 10 directories deep), and Java's insistence on boilerplate (which newer Java releases keep chipping away at; those releases, however, are quite rare in production: I'm starting to see Java 11 here and there, but the majority of projects I've seen run 1.8).


What is a "blub issue". I tried a slang dictionary but couldn't find anything. Thanks.


https://wiki.c2.com/?BlubParadox

The basic idea being that you know a tool that's obviously superior to a lot of alternatives, but you're blind to how different alternatives are superior to it because you can't grok the power beyond their initial "weirdness".


What's the blub problem?


It is described here, specifically in the "The Blub Paradox" section:

http://www.paulgraham.com/avg.html

My understanding is that everyone understands the features and ecosystem benefits of the languages they work with, but not necessarily those of other languages. As a result, the value of their opinions on other languages may be mixed.


Thanks!


>Author seems to think the goal of SPAs was to simplify web dev, but it’s actually to allow you to build fully featured, highly interactive, apps in a browser.

Typical hipster web dev nowadays! Go back to the basics.


One entry in this space that doesn't get a lot of attention is ASP.NET Blazor [1]. Blazor gives you the option of writing views in C# that will actually compile to WebAssembly and run in the browser, or run on the server and send DOM updates over a SignalR connection, a lot like LiveView.

[1] https://docs.microsoft.com/en-us/aspnet/core/blazor/


Yup. Been using it for a very complex data app. Its awesome.

You can choose Wasm for LOB apps or on fast connections.

You can choose Server hosting for apps that have about 100,000 users at a time. Mind you, that is 100,000 users concurrently. Which should be more than enough for most apps.

There are plans to reduce bundle size and server side resources, coming in .NET 5.

As per Microsoft benchmarks[1], it should cost about 100 USD per month (3 years, reserved instances, paid upfront) to handle 20,000 concurrent users. (This price does not include database, storage, etc.) That comes to about 0.18 USD per user for 3 years (the total cost of serving the app to one user over 3 years). Which seems pretty reasonable, especially if you have a decent, paid app. The cost comes down even more if only a percentage of your users are online concurrently.

On Digital Ocean, a similar machine would cost about 40 USD a month. Over 3 years, the total cost per user would be 0.072 USD.

[1] - https://devblogs.microsoft.com/aspnet/blazor-server-in-net-c...
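Spelling out the parent's arithmetic (figures taken from the comment, so treat them as rough):

```javascript
// Azure, per the cited benchmark figures: $100/month handles ~20,000
// concurrent users, amortized over a 3-year reservation.
const azurePerUser = (100 * 36) / 20000;   // = 0.18 USD per concurrent user over 3 years

// Digital Ocean comparison from the comment: ~$40/month for similar hardware.
const doPerUser = (40 * 36) / 20000;       // = 0.072 USD per concurrent user over 3 years
```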


Blazor is certainly a very interesting and promising piece of technology. However, the Blazor Server hosting mode is prone to latency issues and requires an always-on connection to run an application (so, no offline mode). On the other hand, the alternative Blazor WebAssembly mode requires clients to download a sizeable mix of .NET runtime and other system DLLs on the first use of the application (even a lightweight demo application with almost zero app-specific resources requires a download of 6+ MB of data in DEBUG mode and 2+ MB in RELEASE mode). Of course, the relevant Microsoft teams are working hard on further minimizing the size of the system bundle, but there are obvious limits to efforts in this regard.


Is offline mode widely used? I remember it being released to great excitement and then I never heard about it again. I assumed it died out when being offline became too much of an edge-case for your average user to be worth dealing with.


I have no idea about how widely used the offline mode is, but I'm curious to find out, should people run across relevant statistics.


I would describe it as more of a "tail end" case, a small percentage of use cases, but a long list of users.


I believe the runtime is under 1 MB before compression. As you stated, they're working hard to reduce that size.

At the same time, while not condoning it, I just scrolled to the bottom of Amazon's front page and downloaded over 30 MB.


I think that "under 1 MB" represents the size of .NET runtime proper. However, additional required system DLLs increase the download size of a minimal application to the numbers I cited above [1]. Anyway, your example of Amazon's front page is interesting and is a good point (even though, in my quick test out of curiosity, relevant download size resulted in 2.2 MB [not authenticated] and 9.2 MB [authenticated] - quite a bit less, but still ...).

[1] https://blog.ndepend.com/blazor-internals-you-need-to-know


2 MB should not be an issue nowadays. Considering the 5G rollout and substantial coverage in most mature markets in, say, 2-3 years, 2 MB is acceptable.


Remember, that 2+ MB is the download size of a bare-bones application. Relevant sizes for real-world applications, obviously, would be bigger (though, depending on the total size, the Blazor part might or might not be essential).


Download sizes are definitely a problem but they’re not really a Blazor-specific issue these days. Also I would be curious to learn about the relative density of JavaScript bloat vs Blazor download sizes because the .NET class library is much more feature rich before adding dependencies and I could imagine bundle sizes actually being smaller for similar functionality above a certain point. But you’re right that it’s probably never going to get as small as a pure HTML + CSS app with progressive enhancement and minimal JavaScript.


Fair enough. Thank you for sharing your thoughts.


While this may be true in most first-world countries, 2 MB is still a lot in developing regions (and sadly also in Germany)


Are the latency issues specific to Blazor Server or are they inherent in any framework that uses this pattern, like LiveView?


It doesn't seem like there's any major difference so it's interesting that latency is perceived as being a reason not to use blazor server in production but not as such with liveview.


The audience for Blazor Server is probably an order of magnitude larger than the audience for LiveView, and it's mostly not the Hacker News crowd that needs to be won over, but the large enterprise with half a million lines of Web Forms that has grown into an unmaintainable behemoth.


Inherent in any framework that uses a connection to transmit diffed DOM nodes.


yeah, that's what I thought. it's certainly the achilles heel of this technique.


Last time I looked it couldn't run in the browser, and there was no ETA, has that changed?


Yes it runs in the browser. I have several clients with Blazor webassembly apps running in production right now.


Nice, if you don't mind me asking, what sort of apps are they? I'd like to use Blazor for my next project


And best of all, Server and WebAssembly have a lot in common, so it is possible to make that movement as needed. (I often prototype in Server, as it's a lot easier to debug, too!)


If you are using Blazor Server, you don't have to even think about REST APIs. Depending on your app, that might be good or bad. But if you primarily target browsers on Desktop, you can tremendously improve productivity, because you can directly use C# POCO Models for UI and backend data services.


Blazor currently has to ship its runtime over the wire so it's not an option for anything bandwidth constrained.


Not if you use Blazor Server.


ASP.NET core is still all the same over-engineered enterprise baggage you see in the Java world.


It's not. You can literally have a simple "hello world" app in a handful of lines of code if you want.

I have to wonder if you actually use ASP.NET Core yourself, because ASP.NET Core 3 is, IMO, fantastic. ASP.NET used to be a bit clunky, and lacking in extensibility points, and a lot of people used alternatives like Nancy instead. Nancy officially stopped development around a year back, largely because dotnet devs just don't need it anymore. I was a long-time Nancy fan myself, ASP.NET Core 3 has all the best bits and more.

And you don't need to stick with the typical paradigm on controllers in one folder, views in another etc - feature folders work great. Hell, you don't even really need to use controllers, if Razor Pages are your thing.


It smokes Spring in benchmarks, often by a factor of 10x or more; on plaintext it's 50x faster than Spring and competitive even with the fastest Rust/C++ frameworks, so it must be some awfully light baggage.


Admittedly I don't have a lot of experience with enterprise web development in Java but I love working with ASP.NET Core and I think it's come a long way since the mess of Web Forms and classic ASP.


What makes you think so? I'm actually curious


On the spectrum of client-server rendering, I am leaning very far into the server-side philosophy.

Right now, we use Blazor with server-side rendering for our internal dashboards. This feels almost perfect to me. There are still some rough edges and I still have to ultimately deal in terms of HTML/JS/CSS.

I've got a side project that takes the Blazor server-side concept to the absolute extreme. But, I haven't had much time to work on it this year considering... Hoping to get back into the crusade in early 2021. Looking to pilot it as a replacement for one of our Blazor server-side apps in 2H 2021. One of the bigger objectives is to develop a web application that is perfectly auditable and secure as possible. If you render final client views on the server, you can DVR precisely what each client is experiencing. The client source can be ridiculously lightweight and betrays nothing regarding the business. My prototype currently serves a ~50kb single-file HTML payload that runs the entire show for each client. I am looking at things as deep as server-side rendering of the mouse cursor. Looking at each client as a simple event stream from the server's perspective is a really neat way to organize this problem.


My experience with LiveView has been similar. I can highly recommend looking into Tailwind (and Tailwind UI). For me, it solved some of the pain of still having to deal with CSS and various preprocessors for it.

I can now create entire working apps using just Elixir code and the utility-style Tailwind classes within my server-side templates. I still need to write some js and css, but where LiveView reduced the need for custom js to a minimum, Tailwind did the same for css.


> This feels almost perfect to me

This. This is the sentence I am going to use to describe Blazor Server from now on.


My goal is to write a demo app with:

- Rails

- StimulusReflex

- view_component (https://github.com/github/view_component)

- Web components

With these 4, it divides responsibilities very cleanly/pragmatically: Rails is your app framework. view_component is how you divide your view into an organized/flexible structure. StimulusReflex is the "reflexive" bridge between the two.

What happens when you need just a sprinkle of javascript to re-render a component when websockets/StimulusReflex are too slow (e.g. user interacting with a color picker or something)? You could use Stimulus.js to sprinkle this interactivity... BUT you might end up with duplication between your view_components and your Stimulus.js controller. If you use web components, then you can follow the open/closed rule: your view_component can only interact with your custom web component, and then your web component knows how to render/re-render those custom bits. So it doesn't matter if that web component is rendered from initial page load, StimulusReflex, or rapid js events (before they are throttled to StimulusReflex), it all goes through the web component "front door".


If you’re going to use view components, maybe check this out: https://github.com/unabridged/motion


I've been thinking about this setup a lot but never have time to try it. Is there somewhere I can follow to be notified when you accomplish this?


I've had great success with the Turbolinks + Stimulus approach. There are a couple of common patterns that you'll reach for, namely, lazy loading content (basically a <div> with a URL attribute that you have Stimulus load via AJAX) and really leaning into Rails remote-link / server javascript responses for modals and little page updates.

It's so great to still be super productive and be able to crank out several pages of an app in a few hours, vs. most of the React / SPA codebases where you might spend the whole day on one little component.


This is my go-to as well, and it really has worked out quite well for me. Everything feels super responsive and I find it very easy to create reusable stimulus controllers.

I come from the "js sprinkles" approach that rails has always favored, and this feels like a logical next iteration. I sometimes wonder why Basecamp doesn't publicize Stimulus a bit more; I really only learned about it at rails conf. It feels almost like it could be a part of rails itself, and it's the kind of thing that is useful for almost any full stack rails app that does server side rendering.


I am using the combination too. You really get a very near SPA feeling with a lot less effort.

BUT Basecamp libraries (Turbolinks, Stimulus) are horrible open source libraries. There haven't been any changes or bug fixes for months now (for both), and nobody knows what the actual state is: whether they are abandoned, or whether Basecamp is working on something new. Then hey.com was released and people found new features in both frameworks, so one day they will release (again) completely reworked versions of these libraries they have been working on in their private repositories.

Both libraries are really good together, but they would be better if developed more openly by the community. They are basically abandonware the day they are released, until one day a complete rewrite and major version bump is released.


Curious how you would implement a refresh of a single row in a table after a job has completed? I've enjoyed this combination too, but I feel like you'd have to subscribe to multiple ActionCable channels to do this?


StimulusReflex can do something like this quite easily. It re-renders the entire page (suggesting that you do a lot of fragment caching so this is fast) and then diffs it on the client side with morphdom (IIRC). I believe you can do partial rendering now, but I haven't tried it.
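As a toy sketch of that re-render-and-diff idea (the function name is made up, and this is not morphdom's real API, which works on DOM trees rather than strings): compare the old and new server-rendered fragments and keep only the ones that changed, so the client touches as few DOM nodes as possible.

```javascript
// Toy illustration of the re-render-and-diff idea: the server re-renders
// every fragment, and the client computes the minimal set of patches by
// comparing against what it already has.
function diffPatches(oldFragments, newFragments) {
  const patches = [];
  newFragments.forEach((html, i) => {
    if (html !== oldFragments[i]) patches.push({ index: i, html });
  });
  return patches;
}
```

So for the single-row-refresh case, only the row whose job just completed produces a patch; everything else is a no-op on the client.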


The fundamental problem is not that the SPA pattern is bad, is that it takes a lot of skill and effort to make a proper SPA. Obviously, skill and time are scarce resources and the result is that most SPAs are crap.

OTOH all these component based frameworks have definitely brought us a much better way to produce interactive experiences compared to the jQuery days. This is not related to SPAs at all. You can use React/Vue/etc in a multipage application. The problem is that to hydrate (make interactive) the server rendered HTML you now need to duplicate your markup between your server language and front end framework. The solution is to use the same framework in the backend and the front and write the components just once.
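A minimal sketch of the "write the component once" idea in plain JavaScript (counterView is a hypothetical component, not from any particular framework): the same template function produces the server's initial HTML and the client's re-renders, so the markup is never duplicated between backend and frontend.

```javascript
// One template function shared by server and client. The server calls it
// to produce the initial HTML; after hydration the client calls the very
// same function to re-render on state changes, so the markup is written
// exactly once.
function counterView(count) {
  return `<button data-action="increment">Clicked ${count} times</button>`;
}

// Server (sketch): embed the initial render in the page.
const initialHtml = counterView(0);

// Client (sketch): on click, re-render with the new state, e.g.
//   button.outerHTML = counterView(count + 1);
```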


No, it doesn’t take that much skill. I think this is the source of the problem.

The stupidity and hostile fear of originality I detect in these comments is a very real reflection of my professional experience. This dependence on frameworks and inability to imagine anything beyond the common SPA really scares the shit out of me, knowing I will be coming home from a military deployment soon and returning to a corporate world that is scared to write code.

Whenever non programmers ask me what programming is like I tell them it’s full of the most insecure people you can ever meet. People are afraid to do their jobs, need a framework for everything, and always talk about how hard it is (when it isn’t).


> No, it doesn’t take that much skill.

That is a pretty bold statement to make.


Not at all. It would only be a bold statement if it weren't so immediately validated in practice. A developer can write code or they can't.


> A developer can write code or they can't.

Uhh, that is not how it works. I can write code, but if you tell me to go and write in assembly or C, I can't, because I think it's outside my circle of competence.

Maybe you find SPAs easy; others don't.


I have been writing software for 20 years. It's exactly how it works. You can, after a necessary onboarding period, accomplish the task you were hired to accomplish or you can't.


Agreed. This is indeed how it works. Trends in company culture somewhat lend newer programmers the perception that they hold competency beyond what they do, which contributes to the overall problem of software being hard and the industry being unstable.

I used to not be able to code, and now I can. By that I mean I used to not grok the purely abstract domain of encoding meaningful computation and only the concrete domain of when I type a certain sequence of keys in ${languageX} and press some other stuff, stuff happens.

Now that I better understand the abstract domain that the concrete domain of programming languages translates to, I am much better able to pick up new tools and understand whether or not they should be picked up.


> No, it doesn’t take that much skill. I think this is the source of the problem.

It does take skill to make a proper SPA, not to make an SPA.


At what point should a developer be expected to develop? How much incompetence is acceptable before it becomes too much in a high paying job?


Recently I got to work with hybrids.js (web components), which makes the hydration part nice, because it's normal markup with whatever data you have on your SSR pages. It simplified things quite a bit.


Very interesting! Thanks for the recommendation.


> The fundamental problem is not that the SPA pattern is bad

While it's not bad, SPA forces you to do view validation twice, once on the backend and again on the frontend. (Generally)

Let's say that in HN not everyone can view the upvote score of a comment. With classic server rendering, you can put the check in the view directly (though not recommended, it's sometimes useful to cut corners) and it's guaranteed that those users won't see the score.

OTOH you can't do that with SPA, since you can easily sniff the content via API response.
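A sketch of that difference in plain JavaScript (a hypothetical render function, not anyone's actual code): with server rendering, the permission check runs while building the HTML, so the score simply never appears in the response for users who aren't allowed to see it.

```javascript
// Server-rendered view: the check happens before the HTML is built, so an
// unprivileged user's response contains no score at all — there is no API
// payload to sniff in the browser's network tab.
function renderComment(comment, viewer) {
  const score = viewer.canSeeScores ? ` (${comment.score} points)` : "";
  return `<div class="comment"><b>${comment.author}</b>${score}: ${comment.text}</div>`;
}
```

With an SPA, the equivalent check has to live on the API side, because anything the JSON endpoint returns is visible to the client regardless of what the UI renders.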


This discussion has happened a million times on HN and this is the most succinct explanation I’ve seen so far. Bravo


Well, we could go back to the original model for the web, using REST & HATEOAS without arguing or even really thinking about it, by building hypertext-based applications.

Of course, the hypertext we have, HTML, wasn't completed, and it leaves a lot to be desired. I have tried to fix that with htmx:

https://htmx.org


Hey, this is super cool. Thanks for building it!

One quick question: How should I be handling server side errors with this? Like, normally I’d check the status code coming from the server and throw up an error modal if it’s an error - what’s the equivalent mechanism here?


htmx triggers a bunch of different events based on various request life cycle events.

For server errors you'd use htmx:responseError:

https://htmx.org/events/#htmx:responseError

You could write some code like so:

   htmx.on("htmx:responseError", function(evt) {
     GrowlNotification.notify({
       title: 'Request Error',
       description: evt.detail.xhr.responseText
     })
   })
And that would show a growl notification in the event of a request error. Or however you wanted to display it.


I haven't looked into htmx much yet, but am using its predecessor intercooler-js, and this is my code for handling server responses and displaying an error message when it's needed: https://gitlab.com/tildes/tildes/-/blob/master/tildes/static...

I wouldn't consider it particularly elegant, but it's straightforward and has worked well. It uses the "complete.ic" event to trigger whenever an intercooler request finishes, and then uses the response status/code/text to display an appropriate message.



Stimulus Reflex https://docs.stimulusreflex.com/

Laravel Livewire https://laravel-livewire.com/

Phoenix LiveView https://github.com/phoenixframework/phoenix_live_view

That's where I see a lot of potential. With Stimulus Reflex I have the convenience and development speed of Rails and I can enhance it with StimulusJS sprinkles and reflexes to create great experiences for the user. The interactivity I get is enough for 90%+ of the webapps out there and the complexity to do it is far less.


I am quite a fan of HTMX and am using it for multiple projects now. I hope it continues to gain traction.


How do you handle things like menu pop-ups and toggling/hiding content? I keep wanting to use htmx and libraries like it, but there always seem to be very common tasks, like having an expandable menu on mobile, that don't have a great solution in these libraries, and I have to write vanilla JS.

I've settled on Alpine instead for the time being because it has some data management built in. I'm thinking I'm going to have to switch to Vue on my current project because I feel so much less productive with Alpine.


htmx is focused on increasing the network-oriented expressiveness of HTML, rather than on pure front end enhancements.

For something like menu popups or toggling content on the front end, I would expect an application to use a front end framework like bootstrap, or perhaps WebComponents, or a scripting solution like Alpine, in conjunction with htmx.

Alpine and htmx complement one another nicely, particularly since htmx 0.2.0, when we started firing kebab-style event names.


Ah for whatever reason I always saw this as an either-or scenario, not in tandem.

That gives me hope as I’ve wanted to use htmx on a project. Gonna start playing around with it tonight. Thanks!


Basically, you can toggle visibility on your pop up menus or modals with standard CSS/HTML. The visibility class/attribute can be controlled declaratively or imperatively without much JS. You can even use CSS transitions to add nice smooth animation to your menu/overlay.

#UseThePlatform


Me too! As a developer with mostly desktop software development experience, htmx is so much more straightforward to understand and easy to code. It's a perfect choice for me to implement pages such as 'License Upgrade', 'License Renew', and so on: https://docxmanager.com/miscpages/upgrade-to-standard-from-b...


Do you have any examples or favorite articles about using it? I'm exploring the space, but have really limited time to sit down with all of the different options today. HTMX is one that is very intriguing.


I'm building an app with a TALL stack now (Tailwind, Alpine.js, Laravel, and Livewire) and I am incredibly productive. Very little build step required (to compile Tailwind to reduce the size based on which classes are used in .blade.php files). CRUD, Image uploads etc are so easily done I am such a fan. I was skeptical at first, but now I love this way of building web apps. No idea how well it scales, but for a simple MVP I couldn't have asked for a better stack.


Seriously, after all these years of the JavaScript, SPA, React, Redux craze, we're back at PHP, CSS and minimal JS all over again. None of those new JS frameworks allow you to build custom web apps faster than Laravel or Rails. I'm really curious about what kind of side projects all these people are building with e.g. Next.js alone. Is there any kind of web app that doesn't need authentication, authorization and database access?


I write full SPAs at work, professionally. We use React, Apollo, GraphQL, Webpack etc. the TALL stack is such a breath of fresh air. I can't even begin to explain my joy. Just joy.

I now dread the time I have to write those infinite lines of JS, conflicting dependencies, slow build times, React hook state management etc.

When I speak with my colleagues about tech, they all seem to love the entangled mess of JS dev and willing to jump on any new framework that gets released. I always push for good old reliable SSR. Often get called the weird one for being young, yet favoring old school tech, though.

I guess I prefer a better developer experience and shorter time to production/market, rather than spending days trying to setup a project and figure out weird quirks and issues with such a complicated mess of a "serverless, modern day web application".


Regarding auth and db, the ones I've spoken with that prefer JS way of doing things like to combine a bunch of existing offerings into one, eg Auth0 for Auth, Prisma for DB and so on. The more potential points of failure, the more attractive it seems to them.

When saying that Laravel/RoR gives you all that by running one simple command, I get blank stares. Hard to believe, I know.


We've come to the point that being able to run code and render HTML on a server is considered a new feature (aka SSR and serverless functions). I recently watched the Next.js conf and I couldn't help but giggle.

-Do you want functions?

Use our proprietary platform

-Do you want to store content?

Use cloudinary, aws

- Authentication?

Auth0,firebase

- Database?

Use FaunaDB and our super cool new query language that nobody knows or cares about.

> Congratulations. You've built your new webapp on Jamstack. Now you have to manage large bills across hundreds of 3rd party services, vendor lock-ins. Also good luck trying to reproduce all that on a development machine or organize your code.

On the other hand you can just: laravel new project-name --jet and deploy on a single linux machine or heroku and you get:

-Robust and customizable Auth, password reset, 2fa

-A serious db like PostgreSQL and an orm

-SSR by default with 0kb bundle size!

-Any css tool you need

-Easy APIs, tokens and permissions

-Truly open source.You have full control of your code and data

So yeah it's just a command but yikes, who uses PHP in 2020, right?


Exactly! FaunaDB was the recent topic of discussion and I was like oh here we go again...

> So yeah it's just a command but yikes, who uses PHP in 2020, right?

Lol yeah, but PHP 8 is looking really nice


> Now you have to manage large bills across hundreds of 3rd party services, vendor lock-ins.

A nightmare.

That said, I don't see any major problems in using Next with a monolithic BE. It's a viable tool to get things done.


The problem is then you'd have to use PHP or Ruby. Much as people say they've improved, they're not better than TypeScript. I wish someone made something like Laravel for TS. Sometimes I look at Laravel and think, sure it's great that they did all of that and are even making a bunch of money, but why did it have to be PHP of all languages?


PHP and Ruby are both better than half-assed typing on a single-threaded language.

(I only write Elm and Elixir nowadays)


There is AdonisJS which is a Laravel clone but in JS. I'm not sure about TS support, though.


Rails doesn't give you auth* though.


Ah, sorry. I don't have much experience with RoR. Laravel, mostly, and Laravel auth is one command away. I assume RoR won't be too far away from that, too.


I can imagine people thinking Devise is Rails' own auth, given how popular it is.


Well, whenever I did Rails, that was pretty standard. Laravel's baked-in auth is probably why it's so much better than Rails; that, and queues, Telescope, etc., all the nice-to-haves that come standard in Laravel but are extra in Rails apps.


I agree on this. But you could add a few gems to Rails and it comes out on par with Laravel.


For all the speed-ups PHP 7 delivered, Laravel typically scores lower than Django and Rails on TechEmpower benchmarks. This, and the reality that PHP roles typically pay 20% less than Ruby, Python or Node, has led me to ignore Laravel.


Techempower is neat, but there are plenty of companies making tens of millions with less than 100 rps...


I haven't built anything that requires significant performance tweaking other than some caching and SQL optimisations. Maybe PHP 8 will bring an even better performance when it gets released

> PHP roles typically pay 20% less

Yeah, true. Hence why I am a React dev professionally


I'm not saying any of Django, Rails or Laravel are fast compared with Node, ASP.Net or Spring but what surprised me was how PHP 7, which is a lot faster than Ruby or Python, somehow managed to fall behind when Laravel was added into the mix. It's as if PHP's performance gains only really apply to raw PHP or lightweight frameworks.


That's surprising to hear. Do you have any links that go into this (or show benchmarks)? I generally avoid PHP, but I've been thinking of looking into Laravel for when I do need to use PHP.


Techempower benchmarks.


Just to chime in here. Laravel is definitely not "known" for its performance.


But they never advertised being fast in terms of performance. More like fast in terms of development

