Similar experience but without GraphQL. We had server-side rendering with a Node server. Our production server became a Node farm, with Phoenix + PostgreSQL requiring less than 1 GB of RAM and Node using at least 8 extra GBs. We eventually ditched SSR: we just send the React app and wait for it to render. We're back to 1 core and (mostly unused) 4 GB. It's a business application with a complicated UI, and customers don't mind waiting a couple of seconds if they want to start from a bookmarked screen.
For a simple UI I'd generate HTML server side with eex and spare us the cost of front-end development. The front end is also a productivity nightmare: the amount of work needed to add a single form field with React/Redux is insane.
Just a quick one: why would you need Redux for forms? That's total overkill in my opinion.
My forms either have their own state or (preferred) just use Formik for all of this. In my stack, adding a field then means: add it to the GraphQL schema (backend), add it to the query, add the Formik field + Yup validation, and done.
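To make that concrete, here's a rough sketch of the Formik + Yup side (the ProfileForm component, field names and validation rules are all made up for illustration, and the matching GraphQL schema/query change isn't shown):

    import React from "react";
    import { Formik, Form, Field, ErrorMessage } from "formik";
    import * as Yup from "yup";

    // Adding a field on the front end is roughly: one line in the schema,
    // one entry in initialValues, one Field in the JSX. No Redux involved.
    const validationSchema = Yup.object({
      email: Yup.string().email("Invalid email").required("Required"),
      nickname: Yup.string().max(30, "Too long"), // the "new" field
    });

    export function ProfileForm({
      onSave,
    }: {
      onSave: (values: { email: string; nickname: string }) => Promise<void>;
    }) {
      return (
        <Formik
          initialValues={{ email: "", nickname: "" }}
          validationSchema={validationSchema}
          onSubmit={onSave}
        >
          <Form>
            <Field name="email" type="email" />
            <ErrorMessage name="email" />
            <Field name="nickname" />
            <ErrorMessage name="nickname" />
            <button type="submit">Save</button>
          </Form>
        </Formik>
      );
    }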
Some people would argue that if you're using Redux, also having local state logic is an anti-pattern.
That would mean that if you use Redux, a form also requires actions for form update/submit/success/error, and the form data should be stored in the Redux store.
That is one of the main issues I have with Redux: it adds automatic complexity to simple things. At the same time, I'm not sure it's a good idea to have a mix where some things happen through store/actions/reducers and others through local state/ajax.
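For comparison, this is roughly the ceremony the "everything in the store" approach implies for a single form, before you even get to the connected component or the async submit; all names here are invented for illustration:

    // Action types for a single form's lifecycle
    const FORM_UPDATE = "profileForm/update";
    const FORM_SUBMIT = "profileForm/submit";
    const FORM_SUCCESS = "profileForm/success";
    const FORM_ERROR = "profileForm/error";

    interface ProfileFormState {
      values: { email: string; nickname: string };
      submitting: boolean;
      error: string | null;
    }

    const initialState: ProfileFormState = {
      values: { email: "", nickname: "" },
      submitting: false,
      error: null,
    };

    type FormAction =
      | { type: typeof FORM_UPDATE; field: "email" | "nickname"; value: string }
      | { type: typeof FORM_SUBMIT }
      | { type: typeof FORM_SUCCESS }
      | { type: typeof FORM_ERROR; error: string };

    // Every keystroke and every step of the submit cycle goes through the store.
    export function profileFormReducer(
      state: ProfileFormState = initialState,
      action: FormAction
    ): ProfileFormState {
      switch (action.type) {
        case FORM_UPDATE:
          return { ...state, values: { ...state.values, [action.field]: action.value } };
        case FORM_SUBMIT:
          return { ...state, submitting: true, error: null };
        case FORM_SUCCESS:
          return { ...state, submitting: false };
        case FORM_ERROR:
          return { ...state, submitting: false, error: action.error };
        default:
          return state;
      }
    }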
> Some people would argue that if you're using Redux, also having local state logic is an anti-pattern.
I won't disagree that this is a popular opinion, but there's little practical benefit to storing state that's truly local to a single component (or a very small tree) in Redux just because it's there.
I don't see how you can blame something for adding complexity based on what other people _think_ is an anti-pattern.
In fact, most of the time you don't want to update your store before you know the data has been validated anyway. The store should always be the source of truth, but that also means it should be valid.
That's the approach I am going with in any case when working with some kind of global state.
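A minimal sketch of that split, keeping the possibly-invalid draft in local state and only dispatching validated data to the store (the validator, hook and action type are invented for illustration):

    import { useState } from "react";
    import { useDispatch } from "react-redux";

    interface Profile { email: string; nickname: string }

    // Invented validator; in practice this could be a Yup schema.
    function validateProfile(draft: Profile): string | null {
      if (!draft.email.includes("@")) return "Invalid email";
      return null;
    }

    export function useProfileDraft(initial: Profile) {
      const [draft, setDraft] = useState(initial);        // local, possibly invalid
      const [error, setError] = useState<string | null>(null);
      const dispatch = useDispatch();

      const save = () => {
        const problem = validateProfile(draft);
        if (problem) {
          setError(problem);                              // invalid data never touches the store
          return;
        }
        // Only validated data reaches the global store, so it stays a valid source of truth.
        dispatch({ type: "profile/saved", payload: draft });
      };

      return { draft, setDraft, error, save };
    }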
Any idea why SSR used so much RAM? I wonder if the virtual DOM approach of React contributed substantially to it, and whether something like Svelte (https://svelte.dev/) would do much better.
Each Node instance has a 1 GB memory limit (the heap? It seems Java-like). We failed to find a way to raise it, but, again, we didn't invest much in it beyond some googling. It seems there used to be a command line option for that, but it doesn't work anymore.
Each hit to Node raises memory usage until it reaches 1 GB, at which point it throws an error and gets recycled, which unfortunately translates to a 502/503 for the client. We can intercept those errors in Elixir and retry, but it's far from ideal.
To have fewer errors we naively decided to increase the number of workers, but then we also had to increase the server's RAM. The first hit from each client gets served by Node, so eventually Node's resource usage dwarfed Elixir's. We felt like we were doing it wrong (I'm sure there is a way to get a saner setup) and decided to turn off server-side rendering. Nobody complained, and we're saving some $40-50 per month on that single server, plus our time, which is worth more than that.
I think projects with little load should run on low-tech, uncomplicated solutions: a reverse proxy and an application server were enough in the 90s for the same scenario, and they're still OK now.