Obviously there is infinite room for optimization of this problem, but this was a fun blog post. I’m interested to see where the author goes with the series.
It's an interesting concept for a blog post, but it's ruined by a very inefficient implementation.
I would expect orders of magnitude more work to get done on ~20W.
Also, parsing a compressed 10MB JSON payload seems like an unusual request. It might be more fun to stand up a Hello World or Pet Store and get numbers that are more relatable to a regular developer.
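For context, the workload being criticized is roughly the following (a sketch in Go; the file name and the streaming-decode approach are my assumptions, not the author's actual code):

    package main

    import (
        "compress/gzip"
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        // Open a gzip-compressed JSON file (hypothetical path).
        f, err := os.Open("payload.json.gz")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Stream decompression through a gzip reader.
        zr, err := gzip.NewReader(f)
        if err != nil {
            panic(err)
        }
        defer zr.Close()

        // Decode straight from the decompressed stream rather than
        // buffering the whole ~10MB document in memory first.
        var doc map[string]any
        if err := json.NewDecoder(zr).Decode(&doc); err != nil {
            panic(err)
        }
        fmt.Printf("decoded %d top-level keys\n", len(doc))
    }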
I mean, everyone's gotta start somewhere. I'll give the author props for coming up with an idea for an experiment, documenting it, and sharing their results. Not a whole lot of people do that out in the open, and it's great to see a bunch of responses with suggestions for further improvements. :)
You must have very low expectations of your fellow HN readers if you think that anything above a Hello World tutorial isn't relatable to a regular developer.
How frequently do you test a new web framework with "10MB compressed JSONs"?
On the other hand, you can find plenty of benchmarks that are basically Hello World, testing request/response overhead or fairly small request/response sizes, because that's what most applications actually do. You can add a simple database query for a more realistic load.
So, yes, this is more relatable to me as a backend developer because I can compare results more easily.
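For reference, the minimal endpoint that kind of benchmark measures looks something like this Go sketch (the route and port are arbitrary, and the database query is left as a comment since the driver choice is app-specific):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // The classic Hello World benchmark endpoint: it measures
        // framework and HTTP overhead rather than payload parsing.
        http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
            // For a more realistic load, a single database query would
            // go here, e.g. db.QueryRowContext(r.Context(), ...) with
            // database/sql and a driver of your choice.
            fmt.Fprintln(w, "Hello, World!")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }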
I see the issue with your premise. I don't test web frameworks. I do real work <ducks>
I very much routinely look at large data sets. They just happen to be wrapped in a different container than ZIP; typically they're delivered as MOV, MP4, WAV, etc. I look at a 10MB file and try to remember the last time I worked with anything that small.