
Don't know your use case, but C++ for a REST api seems extreme overkill. Any performance benefits would most likely be nullified by network latency



> "Don't know your use case, but C++ for a REST api seems extreme overkill"

It used to be the other way around: spinning up a VM for handling a simple HTTP request, wasting many megabytes of storage, memory and burning needless CPU cycles on bloated abstractions along the way? That used to be the very definition of "overkill".


Sure, but my Dart implementation was much slower to start than I'd like for iterative development, and it was also consuming a sizeable chunk of memory.

My current version (admittedly still very early in development) starts in under a second and occupies 300MB after doing some unrepresentatively large transfers (200k records when the application will be doing more like 200 at once). The downside is that compile times are a bit higher.

I could have used something else, but like I said, I'm very familiar with C++ as it's what I've used day-in, day-out for the last 15 years.

restinio is a pretty good library. The code I'm writing in C++ isn't much more verbose than the Flutter code; in fact, in some places it has less boilerplate because it's heavily templated, so e.g. json_dto just requires a single definition to support both JSON serialisation and deserialisation.


Unless you want to serve thousands of requests per second on the toaster you just installed NetBSD on. In that case a well written C++ socket multiplexer is a good choice.

Remember that network latency only affects, well, latency. Throughput is another matter.


> Don't know your use case, but C++ for a REST api seems extreme overkill. Any performance benefits would most likely be nullified by network latency

It seems you're confusing the time a single request takes to be fulfilled with performance.

On the server, performance means throughput. It means nothing if a task handler spends ages waiting on IO; what matters is that once a task is ready to run, the code that needs to run is as lean as possible.

This means you can handle more requests with less hardware.

I have no idea how we reached a point where people think it's a good idea to download over 100MB worth of dependencies just to have a service that can handle an HTTP request, and to use multicore computers packed to the brim with RAM just to handle a few tens of HTTP requests per second.


Latency can still matter on servers; it's just that the trade-off is less obvious, given that requests may spend time in a queue waiting to be serviced, and the depth of that queue depends on throughput more than latency.


Have a look at https://okws.org



