
> But even for 4500 connections idle you'd still want 500 cores and a monster IO subsystem?

Entirely depends on your workload:

These days it's very common for the hot working set to fit into the database server's buffer cache - in which case there will be little IO. And even in cases where you do a lot of IO, a decent NVMe SSD can do several hundred thousand IOPS (some do more than 1M), with per-IO latencies in the mid double-digit microsecond range.
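
To make the buffer-cache point concrete, here's a rough way to check it - a minimal sketch that assumes the database is PostgreSQL (the thread doesn't actually name one) and uses a made-up connection string; pg_stat_database's blks_hit/blks_read counters are cumulative since the last stats reset:

    # Rough buffer cache hit ratio check, assuming PostgreSQL.
    # blks_hit = blocks found in shared_buffers, blks_read = blocks read from disk/OS cache.
    import psycopg2

    conn = psycopg2.connect("dbname=app host=db.internal")  # hypothetical connection string
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT sum(blks_hit)::float8
                   / nullif(sum(blks_hit) + sum(blks_read), 0)
            FROM pg_stat_database
        """)
        ratio = cur.fetchone()[0] or 0.0
        print(f"buffer cache hit ratio: {ratio:.2%}")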

And because common queries can be processed within fractions of a millisecond, often 500 busy connections can be handled by a few cores.
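
Back-of-envelope, with assumed (not measured) numbers, to show why a few cores can keep up:

    # Illustrative arithmetic only - the per-query CPU time and per-connection
    # query rate below are assumptions, not measurements.
    query_cpu_ms = 0.5                             # assumed CPU time per query
    qps_per_core = 1000 / query_cpu_ms             # ~2000 queries/sec per core
    busy_connections = 500
    queries_per_conn_per_sec = 30                  # assumed; most wall time is app/network side
    offered_qps = busy_connections * queries_per_conn_per_sec   # 15,000 qps
    print(f"cores needed: {offered_qps / qps_per_core:.1f}")    # ~7.5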




> And because common queries can be processed within fractions of a millisecond, often 500 busy connections can be handled by a few cores.

If even 500 (out of 4500) connections are busy, what's the catch? Isn't this just a lot of scheduled CPU work trying to run concurrently, when it would be faster if queued? Fast NVMe IO, or the fact that the queries take only a few ms each, are just factors that make the workload more CPU intensive (vs IO bound); the scenario would be more plausible if you had slow disks :)

I guess being bottlenecked by the network can be one scenario if your DB and app aren't close and/or your queries involve bulk data like GIS shapes...
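
A minimal sketch of the "faster if queued" idea: cap in-flight queries at roughly the core count and let the rest wait in line, which is essentially what a connection pooler does. The names and numbers here are made up for illustration:

    # Limit concurrency instead of letting 500 queries timeshare the CPUs.
    import threading

    MAX_IN_FLIGHT = 8                          # assumed ~number of cores
    slots = threading.BoundedSemaphore(MAX_IN_FLIGHT)

    def handle_request(run_query, sql):
        with slots:                            # excess requests queue here
            return run_query(sql)              # only MAX_IN_FLIGHT run at once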



