
Do we have a performance estimate? I can eat 20 or 30%, but I can't eat 90%.



This comment further down thread mentions it's 20% in Postgres. https://news.ycombinator.com/item?id=16061926


...when running SELECT 1 over a loopback socket.

The reply to that comment is accurate: that's a pathological case. Probably an order of magnitude off.
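A rough way to see why a do-nothing benchmark amplifies trap overhead (an illustrative sketch, not Postgres itself): one loop is almost pure kernel crossings, the other is almost pure user-space work. KPTI taxes every crossing, so the first loop feels close to the full per-syscall penalty while the second barely notices.

```python
import os
import time

N = 50_000

# Syscall-heavy loop: one kernel crossing per iteration.
fd = os.open(os.devnull, os.O_WRONLY)
start = time.perf_counter()
for _ in range(N):
    os.write(fd, b"x")
syscall_heavy = time.perf_counter() - start
os.close(fd)

# CPU-bound loop: no kernel crossings at all.
start = time.perf_counter()
total = 0
for i in range(N):
    total += i * i
cpu_bound = time.perf_counter() - start

print(f"syscall-heavy: {syscall_heavy:.4f}s, cpu-bound: {cpu_bound:.4f}s")
```

Run the same script before and after enabling page-table isolation and only the first number should move noticeably; "SELECT 1 over loopback" sits much closer to the first loop than to real query execution.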


We're still learning, but it looks like pgbench throughput drops by 7% to 15%:

https://www.postgresql.org/message-id/20180102222354.qikjmf7...


I've seen that message. It acknowledges the same problems: do-nothing queries over a local unix socket.

Real-world use cases introduce much more latency from other sources in the first place.

I'm sticking with an expectation in the 2%-5% range.


Yep, this is getting blown way out of proportion by all of these tiny scripts that just sit around connecting to themselves. Even pgbench is theoretical and intended for tuning; you're not going to hit your max tps in your Real Code that is doing Real Work.

In the real world, where code is doing real things besides just entering/exiting itself all day, I think it's going to be a stretch to see even a 5% performance impact, let alone 10%.


I think 5% is a reasonable guess for a database. Even a well-designed database does have to do a lot of IO, both network and disk. It's just not a "fixable" thing.

But overall, yeah.


The claim is that it's 2% to 5% in most general uses on systems that have PCID support. If that's the case then I'm willing to bet that databases on fast flash storage are a lot more impacted than this, and pure CPU-bound tasks (such as encoding video) are less impacted.

The reality is that OLTP database execution time is not dominated by CPU computation but by IO time. Most transactions in OLTP systems fetch a handful of tuples. Most time is spent fetching the tuples (and maybe indices) from disk and then sending them over the network.

New disk devices have lowered latency significantly, while syscall time has barely gotten better.

So in OLTP databases I expect the impact to be closer to 10% to 15%, up to 3x the general 2%-5% estimate.
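The disagreement in this thread is really about what fraction of a transaction is spent crossing the user/kernel boundary. A back-of-envelope model makes that explicit (all figures below are illustrative assumptions, not measurements):

```python
def overall_slowdown(kernel_fraction: float, crossing_penalty: float) -> float:
    """Fractional increase in total transaction time, assuming KPTI
    only taxes the kernel-crossing portion of the workload."""
    return kernel_fraction * crossing_penalty

# A do-nothing SELECT 1 over loopback: mostly kernel crossings.
worst = overall_slowdown(kernel_fraction=0.9, crossing_penalty=0.3)

# An OLTP transaction that mostly waits on disk and network IO.
typical = overall_slowdown(kernel_fraction=0.1, crossing_penalty=0.3)

print(f"worst-case ~{worst:.0%}, typical ~{typical:.0%}")
```

Under these assumed numbers the pathological benchmark lands near 27% while the IO-dominated transaction lands near 3%, which is roughly the spread this thread is arguing over; fast flash storage raises the kernel-crossing fraction and pushes the number up.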


> I've seen that message. It acknowledges the same problems: do-nothing queries over a local unix socket.

The first set of numbers isn't actually unrealistic. Doing lots of primary key lookups over low latency links is fairly common.

The "SELECT 1" benchmark obviously was just to show something close to the worst case.


> The first set of numbers isn't actually unrealistic. Doing lots of primary key lookups over low latency links is fairly common.

Latency through loopback on my machine is 0.07ms. Latency to the machine sitting next to me is 5ms.

We're actually (and to think, today I trotted out that joke about what you call a group of nerds: a "well, actually") talking about multiple orders of magnitude through which kernel traps are being amplified.
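The loopback number is easy to check directly. A minimal sketch (a one-byte TCP echo over 127.0.0.1; the exact figure varies by machine and kernel, and each round trip involves several kernel crossings, which is exactly what KPTI taxes):

```python
import socket
import time

# Set up a loopback TCP connection: an ephemeral-port listener and a
# client connected to it, all within one process.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()

# Time N one-byte round trips and report the mean.
n = 1000
start = time.perf_counter()
for _ in range(n):
    client.sendall(b"x")
    conn.recv(1)          # "server" receives...
    conn.sendall(b"x")    # ...and echoes back
    client.recv(1)
rtt_ms = (time.perf_counter() - start) / n * 1000

print(f"mean loopback RTT: {rtt_ms:.3f} ms")

client.close()
conn.close()
server.close()
```

On typical hardware this lands in the tens of microseconds, consistent with the 0.07ms figure above; a LAN round trip adds real wire and switch latency on top, which dilutes the relative cost of the kernel traps.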


> Latency through loopback on my machine takes 0.07ms. Latency to the machine sitting next to me is 5ms.

Uh, latency on a local gigabit net is a LOT lower than 5ms.

> We're actually (and to think, today I trotted out that joke about what you call a group of nerds--a well, actually) talking multiple orders of magnitude through which kernel traps are being amplified.

I've measured it through network as well, and the impact is smaller, but still large if you just increase the number of connections a bit.


If so, this definitely moves the needle on the EPYC vs Xeon price/performance ratio.




