
> Unless you are doing really heavy IO or CPU across multiple VMs on the same host, which is likely if you have any load worth mentioning. There is a 20% to 30% difference between running one process per VM and running 8 processes on bare metal on an 8-core machine. We benchmarked it on known-good configs optimised to bits.
>
> Either the hypervisor scheduler is shit or the abstraction is costly. I reckon it's down to the reduction in CPU cache availability and the IO mux.

We were running heavy loads and there was nothing like a 20-30% hit. I'm not saying you didn't see one but this isn't magic or a black box: we had a few spots where we needed to tune (e.g. configuring our virtual networking to distribute traffic across all of the host NICs) but it performed very similarly to the bare metal benchmarks in equivalent configs.
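To make the comparison concrete, here is a minimal sketch of the kind of CPU-bound scaling test being described (this is an assumed setup for illustration, not either commenter's actual benchmark): time N parallel compute workers, then run the same script as one worker inside each of 8 VMs and compare wall-clock times.

```python
# Sketch: time N CPU-bound workers to compare scaling on bare metal
# vs. inside VMs. Hypothetical benchmark, not the original one.
import time
from multiprocessing import Pool


def burn(n: int) -> int:
    # Tight integer loop; keeps one core busy with no IO.
    total = 0
    for i in range(n):
        total += i * i
    return total


def run_workers(workers: int, iterations: int = 2_000_000) -> float:
    """Return wall-clock seconds for `workers` parallel CPU-bound tasks."""
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(burn, [iterations] * workers)
    return time.perf_counter() - start


if __name__ == "__main__":
    # On an 8-core host, compare this run on bare metal against the same
    # script run as a single worker in each of 8 VMs; a consistent gap on
    # the VM side would point at hypervisor scheduling or cache contention.
    print(f"8 workers: {run_workers(8):.2f}s")
```

Note that a pure CPU loop like this isolates scheduler and cache effects; IO-heavy paths (network, disk, SAN) need their own benchmarks, since each goes through a different virtualisation layer.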

What precisely was slower, anyway: network, local disk, SAN? Each of those has significant potential confounds for benchmarking.



