You are running a benchmark in a virtualized environment with no guarantees about access to the underlying hardware from one moment to the next.

It's also the smallest instance type. If you had at least chosen the largest, you might have had the entire box to yourself. With the smallest instance type, however, any neighbor with a larger instance is going to steal cycles and I/O from you the second they have any load.
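
If you want to see that happening, watch the guest's CPU "steal" time while the benchmark runs. A rough sketch (my own, not from the article; it assumes a Linux guest, where /proc/stat exposes a steal counter):

    # Watch how much CPU the hypervisor is giving away to other tenants.
    # Linux guest assumed; the "cpu" line of /proc/stat is user, nice,
    # system, idle, iowait, irq, softirq, steal, ... (kernel 2.6.11+).
    import time

    def cpu_totals():
        with open("/proc/stat") as f:
            values = [int(v) for v in f.readline().split()[1:]]
        return sum(values), values[7]  # (total jiffies, steal jiffies)

    def steal_percent(interval=5.0):
        total0, steal0 = cpu_totals()
        time.sleep(interval)
        total1, steal1 = cpu_totals()
        return 100.0 * (steal1 - steal0) / max(total1 - total0, 1)

    if __name__ == "__main__":
        # Run this alongside the benchmark; a nontrivial percentage means
        # a neighbor VM is taking cycles out from under your measurement.
        print(f"steal: {steal_percent():.2f}%")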

What makes people think benchmarking on virtualized hardware is at all worthwhile? (Serious question) It's like writing a blog post about which way the wind was blowing for the last five minutes. This is all nonsense.

I ran the benchmark multiple times and I got very consistent results. I know that doesn't eliminate the VM factor, but it certainly minimizes it.
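
To put a number on "consistent" rather than eyeballing it, something like this summarizes repeated runs (a sketch; the TPS values below are illustrative placeholders, not the actual results):

    # Summarize repeated benchmark runs: mean, standard deviation, and
    # coefficient of variation (stdev as a percentage of the mean).
    import statistics

    def summarize(tps_runs):
        mean = statistics.mean(tps_runs)
        stdev = statistics.stdev(tps_runs)
        return mean, stdev, 100.0 * stdev / mean  # CV in percent

    runs = [512.3, 498.7, 505.1, 509.8, 501.2]  # placeholder TPS values
    mean, stdev, cv = summarize(runs)
    print(f"mean={mean:.1f} tps  stdev={stdev:.1f}  cv={cv:.1f}%")

A low coefficient of variation across runs is what "very consistent" should mean here.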

Also, this test wasn't meant to be scientific. I'm not selling Docker, Postgres, or DigitalOcean, just satisfying a personal curiosity.

Benchmarks are touchy things in general; I understand that.

I published the results, and I'd love to see someone either verify them or publish conflicting ones. I've tried to be as transparent as possible.


It is not nonsense.

It is not nonsense, because people really do run their applications (e.g. PostgreSQL) in virtualized environments these days (AWS, DO, Linode, Google Cloud, etc.). People still need to estimate the performance they will get in those environments, and these benchmarks are relevant to them.
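
For example, a quick way to get such an estimate with pgbench (a sketch assuming pgbench is installed and a throwaway database named "bench" already exists):

    # Estimate throughput on the box you actually rent by scripting
    # pgbench: initialize once, then time a fixed-duration run.
    import subprocess

    subprocess.run(["pgbench", "-i", "-s", "10", "bench"], check=True)  # load a scale-10 dataset
    result = subprocess.run(
        ["pgbench", "-c", "8", "-j", "2", "-T", "60", "bench"],  # 8 clients, 2 threads, 60 s
        check=True, capture_output=True, text=True,
    )
    print(result.stdout)  # pgbench reports tps and average latency here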


> What makes people think benchmarking on virtualized hardware is at all worthwhile?

Benchmarking is worthwhile. Blogging about the results as if they were objective is indeed questionable.
