
This seems to be the common advice given, but I don't fully agree. There have been many times in my career where a DB was on a VM with the storage attached via a cloud provider's block storage. When asked if we should move it to k8s, people are quick to mention that k8s doesn't do well with persistent storage. However, all of the big cloud providers make it easy to create a persistent volume in k8s that just provisions a block device, attaches it to the k8s host, and makes it available to the pod.

So in both situations you have the same IO limits of network block storage. The real question is whether the k8s persistent volume API adds enough of an IO bottleneck to cause issues; IME it doesn't.
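The pattern described above can be sketched as a PersistentVolumeClaim against a cloud provider's CSI storage class. This is a hedged example: the storage class name (`gp3`), image, and mount path are assumptions and will vary by cluster, but the shape is standard Kubernetes:

```yaml
# Hypothetical PVC: the cloud provider's CSI driver dynamically
# provisions a block device (e.g. an EBS volume) and attaches it
# to whichever node the pod lands on.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce          # block storage attaches to one node at a time
  storageClassName: gp3      # assumed class name; cluster-specific
  resources:
    requests:
      storage: 100Gi
---
# Mounting it in a pod: the kubelet attaches the volume to the host,
# then bind-mounts the filesystem into the container.
apiVersion: v1
kind: Pod
metadata:
  name: db
spec:
  containers:
    - name: postgres
      image: postgres:16
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data
```

The IO path is the same attach-a-block-device mechanism a plain VM would use; k8s only orchestrates the attach and mount.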

Now, if you want direct-attached NVMe drives for higher IO than network-attached block storage will give you, that might be easier with a VM than with k8s, but I can't speak to that much.
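For what it's worth, k8s does have a story for this case too: the `local` volume type pins a PersistentVolume to the node where the disk physically lives via nodeAffinity. A minimal sketch, assuming a drive mounted at `/mnt/nvme0` on a node named `node-1` (both illustrative):

```yaml
# Hypothetical local PV for a direct-attached NVMe drive.
# nodeAffinity ensures pods using this PV are scheduled onto
# the node that actually has the disk.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nvme-0
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-nvme     # assumed class name
  local:
    path: /mnt/nvme0               # assumed mount point on the host
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1           # assumed node name
```

The trade-off is that the pod is now tied to that node, so you lose rescheduling flexibility, which is the same constraint a VM with a local disk has.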




Given k8s' origins as a bare-metal-oriented system, attaching physical SAN volumes was there pretty early on, and it's only become more capable in that area since (including passing through devices for running exotic stuff).



