> I've not done stuff with kubernetes yet though, so I have no idea how it's done there.
Essentially the same, except that K8s gives you a wide variety of storage backend integrations (Storage classes + storage providers) which can attach "anything" (local volumes on the node, NFS, NAS, Cloud Volumes, ...) depending on your local environment and needs.
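To make that concrete, here's a rough sketch of how a workload asks for storage without caring what's behind it. The PVC just names a StorageClass; the class name `fast-ssd` is a placeholder for whatever your cluster's provisioner exposes (local volumes, NFS, cloud block storage, etc.).

```yaml
# Hypothetical PVC; "fast-ssd" stands in for whatever StorageClass
# your environment's storage provider actually offers.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pg-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
```

The pod then mounts the claim by name, so swapping the backend is a one-line change to the StorageClass, not a change to the workload.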
A lot of people running on-prem k8s clusters have block storage. When I worked on OpenShift, it wasn't uncommon for people to run databases in the cluster, backed by their block storage.
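The usual pattern for an in-cluster database is a StatefulSet with `volumeClaimTemplates`, so each pod gets its own block-storage volume that survives restarts. A minimal sketch (names, image tag, and the `block-storage` class are assumptions, not anyone's actual setup):

```yaml
# Sketch: single-replica Postgres StatefulSet backed by block storage.
# "block-storage" is a placeholder StorageClass name.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: block-storage
        resources:
          requests:
            storage: 50Gi
```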
If you're running in the cloud, say on AWS EKS, it can make sense to use in-cluster databases for development environments and reserve RDS databases for production/integration, to save on hosting costs.
There is a huge push for doing that. Whether it's the right thing in the abstract is debatable, but many IT departments have decided to standardize on Kubernetes for all datacenter management and are pushing that way, and in some environments (5G networking) it's part of the specified stack.
Saving it on one host's local filesystem doesn't feel particularly production-ready. There is a distributed store system for Kubernetes called "Longhorn" that I've heard good things about, but I haven't really looked into it much myself. I just run a pair of VMs with a manual primary/replica setup and have never needed to fail over to the replica yet, but I can imagine some sort of fully orchestrated container solution in the future.
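For reference, Longhorn's replication is configured through StorageClass parameters, so volumes are synchronously replicated across nodes rather than pinned to one host's filesystem. A hedged sketch (the class name is made up; `numberOfReplicas` and `staleReplicaTimeout` are real Longhorn parameters, but check them against the version you deploy):

```yaml
# Longhorn StorageClass sketch: each volume is kept as 3 replicas
# on different nodes, so losing one host doesn't lose the data.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
reclaimPolicy: Retain
allowVolumeExpansion: true
```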
I'm just pointing out how it's commonly done. Of course people add things like replication, distributed filesystems, (etc) to the mix to suit their needs. :)
Yep, it seems like the most common answer is “pay exorbitant prices to your cloud provider for a managed SQL database”, but we’ve managed to save a chunk of money running it ourselves. I’ve always said that between three engineers (me being one of them), we can form one competent DBA, but our needs are also pretty modest.