* Run Ceph on separate nodes and connect it to the cluster. With Juju, you can do that from the bundle, as Ceph is also a supported workload. This gives you scale for storage.
* Run Ceph within the cluster with a Helm chart. We see that with openstack-helm, for example. This also gives you scale, but the lack of device discovery makes it less interesting in my opinion.
* Run an NFS server: plain and easy, but not very scalable.
* Use hostpath, which is the default but doesn't give you scale.
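As a concrete sketch of the hostpath option above, a minimal hostPath PersistentVolume and a claim that binds to it could look like this (all names, sizes, and paths here are illustrative, not from the discussion):

```yaml
# Illustrative hostPath PV: data lives on one node's local filesystem,
# which is why this option doesn't scale beyond that node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv            # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/data        # hypothetical path on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 1Gi
```

Because the data is tied to one node's disk, a pod using this claim only sees its data if it lands on that same node, which is the "doesn't give you scale" caveat.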
Ceph is not a good solution at all for databases. Your performance is going to be terrible.
Ceph is an object-based SDS solution designed to take servers with local drives and create a SAN out of them. To do this, it takes each LUN (Ceph volume) and scatters the data across all nodes in the cluster. It does not assume that applications will run on those servers themselves... it assumes compute is elsewhere, like a traditional SAN. The goal is to replace a SAN with servers, not to create a converged platform. Also, Ceph was designed in an age when an Intel server did NOT have tier-1 capacity (8-20 TB), which is why it shards a volume across so many servers.
This causes a problem for modern applications like Cassandra, Mongo, Kafka, etc., which like to scale out themselves and want a converged system where data is not scattered but sits on the node where an instance of that cluster runs. Ceph also undoes the HA capabilities that these scale-out applications have (for example, a Cassandra instance's data will not be on the node it thinks it is on).
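The converged layout those applications want can be expressed in Kubernetes with a `local` PersistentVolume, which pins the data to a specific node via node affinity (the hostname, path, and size below are illustrative assumptions):

```yaml
# Illustrative local PV: unlike Ceph, the data stays on exactly one
# node, and the scheduler places the consuming pod on that node,
# so the database instance and its data are co-located.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-data-node1   # hypothetical name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1      # hypothetical local disk path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node1        # hypothetical node name
```

With this layout the application's own replication (Cassandra's, Kafka's) provides the HA, instead of Ceph replicating underneath it.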
Gluster is a good alternative; Ceph can be prickly if you don't want a block device and instead just need a filesystem. CephFS fills sort of the same role, but does so on top of Ceph's RADOS layer.
Since GlusterFS /can be accessed/ over NFSv4, it should work with the stuff @samco_23 uses
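If the Gluster volume is exported over NFSv4 (e.g. via NFS-Ganesha), the client side needs nothing Gluster-specific; a hypothetical /etc/fstab entry, with placeholder server and volume names, would be:

```
# /etc/fstab fragment -- "gluster-server" and "gv0" are placeholders
gluster-server:/gv0   /mnt/gv0   nfs   vers=4.0,_netdev   0   0
```

The `_netdev` option just tells the mount to wait for the network at boot; to the client this is an ordinary NFSv4 mount.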