
I tried a few incarnations of self-hosted k8s a few years ago, and the biggest problem I had was persistent storage. If you are using a cloud service, they will integrate k8s with whatever persistent storage they offer, but if you are self-hosting you are left on your own. It seems most people end up using something like NFS or hostPath, but that ends up being a single point of failure. Have there been any developments on this recently, aimed at people wanting to run k8s on a few Raspberry Pi nodes?



Have you tried using a CSI driver to help you do this? https://kubernetes-csi.github.io/docs/drivers.html

A brief description of what CSI is - https://kubernetes.io/blog/2019/01/15/container-storage-inte...
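To sketch how that's consumed: the CSI driver registers a provisioner name, you point a StorageClass at it, and workloads then request volumes through ordinary PVCs. The provisioner below (csi.example.com) is a placeholder for whichever driver you install:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: my-csi-storage
  provisioner: csi.example.com       # placeholder; each CSI driver registers its own name
  ---
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: my-data
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: my-csi-storage
    resources:
      requests:
        storage: 5Gi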


I've had good experiences using the Rook operator to create a CephFS cluster. I know you can run it on k3s, but I don't know whether Raspberry Pi nodes are sufficient. Maybe the high-RAM Raspberry Pi 4 ones.
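Once Rook has a CephFilesystem up, consuming it is a StorageClass plus an ordinary PVC. A rough sketch along the lines of Rook's example manifests (the rook-ceph namespace and the myfs filesystem name are the defaults from those examples; adjust for your cluster):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: rook-cephfs
  provisioner: rook-ceph.cephfs.csi.ceph.com   # <operator namespace>.cephfs.csi.ceph.com
  parameters:
    clusterID: rook-ceph
    fsName: myfs
    pool: myfs-data0
    # CSI secret refs, wired up as in Rook's example storageclass:
    csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
    csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
    csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
    csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  reclaimPolicy: Delete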


We do this at Twilio SendGrid


I've had good experiences with Rook on k3s in production. Not on raspis though.


I'm a bit biased, but Rook[0] and OpenEBS[1] are the best solutions that scale from hobbyist to enterprise, IMO.

A few reasons:

- Rook is "just" managed Ceph[2], and Ceph is good enough for CERN[3]. But it does need raw disks (nothing says these can't be loopback devices, but there is a performance cost); there's a minimal cluster manifest sketched after the links below.

- OpenEBS has a lot of choices: Jiva is the simplest and is Longhorn[4] underneath, cStor is based on uZFS, Mayastor is their new thing with lots of interesting features like NVMe-oF, localpv-zfs might be nice for your projects that want ZFS, and there's regular host path provisioning as well.

Another option, which I rate slightly lower, is LINSTOR (via piraeus-operator[5] or kube-linstor[6]). In my production environment I run Ceph -- it's almost certainly the best off-the-shelf option due to the features, support, and ecosystem around it.

I've also attached a reproducible repo of experiments on Hetzner dedicated hardware[7]. I think the results might be somewhat scuffed, but they're worth a look anyway. I also have some older experiments comparing OpenEBS Jiva (AKA Longhorn) and hostPath[8].

[0]: https://github.com/rook/rook

[1]: https://openebs.io/

[2]: https://docs.ceph.com/

[3]: https://www.youtube.com/watch?v=OopRMUYiY5E

[4]: https://longhorn.io/docs

[5]: https://github.com/piraeusdatastore/piraeus-operator

[6]: https://github.com/kvaps/kube-linstor

[7]: https://vadosware.io/post/k8s-storage-provider-benchmarks-ro...

[8]: https://vadosware.io/post/comparing-openebs-and-hostpath/
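To make the "raw disks" point concrete, here's roughly what a minimal Rook CephCluster manifest looks like -- a sketch with an illustrative image tag and counts, not a production config:

  apiVersion: ceph.rook.io/v1
  kind: CephCluster
  metadata:
    name: rook-ceph
    namespace: rook-ceph
  spec:
    cephVersion:
      image: quay.io/ceph/ceph:v16   # illustrative; pin the version you actually run
    dataDirHostPath: /var/lib/rook
    mon:
      count: 3                       # monitors want an odd count for quorum
    storage:
      useAllNodes: true
      useAllDevices: true            # Rook claims raw, unformatted block devices on each node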


Distributed minio[1] maybe? Assuming you can get by with S3-like object storage.

[1] https://docs.min.io/docs/distributed-minio-quickstart-guide....
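On k8s, distributed MinIO is typically a StatefulSet. A rough sketch, assuming a headless Service named "minio" in the default namespace (distributed mode wants at least 4 drives; on k3s the bundled local-path provisioner can back the per-pod claims while MinIO handles replication across nodes):

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: minio
  spec:
    serviceName: minio        # assumes a matching headless Service
    replicas: 4               # distributed mode needs at least 4 drives
    selector:
      matchLabels:
        app: minio
    template:
      metadata:
        labels:
          app: minio
      spec:
        containers:
        - name: minio
          image: minio/minio
          args:
          - server
          # brace expansion enumerates the peer pods, one drive each
          - http://minio-{0...3}.minio.default.svc.cluster.local/data
          ports:
          - containerPort: 9000
          volumeMounts:
          - name: data
            mountPath: /data
    volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi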


I'm using Longhorn, but it's been CPU-heavy.


I really liked longhorn but the CPU usage was ultimately too high for our use case.


SeaweedFS seems pretty great for cloud storage: http://seaweedfs.github.io


Thanks! I am working on SeaweedFS. https://github.com/chrislusf/seaweedfs

There is also a SeaweedFS CSI driver: https://github.com/seaweedfs/seaweedfs-csi-driver
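With the driver installed it's the usual StorageClass dance. A minimal sketch (the provisioner name here follows the repo's example manifests):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: seaweedfs-storage
  provisioner: seaweedfs-csi-driver   # name as used in the repo's examples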


I guess the easiest would be Longhorn on top of k3s.
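Longhorn ships a default StorageClass, and you can tune replication per class. A sketch modeled on the Longhorn docs (parameter values are illustrative):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: longhorn-replicated
  provisioner: driver.longhorn.io
  parameters:
    numberOfReplicas: "3"         # e.g. one replica per node on a three-node cluster
    staleReplicaTimeout: "2880"   # minutes before a failed replica is cleaned up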


I've found Ceph is more tolerant of failures and better at staying available. Longhorn was certainly easier to set up and has lower operating requirements, but we encountered outages.



