
Can it handle networking (including load balancing and reverse proxies with automatic TLS) or virtualized persistent storage? Does it make it easy to integrate a common logging system?

Because those are the parts I probably miss the most when dealing with non-k8s deployments, and I haven't had the occasion to use Nomad.




For load balancing you can just run one of the common LB solutions (nginx, HAProxy, Traefik) and pick up the services from the Consul service catalog. Traefik makes it quite nice since it integrates with Let's Encrypt, and you can set up the routing with tags in your Nomad jobs: https://learn.hashicorp.com/nomad/load-balancing/traefik
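
For illustration, a minimal sketch of that tag-based routing (the job name, hostname, and the "le" certificate resolver name are placeholders, and have to match however your Traefik instance, with its Consul catalog provider and ACME resolver, is configured):

    job "webapp" {
      datacenters = ["dc1"]

      group "web" {
        network {
          port "http" {
            to = 8080
          }
        }

        # Registered in the Consul catalog; Traefik builds its routing
        # and TLS config from the tags.
        service {
          name = "webapp"
          port = "http"
          tags = [
            "traefik.enable=true",
            "traefik.http.routers.webapp.rule=Host(`webapp.example.com`)",
            "traefik.http.routers.webapp.entrypoints=websecure",
            "traefik.http.routers.webapp.tls.certresolver=le",
          ]
        }

        task "server" {
          driver = "docker"
          config {
            image = "hashicorp/http-echo"
            args  = ["-listen", ":8080", "-text", "hello"]
            ports = ["http"]
          }
        }
      }
    }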

What Nomad doesn't do is set up a cloud provider load balancer for you.

For persistent storage, Nomad uses CSI, which is the same technology K8s uses: https://learn.hashicorp.com/nomad/stateful-workloads/csi-vol...
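
As a sketch, assuming a CSI volume with the ID "postgres-data" has already been registered with Nomad (exact fields vary a bit by Nomad version), a job claims and mounts it like this:

    group "db" {
      # Claim a pre-registered CSI volume; "postgres-data" is a placeholder ID.
      volume "data" {
        type            = "csi"
        source          = "postgres-data"
        attachment_mode = "file-system"
        access_mode     = "single-node-writer"
      }

      task "postgres" {
        driver = "docker"

        # Mount the claimed volume into the container.
        volume_mount {
          volume      = "data"
          destination = "/var/lib/postgresql/data"
        }

        config {
          image = "postgres:13"
        }
      }
    }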

Logging should be very similar to K8s. Both Nomad and K8s log to a file, and a logging agent tails and ships the logs.
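
A sketch of that pattern: a system job runs one shipper per client node (Filebeat here, but any agent works) and tails the task logs under Nomad's data directory. The /opt/nomad/data path is an assumption; it depends on the agent's data_dir setting.

    job "log-shipper" {
      datacenters = ["dc1"]
      type        = "system" # one instance per client node

      group "shipper" {
        task "filebeat" {
          driver = "docker"
          config {
            image = "docker.elastic.co/beats/filebeat:7.17.0"
            volumes = [
              # Task stdout/stderr land under <data_dir>/alloc/<alloc_id>/alloc/logs/
              "/opt/nomad/data/alloc:/nomad/alloc:ro",
            ]
          }
        }
      }
    }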

Disclosure: I am a HashiCorp employee.


Thanks, that definitely widened my understanding of Nomad in a pretty short time :)

Kinda feels bad that I don't have anything to use it on right now.


Does the Nomad WebUI support any kind of auth or just the Nomad-Bearer thing?

Thinking about completing my HashiCorp Bingo card.


It is on the roadmap to support JWT/OIDC!


Those are both the advantage and the problem of Nomad. We're using it a lot by now.

Nomad, or rather a Nomad/Consul/Vault stack, doesn't have these things included. You need to go and pick a Consul-aware load balancer like Traefik, figure out a CSI volume provider or Consul-aware database clustering like Postgres with Patroni, and think about logging sidecars or logging instances on container hosts. Lots of fiddly, fiddly things to figure out from an operational perspective until you have a platform your developers can just use. Certainly less of an out-of-the-box experience than K8s.

However, I would like to mention that K8s can be an evil half-truth. "Just self-hosting a K8s cluster" basically means doing all of the shit above, except it's "just self-hosting k8s". Nomad allows you to delay certain choices and implementations, or glue together existing infrastructure.

K8s requires you to redo everything, pretty much.


I count the "glue it with existing infrastructure" to be higher cost than doing it from scratch. It was one feature that I definitely knew regarding Nomad, as one or two people who used it did chime in years ago in discussion, but for various reasons that might not be applicable to everyone I consider those unnecessary complication :)


Depending on how big the infrastructure is and how long you're willing to take to migrate... usually there aren't enough resources to "redo it all from scratch": millions of LoC are already in production, the people who owned key services are no longer at the company, and the business has other priorities beyond getting what already works running in k8s.


The context was rather different (a home setup), but everything you mention can be used as an argument both for and against a redo, depending on the situation in the company, future needs, etc.

I have actually done a "lift and shift" where we moved code that was unsupported, or whose vendor was directly antagonistic, to k8s, because various problems escalated to the point where the CEO said "replace the old vendor completely". We ended up using k8s to wrestle with the amount of code to redeploy.


- Yes (Traefik, Fabio, Consul Connect/Envoy)

- Yes, CSI plugin support was just added. Previously it had ephemeral_disk and host_volume configuration options, as well as the ability to use Docker storage plugins (Portworx)

- I haven't personally played with it, but apparently Nomad does export some metrics, and they're working on making it better (see the telemetry sketch below)
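
Metrics are enabled through the agent's telemetry stanza; a minimal sketch for Prometheus-format output:

    # Nomad agent configuration (not a job file): exposes metrics at
    # /v1/metrics?format=prometheus
    telemetry {
      collection_interval        = "1s"
      prometheus_metrics         = true
      publish_allocation_metrics = true
      publish_node_metrics       = true
    }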


Nomad is strictly a job scheduler. If you want networking, you add Consul and they integrate nicely. Logging is handled similarly to Kubernetes. The cool thing about Nomad is that it's less prescriptive.
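
For example, the integration is just a consul stanza in the Nomad agent configuration (the address here is an assumption for a local setup); with it, Nomad registers services and auto-joins the cluster through Consul:

    # Nomad agent configuration
    consul {
      address          = "127.0.0.1:8500"
      auto_advertise   = true # advertise Nomad's own services in Consul
      server_auto_join = true # discover Nomad servers via Consul
      client_auto_join = true
    }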



