I'll give examples of just a few things that have literally saved me many, many hours of work, all of them in use on a "single server cluster":
1. Ingress and Service objects vs. Nomad/Consul Service Discovery + Templating
This one is big, as in really big. The Ingress and Service APIs let me declaratively connect things even with multiple implementations involved, and it's all handled cleanly through a type-safe API.
For comparison, Nomad's own documentation mostly tells you to use text templating to generate configuration files for whatever load balancer you decide to use, or to use one of the two they point to that have specific Nomad/Consul integration. And even for those, configuring a specific application's connectivity happens through cumbersome K/V tags for apparently everything except the port name itself.
You might consider it silly, but the Ingress API, with its easy way to route different path prefixes to different services or specify multiple external hosts and TLS, especially given how easily that integrates (regardless of the load balancer used) with Let's Encrypt and other automated solutions, is an ability you're going to have to pry from my cold, dead hands.
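To give a feel for it, here is a minimal sketch of such an Ingress as a raw object (the hostnames, service names and issuer name are made up for the example; in practice I'd generate this from jsonnet like the Postgres example at the end):

// Hypothetical example: two path prefixes on one host routed to different
// Services, plus TLS for that host. cert-manager picks this up through the
// annotation and provisions a Let's Encrypt certificate into "example-tls".
{
  apiVersion: "networking.k8s.io/v1",
  kind: "Ingress",
  metadata: {
    name: "example",
    namespace: "foo",
    annotations: {
      "cert-manager.io/cluster-issuer": "letsencrypt",
    },
  },
  spec: {
    tls: [{ hosts: ["app.example.com"], secretName: "example-tls" }],
    rules: [{
      host: "app.example.com",
      http: {
        paths: [
          { path: "/", pathType: "Prefix", backend: { service: { name: "frontend", port: { number: 80 } } } },
          { path: "/api", pathType: "Prefix", backend: { service: { name: "api", port: { number: 8080 } } } },
        ],
      },
    }],
  },
}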
Similarly, the more pluggable nature of Service objects turns out to be critical when redirecting traffic to the appropriate proxy, or when exposing some services through one subsystem and others through another (example: servicelb + tailscale).
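As a sketch (service name, selector and ports made up), the only thing that changes between "expose this through the default servicelb" and "expose this over tailscale" is which implementation claims the Service:

{
  apiVersion: "v1",
  kind: "Service",
  metadata: { name: "gitea-web", namespace: "foo" },
  spec: {
    type: "LoadBalancer",
    // Omit loadBalancerClass to leave the Service to the default controller
    // (servicelb on k3s); set it to hand the Service to another implementation.
    loadBalancerClass: "tailscale",
    selector: { "app.kubernetes.io/name": "gitea" },
    ports: [{ port: 80, targetPort: 3000 }],
  },
}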
In comparison, Nomad is like going back to Kubernetes 1.2, if not worse. Sure, I can use service discovery. It's very primitive service discovery where I have to guide the system by hand with custom glue logic. Meanwhile, the very first Kubernetes I set up in production had something like 60 Ingress objects covering 250 domains, which totaled about 1000 host/path -> service rules. And it was a puny two-node cluster.
2. Persistent Storage handling
As far as I could figure out from the Nomad docs, you can at best reuse CSI drivers to mount existing volumes into Docker containers - you can't automate storage handling within Nomad; you're more or less told to create the necessary storage manually, maybe using Terraform, and then register it with Nomad.
Compared to this, Kubernetes' PersistentVolumeClaim system is a breeze - I specify what kinds of storage I provide through StorageClasses, then just throw a PVC into the definition of whatever I'm actually deploying. Setting up a new workload with persistent storage is reduced to saying "I want 50G of generic file storage and 10G of database-oriented storage" (two different storage classes with a real performance-per-buck difference between them).
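Concretely, "10G of database-oriented storage" is nothing more than a claim like this (the storage class name is whatever I defined for the backing pool; the names here are illustrative):

{
  apiVersion: "v1",
  kind: "PersistentVolumeClaim",
  metadata: { name: "gitea-data", namespace: "foo" },
  spec: {
    storageClassName: "zfs-generic",  // one of the StorageClasses defined up front
    accessModes: ["ReadWriteOnce"],
    resources: { requests: { storage: "10Gi" } },
  },
}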
Could I just point to a directory? Sure, but then I'd have to keep track of those directories. OpenEBS-ZFS handles it for me and I can spend time on other tasks.
3. Extensibility, the dark horse of Kubernetes
As far as I know, none of the "simpler" alternatives have anything like CustomResourceDefinitions, or the very simple API model of Kubernetes that makes it easy to extend. As far as I understand, Nomad's plugins are nowhere close to the same level of capability.
The smallest cluster I have currently uses the following "operators" or other components using CRDs: openebs-zfs (storage provisioning), traefik (easy, trackable middleware configuration beyond the unreadable tags approach), tailscale (also provides alternative Ingress and Service implementations), CloudNativePG (automated Postgres setup with backups, restores, easy access with psql, etc.), cert-manager (Let's Encrypt et al., in more flexible ways than what's embedded in traefik), external-dns (lets me integrate global DNS updates with my service definitions), and k3s' helm controller (sometimes makes life easier when loading external software).
There's more, but I kept to the things I'm directly interacting with instead of all CRDs currently deployed. All of them significantly reduce my workload, and all of them have either no alternative under Nomad or only very annoying options (like stuffing traefik configuration inside service tags).
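To give one small taste of what a CRD buys you: a cert-manager certificate is just another object in the same API, so it lives right next to the workload that uses it (issuer and DNS names below are made up):

{
  apiVersion: "cert-manager.io/v1",
  kind: "Certificate",
  metadata: { name: "gitea-tls", namespace: "foo" },
  spec: {
    secretName: "gitea-tls",
    dnsNames: ["git.example.com"],
    issuerRef: { name: "letsencrypt", kind: "ClusterIssuer" },
  },
}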
And last, some stats from my cluster:
4, soon to be 5 or 6, "tenants" (separate namespaces), without counting system ones or ones that provide services like OpenEBS
Runs 2 VPN services with headscale, 3 SSOs, one big Java issue tracker, 1 Git forge (gitea, soon to be joined by gerrit), one Nextcloud instance, and one dumb webserver (using Caddy). Additionally it runs 7 separate Postgres instances providing SQL databases for the aforementioned services, postfix relays connecting cluster services with SendGrid, one VPN relay connecting gitea with the VPN, some dashboards, etc.
And because it's Kubernetes, my configuration to set up, for example, a new Postgres instance looks like this:
local k = import "kube.libsonnet";
local pg = import "postgres.libsonnet";
local secret = k.core.v1.secret;
{
  local app = self,
  local cfg = app.cfg,
  local labels = app.labels,

  labels:: {
    "app.kubernetes.io/name": "gitea-db",
    "app.kubernetes.io/instance": "gitea-db",
    "app.kubernetes.io/component": "gitea",
  },

  dbCluster: pg.cluster.new("gitea-db", storage="20Gi") +
    pg.cluster.metadata.withNamespace("foo") +
    pg.cluster.metadata.withLabels(app.labels) +
    pg.cluster.withInitDb("gitea", "gitea-db") +
    pg.cluster.withBackupBucket("gs://foo-backups/databases/gitea", "gitea-db") +
    pg.cluster.withBackupRetention("30d"),

  secret: secret.new("gitea-db", null) +
    secret.metadata.withNamespace("foo") +
    secret.withStringData({
      username: "gitea",
      password: "FooBarBazQuux",
      "credentials.json": importstr "foo-backup-gcp-key.json",
    }),
}
And this is an older version that I haven't updated (because it still works) - if I were to set up the specific instance it's taken from today, it would involve even less writing.