
We had to build the orchestration stuff (it was originally a Nomad driver, but has outgrown that) because the tooling to run OCI containers as Firecracker VMs didn't exist in a deployable form when we started.
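
To give a sense of the layer underneath: Firecracker itself is driven over a REST API on a Unix socket, and booting a microVM is just a handful of PUTs. The sketch below (Rust, standard library only) is not our orchestrator --- the real work is turning an OCI image into a root filesystem, wiring up networking, and supervising the VM --- and the socket and image paths are made up, but it shows how little the raw API gives you out of the box.

    // Rough sketch: boot one Firecracker microVM over its Unix-socket API.
    // Assumes a firecracker process was started with:
    //   firecracker --api-sock /tmp/firecracker.sock
    // Kernel and rootfs paths below are hypothetical.
    use std::io::{Read, Write};
    use std::os::unix::net::UnixStream;

    fn put(sock: &str, path: &str, body: &str) -> std::io::Result<String> {
        let mut stream = UnixStream::connect(sock)?;
        let req = format!(
            "PUT {} HTTP/1.1\r\nHost: localhost\r\nContent-Type: application/json\r\nContent-Length: {}\r\n\r\n{}",
            path, body.len(), body
        );
        stream.write_all(req.as_bytes())?;
        // Responses are tiny (usually "204 No Content"); one read is enough for a sketch.
        let mut buf = [0u8; 4096];
        let n = stream.read(&mut buf)?;
        Ok(String::from_utf8_lossy(&buf[..n]).into_owned())
    }

    fn main() -> std::io::Result<()> {
        let sock = "/tmp/firecracker.sock";
        put(sock, "/machine-config", r#"{"vcpu_count": 1, "mem_size_mib": 256}"#)?;
        put(sock, "/boot-source", r#"{"kernel_image_path": "/srv/vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"}"#)?;
        put(sock, "/drives/rootfs", r#"{"drive_id": "rootfs", "path_on_host": "/srv/rootfs.ext4", "is_root_device": true, "is_read_only": false}"#)?;
        put(sock, "/actions", r#"{"action_type": "InstanceStart"}"#)?;
        Ok(())
    }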

Most of the big CDNs seem to start with an existing traffic server like Nginx, Varnish, or ATS. One way to look at what we did with our "CDN" layer is that rather than building on top of something like Nginx, we built on top of Tokio and Hyper and their whole ecosystem. We have more control this way, and our routing needs are fussy.
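
Concretely, the skeleton of a Hyper-based proxy is small; everything interesting lives in the per-request routing and backend-selection logic, which is where the fussiness is. This is just the rough shape, not our code: it's hyper 0.14 style (hyper 1.x moved Client/Server into hyper-util), the upstream address is a stand-in, and a real proxy also rewrites Host and strips hop-by-hop headers.

    // Cargo.toml (sketch): hyper = { version = "0.14", features = ["full"] }
    //                      tokio = { version = "1", features = ["full"] }
    use std::convert::Infallible;
    use std::net::SocketAddr;

    use hyper::service::{make_service_fn, service_fn};
    use hyper::{Body, Client, Request, Server, Uri};

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
        let listen = SocketAddr::from(([127, 0, 0, 1], 8080));
        let client = Client::new();

        let make_svc = make_service_fn(move |_conn| {
            let client = client.clone();
            async move {
                Ok::<_, Infallible>(service_fn(move |mut req: Request<Body>| {
                    let client = client.clone();
                    async move {
                        // In a real edge proxy, backend selection happens here,
                        // per request; this sketch always forwards to one upstream.
                        let path = req
                            .uri()
                            .path_and_query()
                            .map(|pq| pq.as_str())
                            .unwrap_or("/");
                        let upstream: Uri = format!("http://127.0.0.1:9000{}", path)
                            .parse()
                            .expect("valid upstream URI");
                        *req.uri_mut() = upstream;
                        client.request(req).await
                    }
                }))
            }
        });

        Server::bind(&listen).serve(make_svc).await?;
        Ok(())
    }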

By comparison, we use VictoriaMetrics and ElasticSearch (I don't know about "as-is" --- lots of tooling! --- but we don't muck with the cores of these packages), because our needs are straightforwardly addressed by what's already there.

Lots of companies doing stuff similar to what we're doing have elaborate SDN and "service mesh" layers that they built. We get away with the Linux kernel networking stack and a couple hundred lines of eBPF.
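
For a sense of scale: an XDP program that does a verifier-friendly packet check is only a few dozen lines. The toy below uses the aya-ebpf crate and a made-up drop-everything-but-IP policy --- it's not our code or necessarily even our toolchain, just the general shape of the thing.

    // Kernel-side sketch (aya-ebpf, built for a BPF target with aya's tooling).
    // Toy policy for illustration only.
    #![no_std]
    #![no_main]

    use aya_ebpf::{bindings::xdp_action, macros::xdp, programs::XdpContext};

    const ETH_HDR_LEN: usize = 14; // dst MAC (6) + src MAC (6) + EtherType (2)
    const ETH_P_IP: u16 = 0x0800;
    const ETH_P_IPV6: u16 = 0x86DD;

    #[xdp]
    pub fn ip_only(ctx: XdpContext) -> u32 {
        match try_ip_only(&ctx) {
            Ok(action) => action,
            Err(()) => xdp_action::XDP_ABORTED,
        }
    }

    fn try_ip_only(ctx: &XdpContext) -> Result<u32, ()> {
        let start = ctx.data();
        let end = ctx.data_end();

        // The verifier insists on an explicit bounds check before any packet read.
        if start + ETH_HDR_LEN > end {
            return Err(());
        }

        // EtherType is bytes 12..14 of the Ethernet header, big-endian on the wire.
        let ethertype =
            u16::from_be(unsafe { core::ptr::read_unaligned((start + 12) as *const u16) });

        // Toy policy: pass IPv4/IPv6, drop everything else.
        match ethertype {
            ETH_P_IP | ETH_P_IPV6 => Ok(xdp_action::XDP_PASS),
            _ => Ok(xdp_action::XDP_DROP),
        }
    }

    #[panic_handler]
    fn panic(_info: &core::panic::PanicInfo) -> ! {
        loop {}
    }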

We definitely don't have a specific process for this stuff; it's much more an intuition, and is more about our constraints as a startup than about a coherent worldview.
