
So presumably this will also open up avenues for doing QUIC and thus HTTP/3 on Fly?



Yep! We have a "Firecracker that accepts QUIC" running with this.

People usually want HTTP + TLS handled for them, though. So when we ship QUIC + HTTP/3 as a first-class feature, we'll terminate QUIC and give people whatever their app process can accept.
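For a rough picture, edge-side QUIC termination looks something like this. This is just a minimal sketch assuming the quinn crate's 0.10-style API (with rustls 0.21 types), not our actual proxy code; the listen address, backend hand-off, and error handling are all placeholders:

    // Minimal QUIC termination sketch (assumes quinn 0.10 + rustls 0.21 + tokio).
    use std::net::SocketAddr;

    pub async fn terminate_quic(
        cert_chain: Vec<rustls::Certificate>,
        key: rustls::PrivateKey,
    ) -> Result<(), Box<dyn std::error::Error>> {
        // TLS is part of the QUIC handshake, so the edge holds the certificate.
        let config = quinn::ServerConfig::with_single_cert(cert_chain, key)?;
        let addr: SocketAddr = "0.0.0.0:443".parse()?;
        let endpoint = quinn::Endpoint::server(config, addr)?;

        // Accept QUIC connections; everything past this point is decrypted
        // application data that can be relayed to the customer's app over
        // whatever transport it actually speaks (TCP, a unix socket, ...).
        while let Some(connecting) = endpoint.accept().await {
            tokio::spawn(async move {
                if let Ok(conn) = connecting.await {
                    while let Ok((_send, mut recv)) = conn.accept_bi().await {
                        let mut buf = [0u8; 4096];
                        while let Ok(Some(n)) = recv.read(&mut buf).await {
                            // Forward &buf[..n] to the backend app here.
                            println!("read {} bytes from client stream", n);
                        }
                    }
                }
            });
        }
        Ok(())
    }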


Unrelated, but could you please expand on how firecracker fits within your stack?


You could describe our job as "taking Dockerfiles from customers and running them globally"; the way we actually "run" Docker images is to convert them to root filesystems and run them inside Firecracker. Firecracker is the core of our stack.
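Once an image has been flattened into a root filesystem, booting it is basically a few PUTs against Firecracker's API Unix socket. Here's a minimal sketch; the kernel, rootfs, and socket paths are made up, and it assumes a firecracker process is already running with `--api-sock /tmp/fc.sock`:

    use std::io::{Read, Write};
    use std::os::unix::net::UnixStream;

    // Send one HTTP/1.1 PUT over Firecracker's API Unix socket.
    fn api_put(sock: &str, path: &str, body: &str) -> std::io::Result<String> {
        let mut stream = UnixStream::connect(sock)?;
        let req = format!(
            "PUT {} HTTP/1.1\r\nHost: localhost\r\nContent-Type: application/json\r\n\
             Content-Length: {}\r\nConnection: close\r\n\r\n{}",
            path, body.len(), body
        );
        stream.write_all(req.as_bytes())?;
        let mut resp = String::new();
        stream.read_to_string(&mut resp)?;
        Ok(resp)
    }

    fn main() -> std::io::Result<()> {
        let sock = "/tmp/fc.sock";

        // Kernel + boot args for the microVM.
        api_put(sock, "/boot-source",
            r#"{"kernel_image_path":"/srv/vmlinux","boot_args":"console=ttyS0 reboot=k panic=1"}"#)?;
        // The Docker image, flattened into an ext4 rootfs, becomes the root drive.
        api_put(sock, "/drives/rootfs",
            r#"{"drive_id":"rootfs","path_on_host":"/srv/app/rootfs.ext4","is_root_device":true,"is_read_only":false}"#)?;
        // Boot it.
        api_put(sock, "/actions", r#"{"action_type":"InstanceStart"}"#)?;
        Ok(())
    }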


I find it rather curious that the cloud-native crowd tries to sell us containers, but cloud providers themselves use VMs.

Like, if not even AWS, Cloudflare, and fly.io use containers, how can K8s be native in any way?

I mean, that makes even Lambda, which runs on Firecracker VMs, more native than K8s.


Most organizations don't have the multi-tenant problem we do, and end up just using Docker when they do containers.

But I also think it's fair to call "Firecracker VMs" containers. Most of what those people are talking about is application packaging and deployment, not necessarily what actually runs.

For what it's worth, I am also cynical about "cloud native".


FWIW we run as many services as we can on our own platform. Mission-critical systems like our registry, API, Redis servers, and much more are all running as Fly apps in Firecracker.


Why not just run Docker containers natively?


We have a whole post about workload isolation that answers this: https://fly.io/blog/sandboxing-and-workload-isolation/

The tl;dr is: Docker containers don't offer enough isolation for multi-tenant systems.

They're also very slow to boot, compared to a Firecracker VM.


But does FC not incur high I/O overhead at runtime?


The flippant answer is "it doesn't really matter, safety usually wins over performance".

But also, we run Firecrackers on our own physical servers and performance is quite good (even network + disk performance).


Any insight into what QUIC/H3 stack you'll be using for the proxy?


To be determined. We're hoping to contribute and use what's going to come out of hyper's h3 efforts (we use Rust for our reverse-proxy). There's not much there yet though: https://github.com/hyperium/h3

We're not in a huge hurry to support QUIC / H3 given its current adoption. However, our users' apps will be able to support it once UDP is fully launched, if they want to.


Are you using a custom reverse proxy? For a recent project I started with Caddy but ended up needing some functionality it didn't have, and didn't need most of what it did have. I'm currently using a custom proxy layer, but I'm concerned I might end up having to implement more than I want to (I know I'll at least need gzip). Curious what your experience at fly has been with this.


We are! It's Rust + Hyper. It is a _lot_ of work, but that's because we're trying to proxy arbitrary client traffic to arbitrary backends AND give them geo load balancing.
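The skeleton of a hyper-based proxy is pretty small; the hard part is everything around it. A minimal sketch, assuming hyper 0.14 (with the "full" feature) and tokio, where the hard-coded upstream stands in for real backend selection and geo load balancing:

    use hyper::service::{make_service_fn, service_fn};
    use hyper::{Body, Client, Request, Response, Server, Uri};
    use std::convert::Infallible;
    use std::net::SocketAddr;

    // Hypothetical single upstream; a real proxy picks a backend per request.
    const UPSTREAM: &str = "127.0.0.1:8080";

    async fn proxy(mut req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
        // Rewrite the request URI to point at the upstream, keeping path + query.
        let path_and_query = req
            .uri()
            .path_and_query()
            .map(|pq| pq.as_str())
            .unwrap_or("/");
        let uri: Uri = format!("http://{}{}", UPSTREAM, path_and_query)
            .parse()
            .expect("valid upstream URI");
        *req.uri_mut() = uri;

        // Forward the request and stream the response back to the client.
        // (A real proxy would reuse one Client instead of building one per request.)
        Client::new().request(req).await
    }

    #[tokio::main]
    async fn main() {
        let addr = SocketAddr::from(([0, 0, 0, 0], 3000));
        let make_svc = make_service_fn(|_conn| async {
            Ok::<_, Infallible>(service_fn(proxy))
        });
        if let Err(e) = Server::bind(&addr).serve(make_svc).await {
            eprintln!("proxy error: {}", e);
        }
    }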

Writing proxies is fun. Highly recommended.


Cool, thanks!

I was actually just playing with Hyper for a few hours last night. Are you guys using async/await yet? Any suggestions for learning materials for async rust other than the standard stuff?


Another stupid question, but I can't help it: Go seems like a popular choice among network developers. Any reason that made fly.io choose Rust over Go for the proxy?


Because of JavaScript. Really!

We settled on Rust back when we were building a JS runtime into our proxy. It's a great language for hosting v8. When we realized our customers just wanted to run any ol' code, and not be stuck in a v8 isolate, we extracted the proxy bits and kept them around.

I think you could build our proxy just fine in Go. One really nice thing about Rust, though, is the Hyper HTTP library. It's _much_ more flexible than Go's built-in HTTP package, which turns out to be really useful for our particular product.


What functionality did you need that Caddy didn't have?


Hey Matt! I'm referring to the ability to change the Caddy config from an API that is itself proxied through Caddy. Here's the issue, which you were very helpful in [0].

Ultimately I realized that most of what I needed from Caddy was really just certmagic, which has worked flawlessly since I got it set up. Plus I need the final product to compile into a single binary. Since my custom reverse proxy only took a few lines of code, I haven't worried too much about it. But there are a few features which I'll have to integrate eventually.

If I end up seeing myself headed down the path of making a full-fledged reverse proxy, I'll reconsider trying to implement my project as a Caddy plugin.

[0]: https://github.com/caddyserver/caddy/issues/3754



