Show HN: Nginx Image with HTTP/3 (QUIC), TLS1.3 with 0-RTT, Brotli
219 points by _1h6r on Oct 20, 2019 | 63 comments



I'm just curious, is there a reason not to use a multi-stage docker build here? There are a ton of build steps, and it seems pretty tedious to have to start from scratch every time while developing the image without any layer caching.


Hi, my main task this weekend was to get it fully working. I'll break the steps down over this week to take advantage of cache layers, and hopefully have it done by the weekend, depending on work. Thanks a lot for your feedback.


Might I suggest you use Skaffold for this task, which can work with regular Docker as well.


While developing an image, I use layers as much as possible. But once the image is finished, I prefer to minimize the number of layers; it saves some storage, though usually not much (I'd guess under 10%).


Fewer layers also perform better in the final image. Things like listing a directory get very slow with thousands of layers.


I suggest reading about multi-stage builds. They basically squash layers at the end so it's a "have cake and eat it too" scenario.

This pattern of concatenating commands to have a minimal image is a workaround from times when multi-stage builds were not available.


Multistage builds do not squash at the end.



@Leace

> I suggest reading about multi-stage builds. They basically squash layers at the end so it's a "have cake and eat it too" scenario.

That's a completely different thing. Multi-stage builds are great for separating the build environment from the production environment, but if you need several layers for the production environment, they won't be squashed. See here: https://github.com/docker/compose/issues/4235#issuecomment-3...
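
To illustrate the separation, here's a minimal multi-stage sketch (stage name, package lists and paths are made up for the example, not taken from the OP's Dockerfile):

    # Build stage: toolchain and sources live only here.
    FROM alpine:3.10 AS builder
    RUN apk add --no-cache build-base pcre-dev zlib-dev
    COPY . /src
    WORKDIR /src
    RUN ./configure --prefix=/opt/nginx && make && make install

    # Production stage: only the compiled artifacts are copied over,
    # so the build toolchain never ends up in the final image.
    FROM alpine:3.10
    RUN apk add --no-cache pcre zlib
    COPY --from=builder /opt/nginx /opt/nginx
    ENTRYPOINT ["/opt/nginx/sbin/nginx", "-g", "daemon off;"]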


Yeah, the author could use a development image to do all of the compiling, but since all of the nginx files are not in a single directory, my understanding is that you'd need multiple COPY commands, and you'd still want to do all of the package-manager operations in the prod build. It's really a tradeoff of development ease vs. minimal packaged output.


> since all of the nginx files are not in a single directory, my understanding is that you'd need multiple COPY commands

A workaround would be to move these files into one directory on the host, COPY it in one command to /tmp (or, even better, /dev/shm or another ramdisk) and then use a script to distribute the files where needed.
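
Something along these lines (directory name and install script are hypothetical):

    # One COPY layer instead of many; a script spreads the files out afterwards.
    COPY build-output/ /tmp/nginx-files/
    RUN sh /tmp/nginx-files/install.sh && rm -rf /tmp/nginx-files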


You can make the multi-stage builds actual images themselves, and set up your CI to auto-push them to a registry. Then you can have shared multi-stage builds, which is pretty useful for things like compiling static libraries in their own images and COPYing them into images that statically link to them.
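
Roughly like this (the registry path and artifact name are invented for the example):

    FROM alpine:3.10
    # COPY --from can reference any image, not just an earlier stage,
    # so a prebuilt, registry-hosted build image works as a source.
    COPY --from=registry.example.com/build/brotli:latest /usr/local/lib/libbrotli.a /usr/local/lib/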


You may find my NGINX image[1] interesting.

There are some features you could easily add to yours in order to make it a better overall image.

[1] https://github.com/ricardbejarano/nginx


The ARG approach to everything is really good IMO; it lets you customize the final image without having to change the Dockerfile. For example, in the OP's Dockerfile, the GPG key at the very least should go in an ARG.
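
Something like this (version, key and URL are placeholders, not the OP's actual values):

    ARG NGINX_VERSION=1.17.4
    ARG NGINX_GPG_KEY=CHANGE_ME
    # ARGs are usable in later instructions and can be overridden per build.
    RUN wget "https://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" \
        && gpg --recv-keys "${NGINX_GPG_KEY}"

and then you override at build time with something like docker build --build-arg NGINX_VERSION=1.17.5 .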


I will take a look at it. Thanks for the feedback.


I would suggest highlighting the experimental nature of the repo, especially if someone reaches it without going through HN. I've read the catchy "All built on the bleeding edge. Built on the edge, for the edge." but IMO it doesn't really sound like a warning that this may not be suitable for serious production use.


I did exactly this 3 days ago, forked from fholzer/docker-nginx-brotli; our work looks very much the same.

See https://github.com/githubcdr/docker-nginx-brotli


I've played around with the nginx cloudflare patches and quiche, and it all seems to work just fine in my lab setup.

I don't like having to apply third-party patches to mission-critical software such as nginx. So I'll wait until nginx releases official support for linking the quiche library, like they did with brotli.


I think 0-RTT is just a bad idea security-wise.


Good news: any participants under your control can (and so should) refuse to do 0-RTT. Clients can choose never to send early data, and servers can choose to always reject it; everything still works.

At the API layer, reject any libraries or tools that try to foist this on you; many today either don't do 0-RTT or correctly offer it as a separate API call for those willing to pay a price in terms of replay resistance.


It's even more concerning that there are libraries that hide such a thing. There are going to be instances where this bites someone hard because the replayed request is not idempotent.


How about 0-RTT replay attack protection?


Well, "ssl_early_data" is opt-in. If you enable it on a virtualhost, then you also need to look at the "Early-Data" request header in your backend and make a decision there. e.g. process GET requests, otherwise send HTTP 425 Too Early.

It does seem a bit unsafe. An administrator might opt in because they copy-pasted it from a tutorial, without understanding or paying attention to the second part.


I think it would be better to fully disable early data for anyone without full control of the data center's network equipment. I don't know why Cloudflare decided to use headers and the Too Early response; they have full control of their POPs. It would be better to measure RTT and use UDP-based KV storage for tickets, issuing them only to clients with high RTT. So for clients whose RTT is higher than the cost of a KV-storage lookup, issue tickets; for other clients, drop early data and use a full handshake. Currently, I'm working on a project based on the same idea.


> It would be better to measure RTT

To measure RTT you need to perform a round trip. Hence the name. But the _whole point_ of this feature is to avoid incurring the cost of an extra round trip if possible.


There is no need to send extra data to measure RTT. From the TCP handshake's SYN/ACK you already know the RTT; the Linux kernel exposes it in the tcp_info data structure.
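
Something like this for a connected TCP socket on Linux (fd is assumed to already be connected):

    /* Sketch: read the kernel's smoothed RTT estimate for a TCP socket. */
    #include <netinet/tcp.h>
    #include <stdio.h>
    #include <sys/socket.h>

    static void print_rtt(int fd) {
        struct tcp_info info;
        socklen_t len = sizeof(info);
        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) == 0)
            printf("srtt: %u us, rttvar: %u us\n", info.tcpi_rtt, info.tcpi_rttvar);
    }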


This is great, and I'll be using it for development! However, I've been looking for something a bit more predictable, and yet still modern, for production use. I do not know why Brotli support isn't included in every nginx image at this point.


From Wikipedia I get the impression that the work-in-progress now called HTTP/3 was not necessarily supposed to supplant HTTP/2:

> On 28 October 2018 in a mailing list discussion, Mark Nottingham, Chair of the IETF HTTP and QUIC Working Groups, made the official request to rename HTTP-over-QUIC as HTTP/3 to "clearly identify it as another binding of HTTP semantics to the wire protocol ... so people understand its separation from QUIC"

Any opinions on how things are likely to play out?


I believe this is the separation of the transport layer protocol (QUIC) from the application layer protocol (HTTP/3). QUIC can be seen as a replacement for TCP. HTTP over QUIC then becomes HTTP/3 - with improvements in latency and head-of-line blocking over HTTP/2. So in that sense it will supplant HTTP/2 as QUIC gets adopted more widely.


Are there any benchmarks of HTTP/3? I would like to see how it compares vs. HTTP/2 (h2) + TCP Fast Open.


OT: is musl pronounced like 'muscle', or do you spell it out, 'm-u-s-l'?


>It’s pronounced the same as the English words “mussel” or “muscle”. musl is small like one but powerful like the other.

https://www.musl-libc.org/faq.html

The logo is a mussel :)


/mu:sul/


For non-technical users, why is this interesting?


It's not. But if you work on libraries that transfer http data around, this could be used to help test http/3 support.


Looks like I don't need Cloudflare anymore XD.


Yes, it's "almost" comical. The advantage of having a CDN with a pop in every city, vs. just having 3 or 4 well placed POP's around the world will be marginal once HTTP3 is broadly supported.


I honestly can't tell if this is a serious take or sarcasm. I hope the latter...


Aside from lowering latency, a big "edge footprint" also naturally load-balances and allows for more specialized localization, off the top of my head. I don't have much practical experience here, so correct me if my speculation is off, anyone.


Why do you say this?


I haven't been following TLS 1.3 development. What is the current state of SNI encryption? Is it possible to encrypt the name of the host you're trying to connect to?


That's eSNI and I believe it's part of 1.3: https://tools.ietf.org/html/draft-ietf-tls-esni-04

Not sure what the implementation status is, though.


No, it isn't part of TLS 1.3

At the point where the last drafts of TLS 1.3 were shaping up, Eric (Rescorla)'s initial ideas for how to achieve eSNI had failed, and the extant draft was only a problem statement. It basically said: here is what eSNI needs to achieve, in our opinion; we don't know how to do that.

Between that point and when TLS 1.3 was published, several people brainstormed a proof of concept for how to actually make it work, which has so far led to the draft you've linked.

The eSNI draft is defined as an extension to TLS 1.3, but - since the whole point is to deny snoopers information about who we're talking to - if we have to "fall back" to not doing eSNI because the server isn't compatible, then we've lost.

Cloudflare and Firefox devs cooperate to implement drafts of eSNI, so if you have a recent Firefox and a site which has opted into Cloudflare's trial of this feature, then it works for you, but the drafts will definitely change further, and you should not go building anything based on this draft unless you are able to update to future drafts or abandon it altogether weeks or months from now.


> Cloudflare and Firefox devs cooperate to implement drafts of eSNI, so if you have a recent Firefox and a site which has opted into Cloudflare's trial of this feature, then it works for you, ...

Well, at least not yet with the latest release version of Firefox (v69). Tested with Cloudflare's own page for testing eSNI browser support (and TLS 1.3, DNSSEC & DoH). Firefox supports the other three but not eSNI, according to that page. Even the Dev channel (v71) has no support.

https://www.cloudflare.com/ssl/encrypted-sni/


It's not enabled by default, and not exposed under browser preferences. It's available in about:config under network.security.esni.enabled.


Awesome, thanks for the details. I remembered FF doing something about it and thought it was already official.


Unfortunately, it’s not part of TLS 1.3 yet.

The link you posted is the right one, but it’s to the Internet-Draft. This is the step prior to becoming an RFC, where revisions are stored for everyone (including implementors) to use. So (for example) when you hear someone saying “I support TLS 1.3 Draft 8”, that means they support version 8 of the Internet-Draft.

Once this is finalized and becomes an RFC, you’ll see it updated here: https://datatracker.ietf.org/doc/draft-ietf-tls-esni/ (and I’m sure someone here will post about it!)


[flagged]


Your comment is correctly getting downvoted because it broke both the site guidelines and the Show HN guidelines:

https://news.ycombinator.com/newsguidelines.html

https://news.ycombinator.com/showhn.html

Would you mind reading those and taking the spirit of this site to heart when commenting here? We're trying for a bit better than internet default, and sarcastic dismissals push things in the wrong direction.

Your comment downthread (https://news.ycombinator.com/item?id=21308453) also broke the guidelines. We're really trying to avoid flamewars here, for the same reason that cities avoid flaming buildings. If you'd respect that in the future we'll be grateful.


OK, will do, I'm sorry the discussion went off the rails like this. Rest assured that while the form wasn't the most neutral, the intent was definitely the best possible and absolutely no malice was intended, contrary to the forced interpretations that were given in the responses.


Rephrased the post as https://news.ycombinator.com/item?id=21308812, hopefully we should be good now.


I'll explain it, if you'd like.

Correct, no one is making an argument against what you said, because everyone understands that point. For most people, it doesn't need to be said. No one is ripping out production infrastructure and replacing it with this image. There's no comment anywhere suggesting it.

What you're doing is called "preaching to the choir". You're trying to be a contrarian to show everyone how smart and mature you are. Yolo, amirite? But not you! You're thoughtful and have experience and know not to do this! You're still running Debian Stable!

But really, this is just a cool Show HN project, posted on a Sunday night, and no one cares about your thoughts on the risk of bleeding edge. So they downvote and move on.


Please don't respond to a bad comment by breaking the site guidelines yourself. That makes this place strictly worse. Crossing into personal attack, which you did here and downthread (https://news.ycombinator.com/item?id=21308653), is particularly bad and the sort of thing we ban accounts for, even if another comment was provocative.

We're really trying to avoid flamewars here, for the same reason that cities avoid flaming buildings. If you'd respect that in the future we'll be grateful.

https://news.ycombinator.com/newsguidelines.html


[flagged]


> Does the readme say anywhere "experimental, not for prod use"?

"Built on the edge, for the edge"

It's pretty obvious it's an experiment.

It's reasonable of the author to assume anyone running serious production infrastructure will be prudent enough to not just blithely go ahead and implement this.

Even if it isn't, you could calmly suggest the author add more warnings to the readme. A project like this is no place for that kind of rage.

> I love the unexplained downvotes

Nobody has to explain their downvotes (and it makes for boring reading when people do). But angrily trashing someone's Show HN experiment is long established as being valid grounds for downvoting.


> A project like this is no place for that kind of rage.

I can assure you absolutely no rage was intended, or expressed, in my first message. Was the comment needlessly snarky? Yes. Was it anything beyond that? Absolutely not, and I would appreciate if this didn't get further blown out of proportion. I already apologized for the form of the comment.


> Sorry if I'm being such a joy killer. I guess I've been witness to too many failures for not feeling to call this out before someone gets burned.

But you didn't do it in a constructive way. You did it in a condescending way to make yourself appear better. Yolo, amirite? You could have made a constructive comment, explaining the level of support of the various technologies, their maturity within nginx, etc. All of that would have been beneficial, discussing real-world implications of things.

> Oh right, but no one needs to be told to be careful, right... No one is here to learn anything, as we all already know everything. Makes you wonder what's the point of showing something new in the first place.

No, some people need to be told to be careful. You didn't do that, though. You jerked yourself off. Likewise, the point of showing something new is to get real feedback, which, again, you didn't provide.

> And please spare me the "no-one suggested to replace prod infra with this". Does the readme say anywhere "experimental, not for prod use"?

You're the reason the iron needs to say "Do not iron while wearing clothes"


Because no one is suggesting you use this to "handle all your traffic" anywhere.


The contrary isn't suggested either. I wouldn't point out something that was marked as experimental/not for prod use.


They literally say

> All built on the bleeding edge. Built on the edge, for the edge.

The only suggestion that this is meant "to handle all your traffic" is from you.

It seems like your main complaint is that the author didn't use some specific 'experimental/not for prod use' tag. To my mind, that's exactly what 'bleeding edge' implies, but if you think it doesn't, why not simply suggest that they add such a tag? No charitable reading of the post suggests malicious or careless misleading.


Well, I was actually responding to the OP... but I guess I could have been clearer in my intent - that was exactly what you stated: mark it as experimental/not for anything important.


Fair enough, I guess it doesn't really come across like that.

The author takes contributions, so they'd probably welcome a pull request on the README that introduced that kind of language. I have no idea if they'd merge it or not, but at least they would then reflect on the idea.


Do you really require someone to point this out before running in production?

If anyone doesn't know not to do that, it's on them, not OP.


I simply believe that honesty is important. If I publish something that I know to be experimental, I mark it clearly as such.

And I point it out when others fail to reach that bar, so that they can fix it.


Honesty's great, but breaking the site guidelines is not. Fortunately you can be honest while following the site guidelines in both letter and spirit.

Please see https://news.ycombinator.com/item?id=21308730 as well.



