A fun tale about wildcard certificates for internal subdomains:

The browser will gladly reuse an existing HTTP/2 connection when a hostname resolves to the same IP address. If you happen to have many subdomains pointing to a single ingress / reverse proxy that returns the same certificate regardless of the Host header, you can very well end up in a situation where traffic gets routed to the wrong service. To add to that, debugging this stuff becomes kind of wild, as the browser will keep reusing connections between browser windows (and maybe even between different Chromium browsers).

I might be messing up technical details, as it's been a long time since I've debugged that gRPC Kubernetes mess. All I wanted to say is that having an exact certificate instead of a wildcard is also a good way to ensure your traffic goes to the correct place internally.
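
As far as I understand, the rule browsers apply before coalescing (roughly, per RFC 7540: same resolved IP, and the existing connection's certificate must be valid for the new hostname) boils down to a hostname check like the one below. A minimal Go sketch; the a.internal / b.internal names are made up:

    package main

    import (
        "crypto/x509"
        "fmt"
    )

    func main() {
        // Cert the ingress presents for every subdomain.
        wildcard := &x509.Certificate{DNSNames: []string{"*.internal"}}
        // Cert scoped to exactly one service.
        exact := &x509.Certificate{DNSNames: []string{"a.internal"}}

        // Wildcard: the connection opened for a.internal is also
        // "good enough" for b.internal, so the browser may coalesce.
        fmt.Println(wildcard.VerifyHostname("b.internal")) // <nil>

        // Exact cert: name mismatch forces a fresh connection
        // (and a fresh SNI) for b.internal.
        fmt.Println(exact.VerifyHostname("b.internal")) // error
    }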


Sounds like you need to get better reverse proxies...? Making your site traffic RELY on the fact that you're using different certificates for different hosts sounds fragile as hell, and it's just setting yourself up for even more pain in the future.


It was the latest nginx at the time. I actually found a rather obscure issue on GitHub that touches on this problem, for those who are curious:

https://github.com/kubernetes/ingress-nginx/issues/1681#issu...

> We discovered a related issue where we have multiple ssl-passthrough upstreams that only use different hostnames. [...] nginx-ingress does not inspect the connection after the initial handshake - no matter if the HOST changes.

That was 5-ish years ago though. I hope there are better ways than the cert hack now.


That's a misunderstanding of the ingress controller's "ssl-passthrough" feature.

> This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

> SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation

So if you want multiple subdomains served from the same IP address using the same wildcard TLS cert, and Chrome reuses the connection for a different subdomain, nginx needs to parse the HTTP traffic itself and proxy it to the backends. In ssl-passthrough mode it can only look at the SNI host in the initial TLS handshake, and that's it; it can't look at the contents of the traffic afterwards. This is a limitation of HTTP/TLS/TCP, not of nginx.
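
To make it concrete, here's a rough Go sketch of what an SNI-routing passthrough proxy can and cannot do. This is not how ingress-nginx is implemented, just the same idea; the hostnames and backend addresses are made up:

    package main

    import (
        "bytes"
        "crypto/tls"
        "errors"
        "io"
        "log"
        "net"
        "time"
    )

    // readOnlyConn feeds the client's bytes into crypto/tls without
    // ever writing a handshake response back to the client.
    type readOnlyConn struct{ r io.Reader }

    func (c readOnlyConn) Read(p []byte) (int, error)       { return c.r.Read(p) }
    func (c readOnlyConn) Write(p []byte) (int, error)      { return 0, io.ErrClosedPipe }
    func (c readOnlyConn) Close() error                     { return nil }
    func (c readOnlyConn) LocalAddr() net.Addr              { return nil }
    func (c readOnlyConn) RemoteAddr() net.Addr             { return nil }
    func (c readOnlyConn) SetDeadline(time.Time) error      { return nil }
    func (c readOnlyConn) SetReadDeadline(time.Time) error  { return nil }
    func (c readOnlyConn) SetWriteDeadline(time.Time) error { return nil }

    // peekSNI parses just enough of the handshake to extract the SNI,
    // recording the consumed bytes so they can be replayed verbatim.
    func peekSNI(c net.Conn) (sni string, replay io.Reader) {
        buf := new(bytes.Buffer)
        _ = tls.Server(readOnlyConn{io.TeeReader(c, buf)}, &tls.Config{
            GetConfigForClient: func(h *tls.ClientHelloInfo) (*tls.Config, error) {
                sni = h.ServerName
                return nil, errors.New("peek only") // abort after ClientHello
            },
        }).Handshake()
        return sni, io.MultiReader(buf, c)
    }

    func main() {
        backends := map[string]string{ // hypothetical upstreams
            "a.internal": "10.0.0.1:443",
            "b.internal": "10.0.0.2:443",
        }
        ln, err := net.Listen("tcp", ":443")
        if err != nil {
            log.Fatal(err)
        }
        for {
            client, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func() {
                defer client.Close()
                sni, replay := peekSNI(client)
                addr, ok := backends[sni]
                if !ok {
                    return
                }
                backend, err := net.Dial("tcp", addr)
                if err != nil {
                    return
                }
                defer backend.Close()
                // From here on we only shovel opaque bytes. If the browser
                // coalesces and sends requests for b.internal down this same
                // connection, they still land on a.internal's backend: there
                // is no second ClientHello to route on.
                go io.Copy(backend, replay)
                io.Copy(client, backend)
            }()
        }
    }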


Thank you very much for such a clear explanation of what's happening. Yeah, I sensed that it's not a limitation of nginx per se: it was asked not to do TLS termination, so of course it can't extract the Host header from encrypted bytes. Since I needed to serve gRPC through ASP.NET, it was Kestrel's requirement to do its own TLS termination that forced me into ssl-passthrough, which probably comes from a whole different can of worms.


> it is a kestrel requirement to do ssl termination

Couldn't you just pass it x-forwarded-proto like any other web server? Or use a different self-signed key between nginx and Kestrel instead?
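
For reference, the usual shape of the first option in nginx. Just a sketch, with made-up names, addresses and cert paths, assuming the Kestrel backend accepts plaintext HTTP/2 (h2c) for gRPC:

    server {
        listen 443 ssl http2;
        server_name a.internal;

        ssl_certificate     /etc/nginx/tls/a.internal.crt;
        ssl_certificate_key /etc/nginx/tls/a.internal.key;

        location / {
            # nginx terminates TLS; tell the backend the original scheme.
            grpc_set_header X-Forwarded-Proto $scheme;
            grpc_set_header Host $host;
            # grpcs://... with a self-signed cert would be the second option.
            grpc_pass grpc://kestrel-backend:5000;
        }
    }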


There is definitely that. There is also some sort of strange bug in Chromium-based browsers where a tab can entirely fail to make a certain connection, without even realizing it is not connecting properly. That tab will stay broken for that website until you close it and navigate to the page in a new one.

If you close that tab and bring it back with command+shift+t, it still will fail to make that connection.

I noticed that it sometimes responds to Close Idle Sockets and Flush Socket Pools in chrome://net-internals/#sockets.

I believe this regression came with Chrome 40, which brought HTTP/2 support. I know Chrome 38 never had this issue.


> so much of what the Soviets worked on feels rooted in a sort of “that’ll show them!” inferiority complex

That's what the Cold War was about, innit? Same reason why the US went to the moon.


My perception was that the Cold War was partly about exploiting this trait, forcing them into financial ruin by running races they couldn’t help but compete in.


A short and fun-to-read corrective is Kotkin's Armageddon Averted. The competition, the inefficiency, and the demands of its own power projection were a costly strain, but nowhere close to leading to ruin or an irrecoverable state.


Except the US actually made it to the moon with functional equipment and not Ekranoplanz.


The Soviets put the first satellite and the first man in space, so I bet they had functional equipment too.


First woman in space too, and a very long list of other firsts, some of which remain untouched by anyone else (e.g. landing on the surface of Venus and sending digital pictures back). Valentina Tereshkova went to space in 1963, at a time when women in the US had to ask their husband's permission to open a bank account.


They did not get to the moon though, did they, which is what we were discussing.


I'm not sure I follow you on the energy topic? If anything, the "progressive" shutdown of the nukes was a rather questionable move.

In general, I think there's a relatively healthy balance, albeit, with a fair bit of animosity between the sides.

The thing about "outdated technologies" is - they work. It's really easy to see in the software world, where I will choose a battle-tested technology any day, over the new flashy thing that everybody's buzzing about that is in theory or with many iterations will be great. So why would other industries be different? Surely there will be a drag from people whose wealth depends on the technology/approach being used, but that's just how it works. If I put a lot of effort into developing a thing, I will defend the thing as much as I can, and it's the job of the newcomers to push me over with big enough arguments.


> the "progressive" shutdown of the nukes was a rather questionable move.

Nuclear plants were shut down by the conservatives. They decided this in 2011, when Fukushima happened, and executed it over the following decade, while simultaneously derailing the switch to EVs. A notable part here is that just months before Fukushima, they had stopped the shutdown that had previously been decided on.

Only the remaining three(?) plants were shut down by the acting administration, because there was nothing left to continue anyway. The previous administration would have had to renew the technical checks and order new fuel rods for the plants to continue, which they obviously did not.

Overall, the whole act was a big clusterfuck of competing interests, but this wasn't the progressives' fault, because they never had the power to fail this in the first place.

> In general, I think there's a relatively healthy balance, albeit, with a fair bit of animosity between the sides.

There is no healthy balance when one side makes up nonsensical lies to sabotage an ongoing process that is even to the benefit of their own country.


This is simply not correct. The Green faction and the SPD decided on the shutdown in 2002, when the "progressive" parties had the majority and the conservatives were in the minority. https://www.bundestag.de/webarchiv/textarchiv/2012/38640342_... More than 20 years ago the conservatives could already see that this would end in disaster, and they resisted the change. Then Fukushima happened, and the media and public pressure became too big, so the conservatives stopped resisting in 2011. The "progressive" shutdown was planned by the SPD and the Green faction, and they pushed it through in 2002. The conservatives neither planned nor decided the shutdown.


The conservatives stopped the shutdown in 2010 and reinstated it in 2011. And they continued with the plan for the whole following decade while they were in power. True, the original plan came from the progressives, but the conservatives had the power and executed it. They, too, decided on this, and they had the power to change it for a whole decade, and did nothing.

And the predicted "disaster" never happened. There were no blackouts, and prices even went down. None of the problems invoked by the money-driven fearmongers ever manifested, yet they continue to spread lies about the Greens doing harm and about the oh-so-great (money-wasting) nuclear plants...


While neither exhaustive nor practical taken to its absolute, I frequently return to a phrase from The Brothers Karamazov by Dostoevsky:

> You need to love life more than its meaning.

Purpose in life is important, but for a healthy mind, being lost is fine. It's "the next step", something you will inevitably face again.


I'm not advocating for either side here, just want to explore my ideas for a bit.

During some rough patches of my life, I found solace in being vulnerable with my friends and speaking about how I felt.

At other times, taking a more stoic approach worked wonders. Learning to notice feelings and emotions without letting them lead you into a spiral of debilitating self-pity or self-loathing is, for me, the ultimate life skill. And to be honest, I could only really make progress with such a skill in relative solitude.

The way I see it now, reaching out for help with your mental state is more like going to a doctor: there's a certain threshold of suffering that needs to be reached before you take deliberate action. Practising mindfulness and all the stoic narratives is more like doing fitness: it helps with the ongoing, everyday, low-key pain, and it prepares you for the bigger issues in the future.


By default, when following `<a>` links from site FOO to site BAR, the browser will tell site BAR, via the `Referer` header [1] (the misspelling is part of the spec), that you're coming from site FOO.

If you add the attribute `rel=noreferrer` [2] to the `<a>` tag, the browser will not send that header.
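
For example (URL made up):

    <!-- the browser will omit the Referer header when following this link -->
    <a href="https://example.com/" rel="noreferrer">a more private link</a>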

It looks like the folks from asahilinux dot org somehow altered the behaviour of their website when the Referer was set to Hacker News, so HN added `rel=noreferrer` to those links.

No idea about the context of the drama though.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Re...

[2] https://developer.mozilla.org/en-US/docs/Web/HTML/Attributes...


That's awesome! I did something similar a while ago, but instead of Flappy Bird I merged Wordle with Snake: https://ivan.engineering/projects/snordle/


I got extremely lucky on my first try as the letters were well-positioned, so I got it right after three guesses. And the answer was AWARD.

I did not have such luck on my second attempt, which left me SURLY.

I liked it.


I'm using CyberChef (https://gchq.github.io/CyberChef/) quite a lot for different tasks.

I love the simplicity of writing a program in mere minutes and distributing it with a single link.


This is one of my favourite tools. Especially nice is that you can even run it offline, with no web server.


A very honorable mention for this topic would be https://www.ryeboard.com/.

They're moving quite fast and seemingly in the right direction.


I've been working with a lot of F# lately (thus its compiler and the Rider IDE), and my take on "more powerful machines for development" is that it's a quality-of-life improvement. When IntelliSense (or whatever it's called in different circles) works instantly, as do go-to-definition, find-usages, etc., it's just so much better than waiting even a second for those actions.

The other side is needing to compile large projects regularly, which I didn't encounter much. But if my newest MBP with an i9 took around 20 minutes to compile haskell-ide-engine, I can imagine what compile times you can hit in bigger projects, and how unbearable they could be on less powerful machines.

