
I mean it’s not a great answer, but you could always set up a ULA for your guest network and use NAT. :-P

But I’m with you on prefix delegation sucking. Prefixes change, and that makes all your devices’ addresses change. ULAs solve this. But then you start asking hard questions like “if I’m going to use a ULA anyway, why even use the GUA addresses?” And the answer is a shrug.

I mean, it’s great that you can give real addresses to your devices when you want to host a service on them, but you can always just NAT your ISP-provided prefix to them anyway. You’ll probably want better addresses for them regardless, as those randomly generated host addresses aren’t easy to remember (may as well just start your public addresses at ::1 and increment from there, routing each one to the underlying ULA.)


Doing stateless NAT through prefix translation is still much more pleasant than stateful port mapping.

A static NAT is all you'd need for ULA to public 1:1.

If you have a static NAT you don't need connection tracking on the router.
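For illustration, here's roughly what that looks like with Linux netfilter's NETMAP target (a sketch; the ULA and delegated prefixes are made up, and wan0 is an assumed interface name). One caveat: on Linux the nat table still runs through conntrack, whereas a true RFC 6296 NPTv6 implementation keeps no per-connection state at all:

  # map internal ULA hosts 1:1 onto the ISP-delegated prefix
  ip6tables -t nat -A POSTROUTING -o wan0 -s fd00:aaaa:bbbb::/64 \
    -j NETMAP --to 2001:db8:1::/64
  # and the reverse direction, so inbound traffic reaches the ULA hosts
  ip6tables -t nat -A PREROUTING -i wan0 -d 2001:db8:1::/64 \
    -j NETMAP --to fd00:aaaa:bbbb::/64

Since the mapping is a pure prefix swap, the router never has to remember which internal host owns which flow.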


You’d need more than a /64 from your ISP if you wanted to do a separate guest network with static NAT though.

OP was saying IPv6 makes it hard to do a guest network if all you get is a /64 from your ISP, but stateful NAT can fix that.


IPv6 isn't hard. As the article alludes to, it is different, but it's not even that different.

What is hard is dual-stack networking. Dual-stack networking absolutely sucks: it's twice the stuff you have to deal with, and the edge cases are exponentially more complex, because if your setup isn't identical in both stacks you're going to hit weird issues (like the article suggests) with one stack behaving one way and the other behaving another way.

None of this is the fault of IPv6, it's a natural consequence of the strategy of having two internets during the transition period. A transition period which has now been going on longer than the internet existed when it started.

Maybe 100 years from now we'll finally be able to shut off the last IPv4 system and the "dream" of IPv6 will become a reality, but until then, life is strictly more difficult on every network administrator because of dual stack.

(Note: no, we could not have done it any other way. Any upgrade strategy to IPv4 that extended the address size would have necessitated having two internets. It just sucks that it's been this long and we're still not even at 50% IPv6 usage.)


It doesn't help that the use of an IPv4 address is a status symbol indicating that you are willing to spend money on your infrastructure (and are thus not a spammer). If you attempt to run an email server on IPv6, you will see what I mean.

That's not what I am observing while running a number of email servers which have both IPv4 and IPv6. Spam is only on IPv4, and about 80% of legitimate email is on IPv6.

Yes, but those facts don't reach the email filtering business: there, IPv4 addresses have some reputation, maybe good or bad. IPv6 "just doesn't exist" and has a universally bad reputation. Thus your IPv6 emails will be filtered and dropped more often, even though you are correct that actually IPv6 is an indicator of non-spam. I've got quite a few transport rules to work around those kinds of braindead filters...

> IPv6 "just doesn't exist" and has a universally bad reputation.

I've been sending my mail via IPv6 for ages without issues… did you mean reputation or reputation?


> I've been sending my mail via IPv6 for ages without issues…

In most cases it does work and there are no problems. But especially medium-sized businesses who run their own email through some crappy mail-filtering middlebox file anything that comes from or through an IPv6 endpoint as spam.

> did you mean reputation or reputation?

I don't understand your question. I meant IP address reputation, which is what many mail filtering systems use to estimate spam probability or outright reject email. See for example https://www.spamhaus.org/ip-reputation/


That does not match my experience either. In fact I don't see why any mail server would bother getting and configuring an IPv6 address, listening on it, just to drop all received messages.

Just don't listen on v6: that would achieve your goal, would prevent reputable dual-stack senders (like Google!) from sending messages over v6 that you'll drop, and would simplify your configuration.


In many situations it's a necessity. IPv4-only devices can't connect to IPv6-only servers. So if you have IPv4 clients, your server has to support IPv4.

I found an open email relay on IPv6 once, because nobody scans IPv6 and therefore nobody ever sent spam through it. (Apparently this was caused by a reverse proxy misconfiguration causing the server to think IPv6 connections were from localhost)

Also the address space of IPv6 is huge; it just takes so much time to scan everything. It's not security, but it's a deterrent.

And yet scanning IPv6 is exactly what some people are doing.

IPv6 nodes aren't individual random addresses in a 128-bit address space. They are going to be grouped in subnets, so it makes sense to explore /64 ranges where you know there's already at least 1 address active. There's a pretty decent chance at least some addresses are going to be sequential - either due to manual configuration or DHCPv6 - so you can make a decent start by scanning those. For non-client devices, SLAAC usually generates a fixed address determined by the NIC's MAC address, which in turn has a large fixed component identifying the NIC's vendor. This leaves you with a 24-bit space to scan in order to find other devices using NICs made by that vendor - not exactly an unfair assumption in larger deployments. Much faster scanning can of course be done if you can use something like DNS records as source for potential targets, and it's game over once an attacker has compromised the first device and can do link-local discovery.
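To make that concrete, here's a toy sketch (Python; the prefix and vendor OUI are made up) of how knowing the NIC vendor shrinks the SLAAC search from 2^64 interface IDs to 2^24:

  import ipaddress

  def eui64_candidates(prefix, oui):
      # SLAAC EUI-64: flip the universal/local bit of the first MAC octet
      # and insert ff:fe between the vendor OUI and the device-specific bits
      first = (oui >> 16) ^ 0x02
      for nic in range(2 ** 24):  # only the per-device 24 bits are unknown
          iid = bytes([first, (oui >> 8) & 0xFF, oui & 0xFF,
                       0xFF, 0xFE,
                       (nic >> 16) & 0xFF, (nic >> 8) & 0xFF, nic & 0xFF])
          yield ipaddress.IPv6Address(prefix.network_address.packed[:8] + iid)

  net = ipaddress.IPv6Network("2001:db8:1:2::/64")  # a /64 known to be live
  for addr in eui64_candidates(net, 0x001A2B):      # hypothetical vendor OUI
      pass  # probe addr here (ping, TCP connect, ...)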

It's not going to be extremely fast or efficient, but IPv6 scanning isn't exactly impossible either. It's already happening in practice[0], and it's only going to get worse.

[0]: https://www.akamai.com/blog/security-research/vulnerability-...


Well, and the easiest way is some sort of web bug/tracker/log or traffic analysis.

See what address gets used, and blam.


For what it's worth I have many years of technical experience and a pretty decent knowledge of IPv4 networking, and I found it quite hard - at least when first learning.

The addressing is intimidating, getting your head away from NAT, thinking about being globally routable, the lack of (or variances in) DHCP, ND, "static" addressing via suffixes, etc.

None of this is technically "hard", but it's a significant barrier to entry because it's quite different to what most normal technical users are used to.

Dual stack may create some new problems, but it fixes a lot of others. All of the major cloud providers have awful IPv6-only support, and many prominent web resources that usually invisibly "just work" have little to no IPv6 support.


I think you thought it was hard precisely because you had a decent knowledge of IPv4.

I suspect that if the situation were reversed and we always had IPv6, and someone proposed a new IP suite where you had to use NAT for everything, and all hosts needed to have their addresses hand-picked (or served via a DHCP server that used an ugly hack to communicate with hosts that don't have an IP yet), you'd find that to be the "hard" one.


The one objectively difficult thing about IPv6 is remembering an address. Four base-10 numbers between 0 and 255 fit in working memory, while eight groups of four hex digits don't. Even a shortened address can be difficult to recall.

Your working memory must be better than mine.

I can remember two numbers from 0 to 255, but four? Not in a million years.

Isn't there an idea that our working memory holds seven pieces of information? Which would be two of those numbers and change, if you consider each digit as a piece of information.


Yeah, it makes me think of finding an automatic car difficult because you're used to manual. I have noticed people are often reluctant to give up on something they had to learn, though.

Yes. In my day job I work mainly on containers running entirely in data centers we own; they are fully IPv6, and it’s very nice to work with.

Then there is this one customer: they are colocated in other ISP-owned datacenters, and some of their servers may have full IPv6, some may have only IPv4, and some may be a mixture of both. Supporting them is a never-ending nightmare.


Good news: it's 20 years, not 100, at the most :-)

https://labs.apnic.net/index.php/2024/10/19/the-ipv6-transit...


You can't just assume linear growth. Your link acknowledges this, and the "prediction" you refer to has the right caveat: "we can look at the time series shown in Figure 1 and ask the question: 'If the growth trend of IPv6 adoption continues at its current rate, how long will it take for every device to be IPv6 capable?'". It's using such a naïve model to illustrate that we're still nowhere close to completing the transition, not to say that we'll be done in 20 years at most as you suggest.

Adoption will slow down and my guess is that we'll be stuck with a long tail forever, and only time will tell if that happens at 50% adoption or 90% adoption.


> Adoption will slow down and my guess is that we'll be stuck with a long tail forever, and only time will tell if that happens at 50% adoption or 90% adoption.

But that long tail will probably look like figure 4 of that link, only reversed: IPv4 running mostly as tunnels over a core IPv6 network. Some large cellphone networks are already at that stage, using things like 464XLAT to run IPv4 with NAT over an IPv6 network.


I'm not talking about a long tail of systems and networks which still have working IPv4 support. I'm talking about a long tail of systems and networks which still don't have working IPv6 support.

I would agree with you were it not for the fact that I'm personally investing my time into a FLOSS project to make sure the growth does not slow down by making “tomorrow NOT like today”, but gets a kick in the butt once we reach the 50% deployment inflection point instead. Stay tuned ;-)

Whatever you're cooking up, I wish you luck! My life would be significantly easier if I could just completely ignore one network protocol, be that v4 or v6. So far I have mostly just ignored v6 and everything I do uses v4, but I have no strong preference (except that v4 addresses fit in my brain while v6 addresses do not, which is honestly a not-insignificant reason why v6 is hard)

Here's to hoping, but I agree one way or another one stack needs to go :-)

BTW: Contact info is in my profile if anyone wants the inside scoop. The project just isn't fully ready for public launch yet, but we're well into the cooking.


To add:

The really sad thing about IPv6 is that it's a relic of a bygone era. The internet was envisioned as a thing where every device on every network could, in principle, initiate a connection to any other device on any other network. But then we started running out of addresses, ISPs wouldn't give you more than one per household, so we started doing NAT, and that kind of sealed the internet's fate.

Nowadays, we wouldn't want a world where everything was connectable, even if we had enough addresses. Everyone's network in essence has a firewall. If you run IPv4 and use NAT, you're going to be dropping unsolicited incoming packets no matter what (let alone that you wouldn't know where to route them even if you wanted to let them through), and with IPv6 you'd be insane to allow them.

Devices have "grown up" in a world where we expect the router we're connecting to, to shield us from incoming unsolicited traffic. I certainly don't have the firewall enabled on my linux desktop. Windows I _think_ has it enabled by default, but I often turn it off because I want LAN connections to work. MacOS is probably similar. Suffice to say, if the IPv6 dream happened overnight and everyone's devices were instantly connectable, all hell would break loose. We need our routers to disallow traffic. (Edit: This wasn't all that clear, so let me restate: In both the IPv4 and IPv6 world, you'd be insane to disallow incoming unsolicited traffic, which is why basically everyone blocks it in IPv6 as well as IPv4. I'm not trying to say IPv6 can't do this... quite the contrary: IPv6 routers absolutely do, and should, block unsolicited incoming traffic. But my point is that this is what prevents the original vision of IPv6 from becoming a reality: We can't design software around the idea of direct peer-to-peer communication, without stuff like UPnP. Oh, and UPnP works with NAT and IPv4 anyway so, there's the rub.)

So, even in a pure-IPv6 world, that instantly prevents any startup with an idea to allow true peer-to-peer communication using end-to-end routability. What are you going to do, train your users to log into their router and enable traffic? Approximately 0% of your customers know how to do that, even in a counterfactual universe where IPv6 happened and we all have routable IPs on our devices. Maybe in such a universe, people would be trained to use local firewalls on their device, and say "ok" to popups asking if you want to let software through. But I'd wager that a lot of people would prefer the simplicity of just having their gateway drop the traffic for them.

No, the "all devices are routable" idea came from a naive world where there wasn't a financial incentive for malicious behavior at every turn. Where there aren't millions of hackers waiting for an open port to an exploitable service they can use to encrypt all your data and ransom it for bitcoin. The internet is a dark forest now. We don't _want_ end-to-end connectivity.


Every consumer router I’ve ever seen drops unsolicited IPv6 packets by default. But that doesn’t mean that P2P routing is suddenly impossible.

IPv6 makes P2P routing substantially easier, even in a world of default firewalls that drop unsolicited packets. You can still apply standard NAT busting techniques like STUN with IPv6 devices behind firewalls, and you get much better results because IPv6 removes the need to track and predict the port mapping a standard NAT does. Two P2P systems can both send unsolicited packets to each others IPv6 addresses, with a specific port, and know that port number isn’t going to be remapped, so their outbound packets are guaranteed to have the right address and port data in them to get their respective firewalls to treat inbound traffic on those addresses and ports as solicited.

This is particularly useful when dealing with CGNATs, where your residential devices end up behind two NAT layers, both of them messing with your outbound traffic, which just creates an absolute nightmare when trying to NAT bust. IPv6 means that you’re no longer dealing with three or more stateful NATs/firewalls between two peers, and instead only have to deal with at most 2 (in the general case) stateful firewalls, whose behaviour is inherently simpler and more predictable than a NAT ever is.
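As a rough sketch of the punching itself (Python; the peer's address and the out-of-band signaling step are stand-ins): both sides bind a port, swap (address, port) pairs via some rendezvous service, then send to each other until both firewalls consider the flow solicited:

  import socket

  PEER = ("2001:db8::abcd", 40714)  # learned out of band via signaling

  sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
  sock.bind(("::", 40714))  # no NAT: the peer sees exactly this port
  sock.settimeout(1.0)

  for _ in range(10):  # early packets may drop until both sides have sent
      sock.sendto(b"punch", PEER)
      try:
          data, addr = sock.recvfrom(1500)
          print("direct path up with", addr)
          break
      except socket.timeout:
          continue

No port prediction, and no external observer needed to discover a mapped address: the bound port is the visible port.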


The argument that NAT-busting (firewall-busting?) becomes easier with IPv6 is certainly true, but "easier" isn't necessarily enough to warrant an entirely new protocol suite. If that's the only benefit of IPv6, it doesn't sound worth it to me. UPnP/STUN is still possible with NAT/CGNAT, and you're going to need something like that even if there were no NAT, so to me it sounds like a bit of a wash, at least to the end-user.

(Of course, the other huge benefit of IPv6 is "more addresses", so we need it just for that. But my point is that "global routability" isn't really the dream people think it is. In practice, the only difference between modern GUA-but-deny-by-default IPv6 setups and NAT'ed IPv4 setups is the simplicity of the former for network administrators.)


> UPnP/STUN is still possible with NAT/CGNAT

You should tell that to my ISP, they’ve managed to deploy a CGNAT that’s proven to be completely STUN proof. The only way I can achieve any kind of P2P comms is using IPv6. IPv4 is useless for anything except strictly outbound connections.

> and you're going to need something like that even if there were no NAT, so to me it sounds like a bit of a wash, at least to the end-user.

Not in my case. As above, I simply can’t bust my ISP’s CGNAT, so IPv6 is invaluable to me. Makes a huge difference to me, the end-user.


> If that's the only benefit of IPv6, it doesn't sound worth it to me.

It's not the only benefit, as anyone who's tried to build a large network or merge two networks with overlapping address space will tell you.


IPv6 doesn't actually save you here, because it presumes a cultural norm that people will use globally unique addresses for all their networks. While that's now possible, I imagine many network engineers will use IPv6's private address space instead, to avoid ever having to re-address their network on ISP changes.

> I imagine many network engineers will use IPv6's private address space to avoid ever having to re-address their network on ISP changes.

Except with 10/8 (or 172/12) everyone is using the same address space. How many networks have a 10.0.0.0/24? What are the odds of a conflict for that?

But if you have a ULA fdxx:xxxx:xxxx::/48 address space, what are the odds that all those x bits will be the same for any two sites? That's 40 bits of 'entropy'. Much, much lower (notwithstanding folks doing DEADBEEF, BADCAFE, etc).
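Back-of-the-envelope (Python), birthday-style:

  # odds that any two of N sites picked the same random 40-bit ULA global ID
  N = 1000                      # say, a thousand networks you might ever merge
  p = N * (N - 1) / 2 / 2 ** 40
  print(f"{p:.1e}")             # ~4.5e-07, vs. a near-certainty for 10.0.0.0/24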


Not only that. Having private IPv6 addresses in the LAN is the easiest way to work with two ISPs and have fail-over. The other way is RFC8475, but I don't know of any implementation except my custom script on OpenWrt: https://forum.openwrt.org/t/ipv6-wan-fail-over-without-ipv6-...

I imagine the threshold of org size where they stop taking their prefixes from their ISP and just go get their own from the RIR is .... Not big.

Sure, some network engineers will try and design v6 networks as though they're v4, but everyone else will just go and get a squillion addresses from the RIR that's unique to their org and then just use/announce that.


What about a private home network (for a smart home, for example)?

Home networks don't usually have the problem of having to suddenly merge together with another, similarly-addressed network.

Nothing stops you getting a subnet allocated from your local RIR and announcing that on the internet, if you care about that sort of thing for your home network though. You just need a decent ISP.


'You should switch your working local networking stack so as to help the scenario where the company is bought and you may well not be working there anymore and it's someone else's problem. Or the scenario where the company buys a different company and very likely needs to investigate and remodel the other company's network anyway.'

> UPnP/STUN is still possible with NAT/CGNAT

Sometimes. I've experienced a few networks where even with STUN I'm still not able to get a workable session in IPv4.


Fixing that network you had an issue with to have working NAT-PMP or whatever is going to be infinitely easier than IPv6ing the entire world.

I can't just ask my cell provider to change their CGNAT setup. There's nothing I can do on my end.

The model of having every router be a firewall has created a fair bit of false security, with vulnerable systems being left behind the firewall and simply waiting until a user inside the network does something unwise. It seems a common story where whole hospitals or municipalities get shut down because one host got infected and then all the unpatched systems in the same network got hit.

There is also a bit of conflated use of terms. A firewall on Windows is just a fancy name for permission management. A program wants to open a port and the user gets a prompt asking if they want to allow it (assuming the user has such privileges). It is indistinguishable from the similar permission systems found on phones that ask if an app is allowed to access contacts. The only distinguishable "feature" is that programs might not be aware that the permission was denied, thus opening a port that is not actually open. Windows programmers may know more of the specifics of this.


So your idea is to open up access so all the IT orgs that can't keep their systems secure ...

... which is ... virtually all of them ...

get p0wned on a massive scale? You know, now with AI powered assaults for extra special vulnerability levels...

And then a magic wand happens, and the IT orgs that couldn't even install patches will be able to back-discover all the compromised hard drive firmwares, rootkits, and nth-level security holes after the fact?

I get a NAT wall isn't perfect security. But pretending it is NO security is disingenuous.


Every company I’ve spoken to in the last few years has been moving toward zero-trust networking, where you assume that every single device is on a hostile network. There’s nothing inherently safe and secure about the corp network, so treat it as if all those cubicles were connecting in from Starbucks.

IMO this is the only way forward.


Not the parent, but obviously no. But maybe if in the '90s it had been the standard to not assume that the network was secure, we would be in a better spot now (maybe a spot where company intranets use authentication instead of VPNs).

There is a solution to this: RFC6887.

You have a firewall at the edge of the network. It blocks incoming connections by default, but supports Port Control Protocol. The cacophony of unpatched legacy IoT devices stays firewalled because your ten-year-old network printer was never expected to have global connectivity and doesn't request it. End-to-end apps that actually want it do make the requests, and then the firewall opens the port for them without unsophisticated users having to manually configure anything.

The protocol is an evolution of NAT-PMP (RFC6886) for creating IPv4 NAT mappings, but RFC6887 supports IPv6 and "mappings" that just open incoming ports without NAT.
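To give a feel for it, a minimal sketch of a PCP MAP request in Python ("please allow inbound TCP to my port 8080 for an hour"). The gateway address is an assumption, and parsing the response is omitted:

  import os, socket, struct

  GATEWAY = "fd00::1"   # assumed: the PCP server is the default gateway
  PCP_PORT = 5351       # PCP server port per RFC 6887

  sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
  sock.connect((GATEWAY, PCP_PORT))
  client_ip = socket.inet_pton(socket.AF_INET6, sock.getsockname()[0])

  # common request header: version 2, opcode 1 (MAP), lifetime, client IP
  header = struct.pack("!BBHI16s", 2, 1, 0, 3600, client_ip)

  # MAP payload: nonce, protocol (6 = TCP), internal port,
  # suggested external port, suggested external address (ours; no NAT)
  payload = struct.pack("!12sB3sHH16s", os.urandom(12), 6, b"\x00" * 3,
                        8080, 8080, client_ip)

  sock.send(header + payload)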


Sure, but is it actually a feature you want, though? We're never going to run out of legacy unpatched devices, and today's shiny new devices are going to be tomorrow's tech debt. If you support that kind of hole punching, sooner or later you will end up with vulnerable devices exposing themselves to the internet.

There's no way in hell enterprise admins are going to give that kind of control to random devices, and home users aren't going to have the skills to cull vulnerable devices from their networks. So who's going to use it?

In my opinion the current kind of hole punching is a far better option: instead of creating a mapping allowing essentially unrestricted access from everyone, have the app use an out-of-band protocol to allow a single incoming connection from exactly one vetted source. You get all of the benefits of a direct peer-to-peer connection with none of the risks of exposing yourself to the internet. It's well-researched when it comes to the interplay between firewalls, NAT, and UDP[0].

And that pretty much solves the problem, to be honest. These days you only really need incoming traffic to support things like peer-to-peer video calls. Hosting game servers on your local machine is a thing of the past, everything has moved to cloud services. What's left is a handful of nerds running servers for obscure hobby projects at home, but they are more than capable of setting up a static firewall rule.

[0]: https://tailscale.com/blog/how-nat-traversal-works


> We're never going to run out of legacy unpatched devices, and today's shiny new devices are going to be tomorrow's tech debt.

It's not about new or old. The devices that don't have any reason to be globally reachable never request it. New ones would do likewise.

Devices that are expected to connect to random peers on the internet are going to do that anyway. It's not any more secure for them to do it via hole punching rather than port mapping; causing it to be nominally outgoing doesn't change that it's a connection to a peer on the internet initiated by request of the remote device.

> There's no way in hell enterprise admins are going to give that kind of control to random devices, and home users aren't going to have the skills to cull vulnerable devices from their networks. So who's going to use it?

Enterprise admins can easily use it because they have control over the gateway and how it answers requests, so they can enable it for approved applications (in high security networks) or all but blocked applications (in normal networks), and log and investigate unexpected requests. They should strongly prefer this to the alternative where possibly-vulnerable applications make random outgoing HTTPS connections that can't easily be differentiated from one another.

Whether they will or not is a different question (there are a lot of cargo cult admins), but if they don't they can expect their security posture to get worse instead of better as the apps that make outgoing HTTPS connections to avoid rigid firewalls become the vulnerable legacy apps that need to be restricted.

Home users are already using NAT-PMP or UPnP and this has only advantages over those older solutions.

> instead of creating a mapping allowing essentially unrestricted access from everyone, have the app use an out-of-bounds protocol to allow a single incoming connection from exactly one vetted source. You get all of the benefits of a direct peer-to-peer connection with none of the risks of exposing yourself to the internet.

There are significant problems with this.

The first is that it's a privacy fail. The central server is at a minimum in a position to capture all the metadata that shows who everyone is communicating with. It's made much worse if the payload data being relayed isn't end-to-end encrypted as it ought to be.

But if it is E2EE, or the server is only facilitating NAT traversal, the server isn't really vetting anything. The attacker sends a connection request or encrypted payload to the target through the relay and the target is compromised. Still all the risks of exposing yourself to the internet, only now harder to spot because it looks like an outgoing connection to a random server.

Worse, the server becomes an additional attack vector. The server gets compromised, or the company goes out of business and the expired domain gets registered by an attacker, and then their vulnerable legacy products are presenting themselves for compromise by making outgoing connections directly to the attacker.

Doing it that way also requires you to have a central server, and therefore a funding source. This is an impediment for open source apps and community projects and encourages apps to become for-profit services instead. The central server then puts the entity maintaining it in control of the network effect and becomes a choke point for enshittification.

Meanwhile the NAT traversal methods are ugly hacks with significant trade-offs. To keep a NAT mapping active requires keep-alive packets. For UDP the timeout on many gateways is as short as 30 seconds, which prevents radio sleep and eats battery life on mobile devices. Requiring a negotiated connection is a significant latency hit. For real peer-to-peer applications where nodes are communicating with large numbers of other nodes, keeping that many connections active can exceed the maximum number of NAT table entries on cheap routers, whereas an open port requires no state table entries.

> What's left is a handful of nerds running servers for obscure hobby projects at home, but they are more than capable of setting up a static firewall rule.

The point is to allow interesting projects to benefit more than a handful of nerds and allow them to become popular without expecting ordinary users to manually configure a firewall.


> It's not about new or old. The devices that don't have any reason to be globally reachable never request it. New ones would do likewise.

Doesn't this assume all devices are well behaved, trusted and correctly implemented? You don't think the crap Samsung Smart TV will consider itself to have a reason to be globally reachable?


Interesting, I haven't read much about that. I've always assumed somebody must have written an RFC to automate how home firewalls should be configurable automatically by software that "wants to" allow incoming connections. Good to see that it exists.

But it doesn't seem to be in widespread use, does it? Like, would a random internet gateway from a podunk ISP support this? I kinda doubt it, right? Pretty sure the default Comcast modem/router setup my mom uses doesn't support this.

But I guess my point was about the counterfactual universe where IPv6 was actually used everywhere, and in that universe I suppose RFC6887 might have been commonplace.


Every ISP router I've had with even basic IPv6 support has supported PMP. Every home router I've handled with IPv6 support has supported it. So IME it is in pretty widespread support. Maybe there's some devices out there that don't, but it's at least as wide as UPnP support which was pretty wide.

Things like this can be useful without 100% penetration. Most end-to-end apps support a direct connection as long as either of the endpoints has an open port. Endpoints without one can also fall back to relays, but relays add latency, and since port mapping is frequently used in gaming, users start demanding gateways that support the thing that lowers lag.

Then it starts to make it into popular internet gateways. For example, the Comcast consumer gateways generally do support UPnP, which is a miserable protocol with an unnecessary amount of attack surface that should be transitioned to Port Control Protocol as soon as possible, but it demonstrates you can get them to implement things of this nature.


> What are you going to do, train your users to log into their router and enable traffic?

You do what has already been done for decades. You ship a customer-premises router that denies inbound by default and allows all outbound, and things behave pretty much the same as they do today with these exact same rules in IPv4.

Network firewalls still exist in a publicly routable network. If I want my game console to allow incoming traffic for game matchmaking, I can then do that. Or have systems that auto configure that. But then I don't have to have multiple devices fighting for a limited port range, each device has more IPs and ports than they could know what to do with.


Please re-read my comment, as it seems like you skimmed it and think I’m making a different point than I did.

To use your example:

> If I want my game console to allow incoming traffic for game matchmaking, I can then do that

My point is that because of the fact that even in IPv6, consumer firewalls will block traffic by default, the company that makes the game would not have designed it to require your console to have special configuration on your firewall. Because this is not something a typical gamer knows how to do.

Instead, companies use UPnP for matchmaking, and UPnP works in both NAT and GUA environments, so what exactly does GUA give you?


If you have two Xboxes or Nintendo Switches in a standard UPnP network you'll still have NAT issues. They'll fight over port assignments. If I've got two things wanting to listen on :5000 I'm boned; I've only got one :5000 to forward from. I could remap that port in NAT, but now the downstream device needs to know to tell other clients its actual public port.

You'll have even more problems if you're on CGNAT networks. You're not going to be able to get any of that traffic.

None of this is a problem if each device has its own IP address and its own range of ports to deal with. Every device can have its own :5000, it can know its public IP address without needing something outside to tell it, and with how big assignments usually are it can have dozens of things all listening on a public :5000 all at the same time.


If you want to hypothesize a world where we’re all on IPv6 and thus game matchmaking software can take advantage of the unique addresses for every console, you should also admit that the same software could also just use a unique port for each console today, in IPv4+NAT.

In fact, in IPv6, privacy addresses mean that the address the gaming service observes from my device shouldn’t be assumed to work for other peers, because my device may only be using that address for communication to the gaming service itself. Instead, authors of this software ought to understand that the console itself needs to tell the service “this is the address peers can use to communicate with me”, and thus you may as well just include the port in that call, and then you don’t need to assume port 5000 will work (because if I have two Xboxes, they could decide on a different port if another console already has one used.)

It’s just that I’m disillusioned with the idea of GUAs for every device actually solving anything. It solves like 10% of the difficulty of writing good p2p software. All the other problems are still there: firewall config, dynamic IPs, mobile IPs, security best practices, shitty middleware boxes not doing the right thing, etc etc etc.


My biggest argument would be CGNAT in the end really doesn't care what UPnP you do at the edge of your network; you're not getting that traffic no matter what you do. IPv6 means there's no reason to have CGNAT at all; there's more than enough addresses to go around.

Does IPv6 solve it all? No. Does it solve some of it better than IPv4? Yep. Does it just completely eliminate the need for CGNAT? Yep. It is not a silver bullet to solve all problems, but it does solve some, and for that I'd much rather use it. Because I'd rather just be able to host whatever at home and not need to remember what port is what or rely on proxies looking at other info in the request.

> shitty middleware boxes not doing the right thing

You reduce the need for shitty middleware boxes. I don't need a reverse proxy. I don't need to have a STUN/TURN server. I don't need to randomize ports or worry about running out of ports.


> the same software could also just use a unique port for each console today, in IPv4+NAT.

Why solve a problem once properly in the network stack when you can add an ad-hoc workaround to each individual protocol instead?

> the console itself needs to tell the service “this is the address peers can use to communicate with me”, and thus you may as well just include the port in that call, and then you don’t need to assume port 5000 will work

And if you're behind CGNAT so your home router doesn't know what the address and port are, what then?

Not to mention the case of a multiplayer game between two people who live in the same house and a third person who doesn't. That's one that's trivial with IPv6 but difficult with every IPv4-based system I've seen.

> It solves like 10% of the difficulty of writing good p2p software. All the other problems are still there

It all helps. I mean, people mostly manage to play games with each other today, they just get disconnects or random lag every so often. Cutting down on that, even if it was only 10%, would make a lot of lives better.


For badly designed games maybe? There's no real reason why they need the same port forwarded, it could be any random port.

I'm dealing with the devices I've got and the software stacks they have. In the end they fight for port assignments on IPv4 and expect to just have massive port ranges assigned to them.

> Within the port range, enter the starting port and the ending port to forward. For the Nintendo Switch console, this is port 1024 through 65535.

https://en-americas-support.nintendo.com/app/answers/detail/...

Nintendo tells me to forward all high numbered ports to my console. Because obviously we're only going to have a single one in the house.


Fixing the broken software that somehow requires a specific port and getting NAT-PMP deployed is a hell of a lot easier than redoing all of the software and all of the networks to implement IPv6.

In an IPv6 world, your home router still firewalls just like with IPv4 (the NAT is just an extra accidental firewall-ish thing), and you can still skip the firewall on your laptop.

Having globally allocated address space doesn’t actually imply openness of connectability


I am the last person to be defending NAT, but the benefit of NAT is that it sets up a really good default: incoming connections are not routable by default, and it's very hard to accidentally change this. There are many good ways to have "not routable by default" with IPv6, but you have to do that at the software level, while NAT forces it at a protocol level.

> incoming connections are not routable by default, and it's very hard to accidentally change this

Not just difficult, but impossible, even in principle, because there is more than one device sharing the same IP, so at most one host would be vulnerable. Not the same as with IPv6, where screwing up the defaults leaves your entire network vulnerable.


You can forward all ports to a device, pretty common feature.

Right but that only leaves that device vulnerable. The other devices are not.

In practice, that makes the entire network vulnerable.

Yeah, server-side NAT + use of a reverse proxy makes hosting many websites from one IPv4 a possibility, albeit a relatively difficult one. You are only really in trouble if two consumers want raw TCP/UDP on one port.

> incoming connections are not routable by default

Tons of firewalls ship with this as a default logic, it doesn't require NAT in the slightest.


Parent poster is making the (good and underrated) point that NAT makes this logic failsafe: turn an IPv6 firewall off and you’ve got all incoming connections allowed. Turn IPv4 NAT off and you’ve got no connectivity at all.

So gate it behind a "here be dragons" option or hide it in the GUI entirely for the basic home version.

Or make the "turned off" state block all traffic, just like closing a water valve, or a road gate. I never understood why network firewalls did not default to this.

Turning off the firewall could just as easily be an unnoticed configuration error that causes it to die on startup.

How are you gonna get the IPv6s of your targets? Can’t scan a /64, and with ephemeral ”security” addresses used for outbound conns, an adversary won’t be able to guess the address anyway. At least that’s my understanding. So I guess my question is: is this a real threat?


This reminds me of a company where the admin used public routable IPs for the internal subnet. The gap is always the human in the loop.

This was the norm for most of the 90's and earlier. NAT was supposed to be the exception, but it became the rule. Now we have an entire generation that doesn't understand that the Internet was always supposed to support end-to-end connectivity.

There's nothing wrong with this, and it's common at universities that have large IPv4 allocations.

It's still basically another MAC address or unique identifier for the system, regardless of which local network it happens to be on. Great for being tracked by FAANG, I guess, but I'd rather my devices use a generic local IP and a randomized MAC. There's no reason why the refrigerator needs to be uniquely identifiable on the Internet.

No, v6 addresses are assigned by (or chosen based on) the current network, not permanently associated with a given system. They aren't MAC addresses or unique identifiers.

I wouldn’t want to rely on a random cafe’s router to do this for me. I would still end up running a firewall more carefully on my end devices. Which for my iPhone I’m not even sure I can. So probably a personal VPN would be a must.

Sure... But wouldn't you want to treat this random network as hostile anyway? The router might already have port forwarding to the IP you grab from DHCP, not to mention other clients on the network. I'm also unsure how a VPN would help against inbound traffic regardless?

> I wouldn’t want to relay on random caffee’s router to do this for me. I would still end up running firewall more carefully on my end devices.

Are you not doing that already? If you trust whoever else happens to be on the same wifi in the cafe you're a braver man than I.


Does a VPN prevent inbound traffic on other IPs? If I put my laptop on a VPN, I can still SSH to it on its RFC 1918 address.

It depends on the VPN and its policies. Some deny all local traffic when active, routing everything through the VPN, and only leave an IPv4 /32 route for the default gateway. Some are more permissive.

A VPN can’t prevent inbound traffic but if the VPN alters the routing table it can prevent the return leg from working. This probably isn’t enough to prevent compromise.

> Having globally allocated address space doesn’t actually imply openness of connectability

Of course not, that's not my point. My point is that because of the fact that your home router still firewalls with both IPv6 and IPv4, any software which relies on being able to "just" connect to a peer over the internet, is doomed. Our networks don't work that way any more (they probably did in the early 90's though.)

My point is that even if we had global routability, we still wouldn't have open connectability, because open connectability is a stupid idea. Which means any software ideas people might have that rely on connectability, are already a non-starter. So why do we need open routability in the first place? (Honest question. This is the crux of the issue. Yes, open routability means you can have a host listen on the open internet, but fewer than 1% of people know how to configure their home firewalls to do this, so it's effectively not possible to rely on this being something your users can do.)


> My point is that because of the fact that your home router still firewalls with both IPv6 and IPv4, any software which relies on being able to "just" connect to a peer over the internet, is doomed.

With IPv6 the only thing you need is PCP (or equivalent).

With IPv4 you need PCP/whatever plus a whole bunch of STUN/TURN/ICE infrastructure.

Just hole punching is a lot easier to support than more-than-just hole punching.


If you have PCP (NAT-PMP or whatever), why would you need STUN or TURN? (You still might like ICE, but ICE also makes sense to have on IPV6.)

Yeah, I've had to make this point several times. Getting rid of the NAT doesn't rid you of the need for NAT hole-punching and whatnot, since any sort of firewall will need a hole-punching scheme to allow incoming connections.

I'd say the biggest practical objection (not just "NAT is ugly" or "DHCP is ugly" or "NAT is evil since it delayed IPv6") is CGNAT, which really does put a lot of restrictions on end-users that they can't circumvent. The more active hosts stuffed behind a single NAT, the more they have to compete for connections.

> but fewer than 1% of people know how to configure their home firewalls to do this, so it's effectively not possible to rely on this being something your users can do.

And a chunk of that 1% are on WANs that they aren't authorized to configure even if they wanted to.


Devices behind a firewall being routable with IPv6 doesn't mean you're allowed to connect to them by default. And if you want to, you have to explicitly allow that on the firewall. Just like you would open a port and redirect it with IPv4 before. Only with IPv6 you don't have NAT and a redirect. Some people need it and some don't. Again, just like with opening ports.

Please reread my comment; it seems like you're just pattern-matching to arguments you've heard others make in the past and assuming I'm saying the same thing. I'm not. See sibling comments for elaboration.

How many users know how to "open a port"? UPnP exists because people don't want to deal with these things.

What's your point?

  |              | IPv4                                | IPv6                                |
  |--------------+-------------------------------------+-------------------------------------|
  | With UPnP    | Unsolicited connection goes through | Unsolicited connection goes through |
  | Without UPnP | Unsolicited connection blocked      | Unsolicited connection blocked      |
It's exactly the same in IPv4 and IPv6 just like FeistySkink was suggesting.

> "The really sad thing about IPv6 is that it's a relic of a bygone era."

Yes, and not only for the NAT reason you say; see "The world in which IPv6 was a good design" by Apenwarr, from 2017: https://apenwarr.ca/log/20170810


This is a misconception holding back the adoption of IPv6.

Routers have a stateful firewall, meaning by default they only allow incoming packets that belong to a connection that was initiated from the inside. By default you also have this kind of firewall on every operating system. NAT adds 0 additional security.
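The whole default is a couple of rules. A sketch of what a Linux-based router might ship with (lan0 is an assumed LAN interface name):

  ip6tables -P FORWARD DROP     # unsolicited inbound goes nowhere by default
  ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
  ip6tables -A FORWARD -i lan0 -j ACCEPT   # the LAN may initiate outbound

Exactly the reachability policy NAT gives you, minus the address rewriting.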


Is the alternative reality one where all devices can connect to all other devices and come with built-in mitigations for the problems that firewalls are normally used for?

Maybe. In the alternate reality, we probably would have the same threats on the internet as we do today, and the same boneheaded software that misconfigures things, so my guess is that we’d still have consumer firewalls at the gateway that just default-deny everything. Maybe there would be better standards about auto-configuring these firewalls though. (Or maybe the standards we already have would have just been better supported?)

But for certain, today if we had IPv6 everywhere, it still wouldn’t let you design something like a VoIP phone (or a pure peer-to-peer version of FaceTime) that just listened for connections and let other people “call” you by connecting to your device. That software would only be usable by people who know how to configure their router, and that doesn’t make for very good market penetration. You still need something like UPnP, so you’re basically right back to where we are today. At least the connection tracking would be stateless, I guess?


> You still need something like UPnP

I have two SIP phones both wanting to register port 5061 to forward to their address. How does this work with IPv4/UPnP?


I’m not talking about SIP, I’m talking about a hypothetical piece of software that an IPv6 proponent may typically claim is made possible by unique GUAs for each device. My point is that the GUA alone isn’t enough to make this work, you still need:

- UPnP or something like it

- A place to register your address, since they change all the time even in IPv6

And if you need these two things anyway, you can do this with IPv4, with the added change that you also include the port when registering your address, which makes the “multiple phones in a network” thing work.


> I’m talking about a hypothetical piece of software that an IPv6 proponent may typically claim that is made possible by unique GUA’s for each device.

Like, say, a SIP client? Wanting to listen on the standard SIP port?

Or say two web servers that both want to listen on :443 and you don't want to have to reverse proxy them from another box.

> UPnP or something like it

> https://datatracker.ietf.org/doc/html/rfc6887

Port Control Protocol seems to be pretty well supported on most of the devices I own even in IPv6.

And in the end I'm pretty boned if I'm on CGNAT usually. I can pretty much never get my Switch to do matchmaking with peers on IPv4 when I'm on a CGNAT network. If we were all on IPv6, it wouldn't be a problem.


> Like, say, a SIP client?

No, not like, say, a SIP client.

You seem pretty intent on not reading my whole comments or something. Mind responding to what I’m saying and not just making up your own arguments?

I’m saying “ipv6 alone won’t solve X”, and you’re saying “but what about Y?”


I'm saying "but what about Y" and you're acting like Y is never a valid issue that anyone ever seems to have.

So once again, how do two devices both share port 5061 on a UPnP NAT with a single public IP address? Or even worse, if they're CGNAT'd? It's a simple answer with IPv6...


Again with the not reading. Let me try to sum up my position so we can end this whole “nuh uh!” nonsense.

IPv6 was designed in a world where we thought we’d have a truly peer-to-peer internet where anyone could just talk to someone else’s device. This obviously isn’t what happened. An ipv6 proponent may say “this is because NAT breaks the necessary assumptions”, but that’s extremely oversimplified and wrong.

The reason that, when I use my iPhone to FaceTime my mom’s iPhone, it doesn’t just use SIP to contact her phone, isn’t because IPv4 and NAT. It’s because the very idea of that is nearly impossible to implement at scale even if everyone had a unique address.

I’m aware you can’t have multiple devices behind a NAT address listen on the same port. Thank you for pointing that out, you’re very smart. But it really doesn’t address my point at all, does it?

My point being that the reason we don’t use SIP and related technologies for billions of end user devices today, isn’t because of NAT. It’s because the myriad of other problems that would need to be solved for it to work reliably, and because of NAT. Eliminating NAT wouldn’t really meaningfully change the landscape of how software works. FaceTime calls would still go through Apple servers. People would still use centralized services run in real data centers, etc.


> I’m aware you can’t have multiple devices behind a NAT address listen on the same port. Thank you for pointing that out, you’re very smart.

Ah, finally, you do acknowledge that there are issues that aren't actually solved with IPv4 and NAT. Thanks.

> But it really doesn’t address my point at all, does it?

Lets go back to the first thing I was reponding to.

> it still wouldn’t let you design something like a VoIP phone (or a pure peer-to-peer version of FaceTime) that just listened for connections and let other people “call” you by connecting to your device.

With a public IP address, PMP enabled on my router, and DNS registration, I have this today. My SIP phone here is sitting by, waiting for incoming calls. I don't need Apple's servers for this. Anyone with a SIP client who knows the name can get to it (or trawls IPv6 space). This becomes a headache with IPv4 with multiple devices all wanting that 5061 port. Sure, one could just also tell people the port and have lots of random ports assigned, but that's just yet another bit of information to get lost, one more mapping to maintain, etc. Imagine if Amazon ran their ecommerce site off a random high-numbered port instead of :443; think they'd get as much traffic?


> With having a public IP address, PMP enabled on my router, and DNS registration I have this today.

That’s cool for you, congratulations.

I’m talking about the broader internet at large, and the billions of users on it, and the software that is written for these billions of users. This software does not behave the way your cool phone does. Because people’s IPs change. They’re behind firewalls they are unable to configure or don’t understand how to. We don’t write software this way for a lot of good reasons.

Now, I keep talking about this, then you bring up “but I have this use case, what about that?” Demanding an answer as if I give a shit that you want to run a SIP phone at your house, and you’re willing to configure your firewall, etc.

My whole point (!!!) is that what you’re doing is not what the billions of people using the internet are doing. When I bought my mom an iPhone, I didn’t have to tell her “ok so, FaceTime only works at your house, you have to reconfigure this DDNS service when your IP changes, you have to configure your router this way, and don’t ever leave the WiFi network or it won’t work, but thank god for IPv6, because it means dad can do all this too for his phone!” No. Because FaceTime doesn’t work that way, nor could it ever, because doing true peer-to-peer calling was never a thing that would have ever worked at scale. It’s not being held back by NAT. It’s being held back because it was never gonna work in the first place.


> and you’re willing to configure your firewall

I didn't have to configure the firewall, that happened automatically with PMP. That thing you didn't even know existed until a few hours ago.

But somehow you know all that is and is not possible. All that could have been if things were different.


Sick burn bro

I mean if you’re just going to sling ad hominems around and invent your own argument so you have a chance at winning it, sure. I’m not aware of the specific RFCs du jour that are used for auto-configuring routers. But after spending some time looking it up, it seems relatively niche (not as widespread as UPnP), and it appears the last 3 routers I’ve owned don’t support it. (I run an OpenBSD box with pf currently, but my UniFi gateway before that didn’t support it, and the shitbox gateway Comcast gave me before that certainly didn’t support it. I could run a service on my OpenBSD box to support it but it wouldn’t make a difference, because I’m perfectly capable of editing my pf.conf as it is.)

But if you’re gonna dig up other posts of mine to try and get a dig in at me, maybe at least read the rest of the post? You certainly haven’t done a good job of that so far.

Because I indicated that of course a protocol like this ought to exist, but what percentage of users of the broader internet are actually running a router that supports this? If you wanted to come out with a video calling app that only worked if your users had PMP working, what market share would you be losing out on? That’s the topic of discussion here after all (not that that’ll stop you from inventing your own discussion as you’ve continued to do.)


> I’m not aware of the specific RFC’s du jour

Funny way to say you don't know what you're talking about. If you don't know what technologies are actually out there how can you say what is and is not possible?

> but what percentage of users of the broader internet are actually running a router that supports this?

Most who aren't running their own home rolled pf setup that can't be bothered to read about "the latest" (over a decade old) RFCs.

And yeah, last I ran a UniFi gateway it supported PMP. That was several years ago. If it's a recent model device with halfway decent IPv6 support it's practically a sure thing it's got PMP support. Maybe disabled, but it supports it. Same with UPnP, might be disabled, but probably supported.


I guess you really enjoy feeling superior so I’m just going to give you this one. You win. You’re very smart and you know more about router software than I do.

(I’ll leave you to ponder whether this changes my point at all, but I don’t think it really matters. You seem to be pretty fixated on getting your digs in, so maybe we’ll just end the discussion here. Good day.)


> I guess you really enjoy feeling superior

> Sick burn bro

> My post was basically the biggest pile of sarcasm I could conjure and you still took it seriously, congratulations!

> That’s cool for you, congratulations

> not that that’ll stop you from inventing your own discussion

> I don’t think I’ve met someone that truly thinks technological progress stopped in the 1990s and that URL’s and DNS are all we actually need.

And yet you accuse me of ad hominems for pointing out your acknowledgement of not knowing the last decade+ of networking tech, and say that I have a need to feel superior. Pot meet kettle, buddy.

You've spent so much effort telling me what's possible or not, berating me multiple times, while acknowledging you haven't kept up with decade+ old tech.


The two devices shouldn't both need to be on the one port 5061: we have numerous solutions to that problem at this point, from specifying the port as part of the connection to auto-detecting it using SRV in DNS. Yes: making some small changes to existing software to not assume hardcoded ports is required in this future... but there is no way that those changes would have been (or frankly even still are) harder than the insane solution of reimplementing and redesigning the entire world--requiring its own changes to every protocol (including SIP) and breaking a strict and extremely large superset of the same references that merely needed a port number--to support IPv6.
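For example, with dnspython the SRV lookup is a few lines (a sketch; the domain is a placeholder):

  import dns.resolver  # pip install dnspython

  # ask DNS which host:port serves SIP-over-TLS for the domain,
  # instead of assuming every device shares :5061
  answers = dns.resolver.resolve("_sips._tcp.example.com", "SRV")
  for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
      print(f"try {rr.target} port {rr.port}")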

Why would they need the same port?

Because they don't want to have to rely on an external service to find the right port. Just like how your browser doesn't need to be told to connect to :443 for HTTPS. If you're connecting encrypted SIP, the port is assumed to be 5061. Just like HTTP is assumed to be 80, SSH is 22, etc.

In what world do you not need to rely on an external service to find information like this? IP addresses change, people move around, etc etc.

The topic of discussion right now is peer-to-peer software, which is the quintessential thing that is always lauded by ipv6 proponents as the killer app of ipv6, because each device has its own globally-routable IP.

But “what address do I send these packets to?” is like 10% of the issue with designing p2p software. Users’ IPs are always changing. They move around. They go behind firewalls that will block their traffic. They go behind firewalls they don’t have permission to reconfigure. This is the case in IPv4, and it will continue to be the case even in a 100% IPv6 world.

IPv4 can work today for p2p use cases if you don’t make the assumption that the port is static. But that’s just one of N assumptions you have to check if you’re writing peer-to-peer software. The other N-1 are all still issues, even in IPv6.


> But “what address do I send these packets to?” is like, 10% of the issue with designing p2p software

So wouldn't it be nice to go ahead and solve that 10% of the problem instead of just limping along with it?


I hope you're not implying that IPv6 solves the problem of knowing what address to connect to.

If you can avoid an explicit port, that can be nice. But anywhere you're using an IP, v6 without a port is longer than v4 with a port. An implicit port is only useful when you're using DNS. (And if you're designing a service, you can choose to use DNS records that contain the port.)


I'm all for things to start using SRV records as well!

But the external service needs to know your ip, so why not port as well?

We have this thing called NATMap: https://github.com/heiher/natmap

It gives you a public dynamic IP + port combination if your network is NAT1 (full-cone) all the way out to the internet. Both the IP and the port being dynamic complicates things, so they ended up inventing a new DNS record type, "IP4P", where the IPv4 address and port are encoded as an IPv6 address, and modified WireGuard/OpenVPN clients are required.
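
Decoding such an address is just bit twiddling; here's a hedged Python sketch (my reading of the NATMap README puts the port in the third-to-last 16-bit group and the IPv4 address in the last two, but treat the exact field layout as illustrative, not authoritative):

    import ipaddress

    def decode_ip4p(addr: str) -> tuple[str, int]:
        # Assumed layout: port in bits 47-32, IPv4 address in bits 31-0.
        bits = int(ipaddress.IPv6Address(addr))
        port = (bits >> 32) & 0xFFFF
        ipv4 = ipaddress.IPv4Address(bits & 0xFFFF_FFFF)
        return str(ipv4), port

    print(decode_ip4p("2001::abcd:c000:201"))  # ('192.0.2.1', 43981)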

We are supposed to solve this using SRV records, but I don't think many consumer-facing apps can do this.


Did you have to look up something to know this site is hosted on :443 or did you just make an assumption? Some kind of standardized port you connected on? Wouldn't it be nice if every device could just assume the standardized ports were available for them if they wanted them?

VOIP systems don't really work like random webservers though.

There's little reason why they couldn't though. Why shouldn't I just call Apple by doing sip://support.apple.com ? Why can't people just call me at sip://desk-phone.vel0city.whatever (they actually can, if they know the right name) or sip://cell.hikikomori.apple.net ?

Instead, we're locked into proprietary platforms or relying on old phone systems.


Ok so try calling my phone at sip://ninkendo.example. Oh, I’m actually not at my house. I’m traveling and my device is moving from one cell tower to the next, getting a new IP address every few minutes. Then I get to my friends house and his router is gonna block unestablished connections so you better call me soon.

No, there’s a reason software like FaceTime, and calling in general, are mediated by centralized services and implemented as client-server. Because relying on your users to have stable IPs, and to always be behind a properly configured firewall, would be an utterly insane way to design something.

Hence “ipv6 is a relic of a bygone era”, because in the era we live in now, IP addresses change and users move between networks constantly, and NAT is very much the least of our problems.


FWIW IPv6 can handle this scenario just fine, by letting you take your home IP with you when you move to other networks. I don't think "v6 is a relic of a bygone era" is fair to say when it has a way to handle this exact problem.

How does IPv6 let me take my IP with me, exactly? Citation needed. I mean, I have WireGuard installed on my phone and I get a tunnel to my house wherever I go, but that’s not really the same thing, is it? You’re saying there’s a way for me, wherever I go (my house, 5G, my friend’s house), to just have traffic routed to my phone at a consistent IP address? Not a VPN? Through something only IPv6 provides? Could you explain what this is? Because my ISP at home won’t even give me a consistent IP, not even a stable prefix.

It kind of exists in IPv4 too, but it's a lot messier, and the lack of IPv4 addresses makes it much harder to do something like this over IPv4.

https://datatracker.ietf.org/doc/html/rfc6275


> my device is moving from one cell tower to the next, getting a new IP address every few minutes

You're pretty much never getting a new IP at every cell tower.

> Then I get to my friends house and his router is gonna block unestablished connections so you better call me soon.

PMP can solve that.

> Because relying on your users to have stable IP’s

Not a stable IP, but some kind of stable identity. Like a DNS name. That can be provided by someone like Apple, or Google, or whoever. It doesn't need to be ultra-centralized by only a single thing controlling it.


Wow.

My post was basically the biggest pile of sarcasm I could conjure and you still took it seriously, congratulations!

Am I to understand your stated position is that we should have designed mobile phones to require (1) dynamic DNS that instantly converges when I move to another network, with every single mobile customer getting a DNS name, and (2) every network you connect to, including cell towers, knowing to reconfigure its firewall rules to accommodate people coming and going? And furthermore, that this would totally be how all phones work today, if not for IPv4 and NAT standing in the way?

I guess I’m… impressed? I don’t think I’ve met someone who truly thinks technological progress stopped in the 1990s and that URLs and DNS are all we actually need.

I can’t wait for you to explain how you think text messaging ought to work.


> I don’t think I’ve met someone who truly thinks technological progress stopped in the 1990s and that URLs and DNS are all we actually need

I never said this. You're putting some rather strong words in my mouth. Just giving an example of something that could have been if it wasn't for NAT and an acceptance that you need a third party platform to allow you talk. Feel free to change out the pieces to whatever.

Personally I'd like to be able to just dial a FaceTime-like client from any device instead of just what Apple blesses with accounts Apple allows. And have that stack just be the norm. I understand NAT isn't the only headache that prevents this, but it is one of the several that does.

I don't think it's crazy to think easy dynamic DNS services could have become common if advertised right. People don't know how phone numbers work, but in the end they know how to use them. The tech for routers to auto-configure firewall rules based on client requests exists, and potentially, if everyone-has-a-public-IP were common, we'd have seen less reliance on brittle edge firewalls and NAT to provide so much of our security.

Killing NAT doesn't solve all the problems, but it does solve some of them. And I'd prefer solving the ones that help give users more freedom instead of accepting things like CGNAT.


This seems like it's conflating problems?

"All devices are routeable" is a good idea, because it means when we want devices to be routeable there's a simple and obvious way that should be done.

Where we've ended up with NAT, though, is worse: we still need a lot of devices to be routable, and so we enable all sorts of workarounds and non-obvious ways to make it happen, giving both an unreasonable illusion of security and making network configuration sufficiently unintuitive that we're increasing the risk of failures.

Something like UPnP, for example, shouldn't exist (and in fact I have it turned off these days because UDP hole-punching works just fine, but all of this is insane).


NAT is not an illusion of security; it's very practical security.

Just look at the distribution of successful attacks in 2000 and in 2025.


Yes, because NAT was the only notable change in home computing between 2000 and 2025. Totally not all the other changes that happened, 100% NAT. Yep.

NAT was invented to connect entire businesses to the internet that would otherwise have had to manually reconfigure every single device for it. Keeping IPv4 alive amidst address exhaustion was a lucky coincidence.

Your comment comes from a place of ignorance.

> We don't _want_ end-to-end connectivity.

YOU dont want that.

The wish not to have all devices connected to the internet does not defeat the need for a protocol that allows it. NAT is merely a workaround that introduced a lot of technical debt because we don't have enough IPv4 addresses. And having IPv6 does not mean that you must connect everything to the internet.

IPv6 is good and absolutely necessary. NAT is really expensive and creates a lot of problems in large networks. Having the possibility to just hand out routable addresses solves a lot of problems.


I'm starting to notice a pattern in these threads in HN every time IPv6 comes up. It's kinda like the vim/emacs holy war back in the day. I also see it in threads about Rust vs C++.

There's a kind of holy war, where people pick sides, and if you're "team IPv6", you look for any post that kinda vaguely smells like it defends IPv4, call them ignorant, and respond with the typical slew of reasons why IPv6 is better, NAT is a hack, you can still firewall with IPv6, etc. If you're "team IPv4" you look for the IPv6 posts and talk about how NAT is useful, IPv6 should have been less complicated, etc etc.

It's really tiresome.

Personally, I'm not on either "team". These things I believe:

- We need more addresses

- IPv6 is as good a solution as any, as no solution was going to be backwards compatible.

- We should get on with it and migrate to IPv6 as soon as possible. What's the hold up? I'm sick of running dual stack.

- But don’t throw away NAT, please. I want my internal addresses to be stable and, if possible, static and memorizable. (An fd00::1 ULA [0] is even easier to remember than 192.168.1.1! And it won’t change when my ISP suddenly hands me a new prefix.)

- Oh, and don’t pretend that IPv6 would have led to a more decentralized internet. A client/server model is inevitable for so many reasons, and true peer-to-peer, where customers just communicate directly with one another, is not a realistic goal, even with IPv6. Even with PMP/PCP. Even with DDNS. You just can’t design software around the idea that “everyone’s router is perfectly configured.”

[0] No, I don’t actually use fd00::/64 as my ULA prefix; I actually put my cellphone number in there :-P. The point being, it’s as memorizable as you want it to be.
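
For what it’s worth, RFC 4193 wants that 40-bit Global ID to be random rather than memorable, precisely so independently generated prefixes are unlikely to collide if networks ever merge. A minimal sketch of generating a compliant /48 (Python, stdlib only):

    import os
    import ipaddress

    # RFC 4193: fd00::/8 plus a random 40-bit Global ID gives your /48.
    global_id = int.from_bytes(os.urandom(5), "big")
    prefix = ipaddress.IPv6Network(((0xFD << 120) | (global_id << 80), 48))
    print(prefix)  # e.g. fd4e:2f8a:91c3::/48, different on every run

A phone number in the prefix is certainly more memorable; it’s just not what the RFC recommends.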


>The really sad thing about IPv6 is that it's a relic of a bygone era. The internet was envisioned as a thing where every device on every network could, in principle, initiate a connection to any other device on any other network. But then we started running out of addresses, ISP's wouldn't give you more than one for a household, so we started doing NAT, and that kind of sealed the internet's fate.

I thought the problem was lack of upload bandwidth for almost all households without fiber to the home. Surely, if people had symmetric broadband internet at home, then there would have been commercial solutions to allow devices to connect to each other, thereby decreasing the need for giant cloud providers.


I don't want all my devices to have their own unique address. The internet dream that created ipv6 is dead. It's malware, spyware, surveillance capitalism now, and the more obscurity through layers of addresses, the better.

Oh god, no, no, no. It's so frustrating to hear this over and over again. NAT is not a firewall. A firewall is a firewall, and any reasonably competent network admin uses one (and likewise any reasonably competent ISP supplies equipment with one to users who don't know any better). A little knowledge is a dangerous thing...

Can you pick a specific thing in my post to argue against? I never claimed NAT and firewalls are the same thing. I claimed that whether you’re using IPv4 with NAT, or IPv6 with a default-deny firewall, you can’t have software just “assume” that it can listen to incoming connections and have things just work. You need to go to your router and configure something. Since approximately 0% of people in the world at large know how to do this, it means software can’t work this way in general, and IPv6 doesn’t change this.

Ok, but things "just working" isn't the only advantage of v6. Are you aware some people don't even get v4 address at all? Have you ever been unfortunate enough to have your LAN v4 range overlap with a network you VPN to (like a corporate network)? What if you want to run two web servers?

A lot of people seem to think I’m anti-IPv6… I’m not, actually. I run it at home, I evangelize for it, I think it’s the future, and the sooner we move to it, the better. It solves real problems, the shortage of addresses being the most obvious.

What I’m against is the oversimplification that if we’d only had IPv6 from the start, the internet would have turned out very different and we’d be directly connecting to each other through direct unsolicited connections or something.

No, there are a lot of reasons direct unsolicited peer-to-peer communication is not a good expectation to have on the modern internet. NAT makes it tough, yes, but so do security (you’re gonna want to firewall your network at the gateway anyway, even in v6), changing addresses, mobility, etc.

For instance, even in my network at home, I don’t think using my real GUA prefix on each device is a great idea. Because the prefix changes, I have to use ULA for any static configs (DNS, etc.), which means the GUA address is this extra address I don’t use, and it only complicates things. So I’m moving toward ULA-only with NAT66, which I think is the saner choice. I get the benefits of lots of addresses, which is great, but instead of my firewall simply allowing traffic to my public stuff, I just NAT a public IP to each public thing I want.


Maybe I'm blind, but I haven't seen anywhere else in the thread where "we'd be directly connecting unsolicited" is mentioned apart from where you've prompted it.

I don't think any reasonable engineer expects that that end would ever be the case, even if it was one of the stated original "dreams" of IPv6.


> I don't think any reasonable engineer expects that that end would ever be the case, even if it was one of the stated original "dreams" of IPv6.

Then imagine my frustration when that's the exact argument people are giving: https://news.ycombinator.com/item?id=43072844

Like, I come up with these scenarios as a strawman to illustrate that direct client-to-client connections aren't going to work, and people come and say "actually it totally could work! We have PCP, and DDNS! Cell towers have mobile IPs too, so you're only going to change IP addresses 3 times on your drive to your friend's house! And his router will also support PCP, so it could have totally worked this way!"

I literally came up with the example to sound as crazy as possible, and people still say "Yup, that's exactly how things could have worked if we had IPv6. Now look at how stupid ninkendo is for not knowing about PCP, the moron~"

Like imagine if Steve Jobs came on stage in 2011 to introduce FaceTime and said "You can make video calls to other people on iPhones, it's great! But you have to know their IP address. Or subscribe to a DDNS service. Make sure to use really low TTLs on your DNS records in case you roam to a different network. Oh, and you have to have a router which supports PCP. Every router you connect to must support it." Even in a world where IPv6 was everywhere, that would be insane. (Well, other commenters seem to think that's exactly how phones ought to work? Maybe I'm the crazy one?)


How would you design a NAT-based LAN that shares nothing in common with a firewall?

1 to 1 NAT isn't doing any firewalling. Source NAT isn't doing any firewalling. Static NAT isn't doing any firewalling. All of these will generally just send traffic along if there's no other firewall rule associated.

When most people say "NAT", in the home user / consumer sense, they really mean "PAT": ports mapping to multiple private IPs using a single public IP.

"PAT" is not doing any firewalling either.

(It does rely on state tracking, which it shares in common with stateful firewalls, but you could run a LAN with either a stateless firewall or no firewall if you really wanted to meet the "nothing in common with a firewall" requirement.)


When you say IPv6 isn't hard, do you mean implementing a v6-only network stack isn't hard, or understanding IPv6 isn't hard? Depending on which layer one lives on, I feel like the lack of network effect makes supporting v6 difficult. GitHub still isn't on v6, some load balancers will prefer the v4 address over v6, etc...

Understanding it isn’t hard.

Obviously you can use (just) IPv6, e.g. GitHub.


ISPs aren't helping either. Ziply Fiber only provides IPv6 with their 10 Gig and up plans, starting at $300/month.

> What is hard, is dual stack networking. Dual stack networking absolutely sucks, [...]

Clearly you have gotten some unfortunate hands-on experience with those edge cases. In my use, the dual-stacking of IPv4 & IPv6 has been a big benefit, in that I have to worry a lot less about locking myself out of systems, as I can always reconnect and correct configuration mistakes on the second stack.

Comparing the IPv4+IPv6 dual-stack story to the one from the 90s of IPv4 with IPX/SPX (Novell Netware) and/or NetBIOS (Microsoft Windows), the current state is a lot more smooth and robust.

I've so far only run into 3 issues over the years (2013 till now): a local train operator running IPv6 on routers that didn't properly support non-default MTU sizes (fragmented packets getting dropped), and Microsoft, where GitHub is still IPv4-only and Teams recently developed an IPv6-only failure on a single API endpoint.

Should we ever turn off IPv4 during my working life-time, I hope we have at that point introduced a successor to IPv6, so I can keep using a dual- or triple-stack solution, using different network protocols for robustness.


One major annoyance of IPv6 is that it lacks support for an equivalent of IPv4's whole 127.0.0.0/8 loopback range.

What are the use cases for it, beyond the regular .1 address, that you don't get from ::1?

It's very useful for cooperatively pretending that a single machine is actually a network of machines, without all the madness of network namespaces.

Basically every server program ever supports the notion of "which IP address am I supposed to bind to?" Normally the answer is either "localhost" or "whatever my LAN or WAN address is, for a particular interface". And specifying this (both for servers and for clients) is much less effort than replacing all the standard-ish ports with ad-hoc ones.
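
A minimal sketch of the trick in Python (the port is arbitrary; on Linux all of 127.0.0.0/8 answers out of the box, while macOS needs the extra loopback addresses aliased on first, as a comment below notes):

    import socket

    # Two "machines" on one host: same port, different loopback address.
    a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    a.bind(("127.0.0.1", 5432))
    a.listen()

    b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    b.bind(("127.0.0.2", 5432))  # no conflict with the socket above
    b.listen()
    print("both listening on port 5432")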


You can literally do the same thing with IPv6.

Just assign multiple IPv6 addresses to your loopback interface and everything will work as expected. You can use either a private IPv6 range or even the link-local IPv6 range.

There’s no need to waste thousands of addresses on this use case.


Which is configuration that requires arcane invocations and permissions I might not have.

As opposed to IPv4, where everything Just Works™.


I wish that were true. macOS, for example, only assigns 127.0.0.1/32. You have to assign extra IPs manually, the same way you would on IPv6.

That is a Mac problem, not an IPv4 problem. The Mac does a lot of pretty stupid things.

As someone who used Windows and Linux my whole career and is now forced to use a Mac for a new job: I completely agree. I was right, the Mac is a very weird thing overall.

…until it doesn't, when something looks at the configured addresses. The "other" addresses in 127.0.0.0/8 will easily have weird behavior in nontrivial setups.

> waste thousands

Yeah would hate to run out


I love this feature so much when jumping back and forth between two projects that each expect exclusive control of Postgres when devving. All I have to do is set the bind address for one to .1 and the other to .2 and all is good.

I was counting on that just a few days ago and failed spectacularly when Windows SMB client refused to connect to 127.0.0.2

I’m gonna get downvoted for this, but IPv6 has lost, and they’ll have to come up with something else, with enough benefit for the user to achieve widespread adoption, to beat IPv4.

You can run a pure IPv6 network that can connect out to both, though. You don't need to run dual stack.

A prefix could have been allocated in IPv6 containing the entire IPv4 space, and then all ISPs could simply have translated between the two at the edge, which would have made it possible for all normal consumer/company devices to carry only an IPv6 stack.

The dual stack strategy was criticized 20+ years ago on these grounds, it seemed pretty obvious to me, and as you point out critics were right.

> Note: no, we could not have done it any other way. Any upgrade strategy to IPv4 that extended the address size would have necessitated having two internets

No, this isn't right. Instead of having IPv6 we ended up extending IPv4 by using port numbers as extra address bits and using NAT. This transition was painless (for the most part), and didn't require building a second internet.

In a better and saner world we could have got roughly the same thing - except conceptually simpler and standard - instead of IPv6.


> and the "dream" of IPv6 will become a reality

Depending on what that dream is, it may already be a reality in the upper layers (WebRTC, WebSockets, QUIC/HTTP3, etc.). Maybe this is why not many really care whether IPv4 goes away?


This is a little self-contradictory: saying that it's "not that different", then going on to mention "the strategy of having two internets".

Having two internets suggests that it's totally different, in the sense that there is no address space overlap between IPv4 and IPv6, an inherent design flaw of the IPv6 protocol. Just to provide the archetypal links:

This was documented about 25 years ago by DJB:

https://cr.yp.to/djbdns/ipv6mess.html

And has been repeatedly discussed on HN:

https://news.ycombinator.com/item?id=10854570

As you mention, IPv6 has existed for the majority of the commercial internet's history; now, 25 years later, it's still not the default transport protocol.

It _was_ possible to create an address space extension that was IPv4 backwards compatible; this option was just not chosen, and now we're still dealing with the dual stack mess.


> Having two internets suggests that it's totally different, in the sense that there is no address space overlap between IPv4 and IPv6, an inherent design flaw of the IPv6 protocol.

It is not a design flaw of IPv6; it is a limitation of the laws of physics: how can you fit >32 bits of data (IPv6) into 32-bit data structures (IPv4: the s_addr field of struct in_addr)?

Even if the IPng protocol could recognize IPv4 addresses and send out data, how could a host which only understands 32-bit addresses send a reply back to the IPng host? If a packet with a >32-bit address comes in, how would a 32-bit-only host be able to handle it?

From Bernstein:

> In other words: The current IPv6 specifications don't allow public IPv6 addresses to send packets to public IPv4 addresses. They also don't allow public IPv4 addresses to send packets to public IPv6 addresses.

How would a non-updated IPv4 system know about the IPng protocol? If a non-updated router had an IPng packet arrive on an interface, how would it know how to handle and route the packet?

All the folks criticizing IPv6 never describe how this mythical system would work: how do you get a 32-bit-only system to understand more-than-32-bits of addressing? Translation boxes? Congratulations, you've just invented 6rd BRs or NAT64, making it no different than IPv6.

The primary purpose of IPng was a bigger address space, but how would that work without updating every single IPv4 device? You know, like had to be done for the eventual IPng, SIPP (aka IPv6).

* https://datatracker.ietf.org/doc/html/rfc1752


>how do you get 32-bit-only system to understand more-32-bits of addressing

Maybe you don't. I imagine it would work like this (where XXXX::<IPv4> is the transition block mapping the current Internet v4 into IPv6):

- A 32-bit host could only talk to other 32-bit hosts, and to XXXX::<IPv4 address> hosts. We have this now, just without the XXXX::<IPv4> part.

- Hosts with v6 addresses outside of XXXX::<IPv4> could not talk to IPv4 addresses without having a 6to4 proxy. We also have this now.

- Hosts with v4 addresses could switch to IPv6, KEEPING their current v4 address, by setting their IPv6 address to XXXX::<IPv4>. They can now talk to all the hosts they used to be able to, AND they can start talking to IPv6 hosts, without having to have two IPs, a dual stack config, etc.

So we end up with a significant benefit of allowing people with IPv4 addresses and IPv6 connectivity to do an IPv6-only setup.
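
Mechanically, this kind of embedding is nothing exotic; the well-known NAT64 prefix 64:ff9b::/96 (RFC 6052) already packs IPv4 into IPv6 this way, so it can stand in for the hypothetical XXXX:: block in a quick sketch:

    import ipaddress

    PREFIX = ipaddress.IPv6Network("64:ff9b::/96")  # stand-in for XXXX::/96

    def v4_to_v6(v4: str) -> ipaddress.IPv6Address:
        # Embed the 32-bit IPv4 address in the low 32 bits of the prefix.
        return ipaddress.IPv6Address(
            int(PREFIX.network_address) | int(ipaddress.IPv4Address(v4)))

    print(v4_to_v6("192.0.2.1"))  # 64:ff9b::c000:201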

In my case, I simply don't see us transitioning to IPv6. Our main service has an IPv4-only address and we receive 0 complaints about it. We've literally never had anyone say they couldn't connect to our services because of it. Our users are geographically located in the central US, and everybody has IPv4s. Maybe they have an IPv6 as well, but if we went v6-only I can basically guarantee that we'd have users screaming at us. We'd probably have lawsuits over it. But going v4-only, not a peep.


>how can you fit >32 bits of data (IPv6) into 32 bit data structures (IPv4: struct s_addr)?

By using NAT46 on the ISPs' routers.


>> how can you fit >32 bits of data (IPv6) into 32 bit data structures (IPv4: struct s_addr)?

> By using nat46 on ISP's routers.

I do not understand.

In Ye Olde Times a system would call gethostbyname(), and this would send out a DNS packet asking for the IP address. Of course, the DNS record returned would be an A record, which is where your first problem is: A records are fixed at 32 bits (RFC 1035 § 3.4.1). So your first task is to create a new record type and update every DNS server and client to support the longer addresses.

Of course the longer address has to fit into the data structures of the OS you called gethostbyname() from, but gethostbyname() is set to use particular data structures. So now you have to update all the client DNS code to support the new, longer addresses.

Further, gethostbyname() had no way to specify an address type (like if you wanted the IPv4 address(es) or the IPng one(s)):

* https://man.freebsd.org/cgi/man.cgi?query=gethostbyname&manp...

The returned data structures only allowed for one type of address, so you either got IPv4 or IPng. So if you wanted to be able to specify the record/address type to query, a new API had to be created.
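
For the record, the new API that eventually shipped is getaddrinfo() (RFC 3493): it takes the address family as a parameter and tags every result with its own family. Python exposes it directly; a minimal sketch:

    import socket

    # AF_UNSPEC asks for both A and AAAA results; each tuple carries
    # its own address family, so callers can handle either kind.
    for family, socktype, proto, _canon, sockaddr in socket.getaddrinfo(
            "example.com", 443, socket.AF_UNSPEC, socket.SOCK_STREAM):
        label = "IPv6" if family == socket.AF_INET6 else "IPv4"
        print(label, sockaddr)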

Luckily the next step, setting up a socket() and connect()ing is less of a burden, because that API already supported different protocols:

* https://man.freebsd.org/cgi/man.cgi?query=socket&manpath=Fre...

But if you have a host that supports IPng and its upstream router does not, how does a packet with a longer source address and a shorter destination address get through?

Do you send IPv4 packets with IPng packets embedded within them to translation relays, perhaps? How is this any different from IPv6 transition mechanisms like Teredo?


Your ISP's DNS converts the AAAA record into an A record carrying the IPv4 of its own dual-stack machine (the router being the only one so configured); that machine does CGNAT46 and visits the IPv6-only server.

If software has hardcoded IPv6 addresses, this is much harder to solve, but I don't think it's really an issue, because hardcoded IPv6 addresses are very rare.


Assuming you mean NAT46, your description is a bit off. The NAT box doesn't have a single IPv4 address; it needs a big pile of IPv4 addresses to dynamically allocate from whenever a new AAAA record comes through.

But that means it has to intercept all DNS requests and doesn't work with anything it didn't intercept, which seems far from ideal.


>it needs a big pile of ipv4 addresses to dynamically allocate whenever a new AAAA record comes through.

But those don't have to be globally routable.

>it needs a big pile of ipv4 addresses to dynamically allocate whenever a new AAAA record comes through.

This wasn't really a problem until DoH/DoT became a thing.

But even nowadays, who provides DoH and DoT? Google and Cloudflare, who are TLS-MITM'ing half of the internet anyway.

Moreover, most ISPs intercept DNS anyway, to display those "please enter your phone number to verify your identity" screens.

>far from ideal.

No more imperfect than NAT. Idealism is what is killing IPv6.


If I'm going to run single stack locally I'm going to use IPv6 and a stateless converter for IPv4.

Which means that servers have zero incentive to convert to IPv6.

All this CGNAT46 setup is not in the name of end users, because they will always adapt somehow; reels have a huge force of attraction.

It is to make sure that service providers can deploy IPv6-only _servers_ and not worry that 60 percent of their target audience will be unable to reach them.

The issue with the IPv6 internet is not the end users; they are paying, so the ISPs will always adapt to their needs, or the users themselves will buy new equipment in the worst case. It's the service providers who stop being profitable by going IPv6-only.


Your idea also gives servers no incentive to convert, and if they do convert they become less reliable for anyone on IPv4.

That process seems extremely error-prone on anything but the most basic protocols... And given how many people here, for example, don't run their ISP's DNS... It just seems like a massively complex and very resource-intensive process.

> It _was_ possible to create an address space extension that was IPv4 backwards compatible

How? People keep claiming this, but I've yet to see a coherent design (either back then or today). Do you think they broke backwards-compatibility _on purpose_? What's the motivation here?


By promoting cgnat46

> no address space overlap between IPv4 and IPv6, an inherent design flaw of the IPv6

This is not an inherent design flaw; it's the argument often brought up in IPv6 threads, commonly referred to as the "just add more octets, bro" argument. This comment[0] sums it up well, but I'll leave it here for convenience:

> Fact is you'd run into exactly the same problems as with IPv6. Sure, network-enabled software might be easier to rewrite to support 40-bit IPv4+, but any hardware-accelerated products (routers, switches, network cards, etc.) would still need replacement (just as with IPv6), and you'd still need everyone to be assigned unique IPv4+ addresses in order to communicate with each other (just as with IPv6).

[0]: https://news.ycombinator.com/item?id=37120422


> Having two internets suggests that it's totally different, in the sense that there is no address space overlap between IPv4 and IPv6

That's not at all what I'm suggesting, I'm saying "two internets" because there are two internets: IPv4 and IPv6. You need two internet stacks on every host to deal with this, hence two internets. If everything in the IPv6 protocol suite was literally identical to IPv4, but just with some extra bytes in every address, there would still be two internets, because they are not mutually compatible.

Saying "two internets" is not a judgement call on whether IPv6 changed too much or is too different. It's just the literal truth. There are two internets, because one can't communicate with the other.


Well, that's just not true:

    $ ip -4 addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever

    $ ping -6 ipv4.google.com
    PING ipv4.google.com(sof04s06-in-f14.1e100.net (64:ff9b::142.251.140.78)) 56 data bytes
    64 bytes from 64:ff9b::8efb:8c4e: icmp_seq=1 ttl=55 time=27.9 ms

This is a v6-only machine (no v4 address other than lo), talking to a v4-only hostname. If it were truly not possible to communicate, this wouldn't work.

> If everything in the IPv6 protocol suite was literally identical to IPv4, but just with some extra bytes in every address, there would still be two internets, because they are not mutually compatible.

This part is true though. v6 is backwards compatible in rather a lot of ways, but v4 isn't forwards compatible, and it's important to note that this comes entirely from the larger address size -- no amount of making v6 more identical to v4 than it already is would make v4 any more compatible with it, unless you undermined the entire point and made the addresses 32 bits.


Now try that on an ipv4-only host trying to talk to an IPv6-only address. You can’t, hence, 2 internets.

(Yes, I understand there is no other way for a larger address space to exist that doesn’t have this problem. We are in violent agreement here. But that doesn’t mean there aren’t 2 internets. Let’s call a spade a spade.)


> there is no address space overlap between IPv4 and IPv6

Yes there is: https://www.rfc-editor.org/rfc/rfc4291.html#section-2.5.5.2
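
Stdlibs even know about that block; e.g. in Python:

    import ipaddress

    # RFC 4291 section 2.5.5.2: the ::ffff:0:0/96 "IPv4-mapped" block.
    addr = ipaddress.IPv6Address("::ffff:192.0.2.1")
    print(addr.ipv4_mapped)  # 192.0.2.1

(That said, mapped addresses mostly live at the socket-API boundary; they aren't routed on the wire between the two stacks.)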


It's not that different, it's separate.

>It _was_ possible to create an address space extension that was IPv4 backwards compatible, this option was just not chosen, and now we're still dealing with the dual stack mess.

This was never possible.


CGNAT46 is totally possible.

We don't live in the world in which IPv6 was a good design. Please avoid acting like IPv6 makes for a good design, because it doesn't.

See apenwarr's by now nearly a decade old blog post "The world in which IPv6 was a good design": https://apenwarr.ca/log/20170810, previous discussions of it here: https://hn.algolia.com/?query=The%20world%20in%20which%20IPv..., as well as the follow up blog post here: https://apenwarr.ca/log/20200708, previous discussions here: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

And the issues with IP (and by extension TCP; ignoring the fundamental results from the Delta-T research at Lawrence Livermore keeps biting us all in the ass), whether IPv4 or IPv6, go even deeper, far deeper, than what that blog post already tells us. So here, have this polemic for dessert (flawed in some minor aspects, which makes CCIEs bury their heads in the sand of denial about its deeper point): https://web.archive.org/web/20210415054027if_/http://rina.ts...


> Please avoid acting like IPv6 makes for a good design, because it doesn't.

Where did I give that impression? I tried my hardest in that post to not make a judgement call one way or the other as to whether it was a good design, only that dual stack fucking sucks.

My followup post in fact, totally agrees with you? https://news.ycombinator.com/item?id=43070286


It's almost exactly just IPv4 with longer addresses. If IPv6 isn't a good design then IPv4 is even worse because of the address shortage.

>It's almost exactly just IPv4 with longer addresses

No it's not. SLAAC and NA make it a totally different beast.

If IPv6 only had DHCPv6-PD, it would have been "just like IPv4 with longer addresses".


> No it's not. Slaac and NA make it a totally different beast.

How many ways to be assigned an IPv6 address are there anyway? Two or three too many?

Why should the ISP know what devices I have behind their router?

Considering the amount of enterprise-ish thought that went into IPv6, they thought precious little about privacy, for example.


Well, they also junked useless, unscalable things like broadcast and ARP.

Meanwhile adding science fiction things like mobile IP.

Yeah, that one's a bit silly.

That's not what this article says. I dug through it, and the main point seems to be 'It would have been beautiful. Except for one problem: it never happened.'

There is nothing really wrong with the design of IPv6 relative to IPv4.


The article keeps snapping back to the top as I scroll. (iOS Safari, latest version.)

Why does anyone ever implement JavaScript that tries to mess with your scroll position? Like, ever? Can we stop this madness?


macOS’s terminal app does #1, but you have to option-click. It’s kind of a hack too: it just issues a bunch of right/left arrow key codes to the shell to move your cursor to the designated spot.

Your second point is definitely my #1 issue, at least on Linux and Windows (macOS at least uses Cmd-C everywhere, and it works). I have to constantly remember to use Ctrl-Shift-C to copy in the terminal but Ctrl-C to copy everywhere else, and I constantly forget and use Ctrl-Shift-C in a not-terminal program, and do something stupid like open dev tools in Chrome, ugh. This is, unfortunately, not fixable. Not without rewriting every app on my system to use a different copy/paste combo.


Agreed. The macOS terminal has this distinction between Control, which is purely a terminal modifier, and Command, which is purely a Mac modifier. Hence, when I work in a terminal on macOS I do Cmd+C / Cmd+V, and this is consistent with all the other apps, while on Linux you have to think of Ctrl+C as either a copy or a SIGINT, depending on which app you are using.

This was such a good experience for me that I reconfigured my Linux terminal to use Win+C, Win+V and Win+T (for new tab). However, this is inconsistent with Linux window management. Since then I have been actively looking for a way to change the Linux window manager to have default global hotkeys in the macOS style. Anybody know how to do that? (https://news.ycombinator.com/item?id=41439601 suggested xmodmap, but that is a key-mapping approach which would break my Ctrl+C in the terminal.)


While writing this I found https://github.com/rbreaves/kinto - I need to give it a try, as this (hopefully) is not a simple overlay for xmodmap...

Wait, what do you mean not fixable when you can just change the keybind in the terminal app to use ctrl-c to copy?

Ctrl-C is SIGINT; I don’t want to change its definition just to make room for copy behavior.

What I want is to make a different shortcut trigger copy/paste, everywhere, and that’s not fixable. Every app independently decides to make ctrl-c implement copy, so changing that is basically not possible.


You don't need to change the definition of sigint, just its shortcut!

Though it's also possible to fix it everywhere - use a keyboard rebinding tool to make Ctrl-Shift-C send the Ctrl-C key combo instead.


> You don't need to change the definition of sigint, just its shortcut!

That’s what I mean? Changing sigint’s shortcut isn’t what I want to do.

> Ctrl-Shift-C send Ctrl-C key combo instead

Then Ctrl-Shift-C will send SIGINT too, and I’m back to where I started (SIGINT and copy having the same shortcut).


> That’s what I mean?

I'm sorry, I thought there might've been some confusion re. the actual SIGINT definition, based on some deeper history and ASCII codes (which didn't seem to be the case from a cursory search).

> ctrl-shift-C will send SIGINT

You filter against your terminal app, where Ctrl+Shift+C will continue to send Ctrl+Shift+C.


> rendering React Native directly on canvas or webgl

I just threw up in my mouth a little. I can’t wait to:

- not be able to double-click to highlight text a word at a time, because the developers of the “super smooth UX” didn’t know that’s what typical text does.

- or triple-click to highlight a paragraph at a time

- or have the standard menu available when I highlight text (which I often use to look up words, etc)

- or have text editing support any of my OS’s key bindings, like ctrl-left to move the caret one word, ctrl-shift left to highlight in the process, etc etc

- or any one of the hundreds upon hundreds of common control behaviors, accessibility behaviors, system-wide settings on text behaviors, etc etc etc be respected

Of course if they’re anything like the Flutter folks, they’ll look at every one of these things and say “well, I guess we gotta implement all that” rather than understanding the basic fact that common UI elements offered by the OS should actually be reused, not reimplemented poorly.

I really worry about what software will look like 20 years from now. At this rate we’re just going to forget every thing we learned about how it should behave.


Don't worry, they'll figure out a way to compile Skia to webassembly and re-link it to the DOM through JS.

Maybe then the circle(jerk) will be complete.

Ugh.


Funny, sounds like the Simpsons gag from the same time period: “what’s wrong with this country? Can’t a man walk down the street without being offered a job?”

https://youtube.com/watch?v=yDbvVFffWV4


Interesting. I was SO into the Simpsons at one time, but somehow I'd never seen that episode (as best as I can remember anyway). Now I feel the urge to go back and rewatch every episode of the Simpsons from the beginning. It would be fun, but man, what a time sink. I started the same thing with South Park a while back and stalled out somewhere around Season 5. I'd like to get back to it, but time... time is always against us.

That episode is by far my #1 favorite. Season 8 Episode 2, “You Only Move Twice”, during the period considered by most to be the peak of the Simpsons show quality, and IMO the best episode of the season.

Cypress Creek was intended to be a reference to Silicon Valley and the tech companies there at the time, and it’s got some of the best comedy in the season (Hank Scorpio is the best one-off character ever in the show, IMO).


The Hank Scorpio episode is indeed one of the great classics!

Their point is that the files shouldn’t be there in their home directory in the first place. Your approach would still leave symlinks/junctions in their homedir. Their proposed driver-based approach would allow apps to think they’re writing to your homedir when they’re actually writing under appdata, leaving the homedir clean.

> We are destroying software by no longer caring about backward APIs compatibility.

My take: SemVer is the worst thing to happen to software engineering, ever.

It was designed as a way to inform dependents that you have a breaking change. But all it has done is enable developers to make these breaking changes in the first place, under the protective umbrella of “I’ll just bump the major version.”

In a better universe, SemVer wouldn’t exist, and instead people would just understand that breaking changes must never happen, unless the breakage is obviously warranted and it’s clear that all downstreams are okay with the change (i.e., nobody’s using the broken path any more).

Instead we have a world where SemVer gives people a blank check to change their mind about what API they want, regularly and often, and to be comfortable that they won’t break anyone, because SemVer will stop people from updating.

But you can’t just not update your dependencies. It’s not like API authors are maintaining N different versions and doing bug fixes going all the way back to 1.0. No, they just bump majors all the time and refactor all over the place, never even thinking about maintaining old versions. So if you don’t do a breaking update, you’re just delaying the inevitable, because all the fixes you may need are only going to be in the latest version. So any old major versions you’re on are by definition technical debt.

So as a consumer, you have to regularly do breaking upgrades to your dependencies and refactor your code to work with whatever whim your dependency is chasing this week. That callback function that used to work now requires a full interface just because, half the functions were renamed, and things you used to be able to do are replaced with things that only do half of what you need. This happens all the god damned time, and not just in languages like JavaScript and Python. I see it constantly in Rust as well. (Hello Axum. You deserve naming and shaming here.)

In a better universe, you’d have to think very long and very carefully about any API you offer. Anything you may change your mind on later, you better minimize. Make your surface area as small as possible. Keep opinions to a minimum. Be as flexible as you can. Don’t paint yourself into a corner. And if you really, really need to do a big refactor, you can’t just bump major versions: you have to start a new project, pick a new name (!), and find someone to maintain the old one. This is how software used to work, and I would love so much to get back to it.


>But all it has done is enable developers to make these breaking changes in the first place, under the protective umbrella of “I’ll just bump the major version.”

Which is just fine when it is a non-funded free software project. No one owes you anything in that case, let alone backwards compatibility.


The problem is the normalization of breaking changes that has happened as a result. Sure, you don’t owe anybody backwards compatibility. You don’t owe anybody anything. But then whole ecosystems crop up, built out of constituent parts that each owe no one backwards compatibility, and the result is that writing software in these ecosystems is a toxic wasteland.

It’s not an automatic outcome of free software either. The Linux kernel is famous for “we don’t break user space, ever”, and some of Linus’s most heated rants have come from this topic. All of GNU is made of software that doesn’t break backwards compatibility. libc, all the core utilities, etc, all have maintained deprecated features basically forever. It’s all free software.


Agree with the toxic ecosystem wasteland, but I'm not sure SemVer is to blame. Linux has been good, but most projects were pretty wild before SemVer came to be. At least with SemVer you stand a chance of knowing what you have.

The problem is more deep-rooted, with both "move fast and break everything" and non- or under-funded projects. Everyone is depending on each other's hobby projects. The JS/npm culture is especially bad.

Yes, SemVer makes it easy, but versioning has to be dead easy.


This attitude doesn't fix anything. Refusing to ever break an interface is how you get stuck with trash like C++'s std::regex, which is quite literally slower than shelling out to PHP - and it's not like we can just go and make C+++ to fix that.

Forbidding breaking changes isn't going to magically make people produce the perfect API out of the gate. It just means that fixes either don't get implemented at all, or get implemented as bolt-on warts that sow confusion and add complexity.


Of course, mistakes happen. But the degree to which they happen reflects bad engineering in the first place, and this sort of bad engineering is the very thing that SemVer encourages. Move fast and break things, if you change your mind later you can just bump the major version. This attitude is killing software. I’d rather have a broken std::regex and a working std::regex2 than have a world where C++ code just stops compiling every time I update my compiler.

I think likewise. SemVer was in theory a good idea (at least in principle; I never liked the exact form, from the start), and the original idea was that the version number was a form of documentation. But what it became is very different: a justification for breaking backward compatibility "because I bumped the version".

LWN has a great impartial summary of what is occurring, including background on why the DMA subsystem is an important interface point for rust drivers: https://lwn.net/SubscriberLink/1006805/be4cb766fd906623/

(This is a shared link to a subscriber-only LWN article, please consider a paid subscription to LWN if you liked it!)


This isn’t impartial. Jonathan Corbet is naming and shaming, on behalf of one side.

> But Christoph Hellwig … turned this submission away with a message reading, in its entirety: "No rust code in kernel/dma, please" (despite the fact that the patch did not put any code in that directory)

> Already overworked kernel maintainers will have to find time to learn Rust well enough to manage it within their subsystems.

Frankly, this article feels like it’s steps away from social media brigading.


Journalists having an opinion in a news article, or even printing an opinion in an op-ed page, is not the same as—or even close to—social media brigading. It’s also a practice as old as the press itself. Brigading is about harassment and intimidation, not about possessing or publishing an opinion.

I came away with the opposite interpretation of yours.

The first quote of yours edited out the part where it says he does a lot of work in the DMA subsystem. It’s saying “someone who does a lot of work in the kernel turned the patch away”, which is absolutely true.

The second quote is understanding of the maintainers’ side, saying that they’re already overworked and now they’ll have to find time to learn rust on top of that. Not so much “I think these people ought to learn Rust”, but “accepting these patches means they will now have to learn rust”. This seems true to me? If anything it’s overly partial to the maintainer’s side, admitting that rust patches put more work on the old guard maintainers who are already overworked. It’s interesting that you feel this is too partial to the Rust side.


> Shouldn't allocation strategy be decided by the caller, not type definiton?

Yes.

Swift made this mistake too. Classes are always heap allocated (and passed by reference and refcounted) and structs are always stack allocated (and passed by value).

It makes for a super awkward time trying to abstractly define a data model: you need to predict how people will be using your types, and how they’re used affects whether you should use struct or class.

The “swifty” way to do it is to just eschew class altogether and just always use struct, but with the enormous caveat that only classes can interop with ObjC. So you end up in an awkward state where if you need to send anything to an ObjC API, it has to be a class, and once you have a lot of classes, it starts to be more expensive to hold pointers to them inside structs, since it means you need to do a whole lot of incrementing/decrementing of refcounts. Because if you have a struct with 10 members that are all classes, you need to incur 10 refcount bumps to pass it to a function. Which means you may as well make that holding type a class, so that you only incur one refcount bump. But that just makes the problem worse, and you end up with code bases with all classes, even though that’s not the “swifty” way of doing things.

Rust did it right: there’s only one way to define a piece of data, and if you want it on the heap, you put it in a Box. If you want to share it, you put it in an Arc.


C# also has exactly the same concept. "class"-types are put on the heap and get assigned by reference, while "struct"-types are assigned by copy.

It always seemed weird to me. You need to know which data type you are working with when passing things around, and cannot adjust according to the application. For certain types from the standard library it makes sense; datetimes, for example, you probably always want copied. But when you work in a big team where everybody has their own style and special optimization corner case, it decays quickly.

