
IPv6 isn't hard, and as the article alludes to, it is different, but it's not even that different.

What is hard, is dual stack networking. Dual stack networking absolutely sucks, it's twice the stuff you have to deal with, and the edge cases are exponentially more complex because if your setup isn't identical in both stacks, you're going to hit weird issues (like the article suggests) with one stack behaving one way and the other behaving another way.

None of this is the fault of IPv6, it's a natural consequence of the strategy of having two internets during the transition period. A transition period which has now been going on longer than the internet existed when it started.

Maybe 100 years from now we'll finally be able to shut off the last IPv4 system and the "dream" of IPv6 will become a reality, but until then, life is strictly more difficult for every network administrator because of dual stack.

(Note: no, we could not have done it any other way. Any upgrade strategy to IPv4 that extended the address size would have necessitated having two internets. It just sucks that it's been this long and we're still not even at 50% IPv6 usage.)


For what it's worth I have many years of technical experience and a pretty decent knowledge of IPv4 networking, and I found it quite hard - at least when first learning.

The addressing is intimidating, getting your head away from NAT, thinking about being globally routable, the lack of (or variances in) DHCP, ND, "static" addressing via suffixes, etc.

None of this is technically "hard", but it's a significant barrier to entry because it's quite different to what most normal technical users are used to.

Dual stack may create some new problems, but it fixes a lot of others. All of the major cloud providers have awful IPv6-only support, and many prominent web resources that usually invisibly "just work" have little to no IPv6 support.


I think you thought it was hard precisely because you had a decent knowledge of IPv4.

I suspect that if the situation were reversed and we always had IPv6, and someone proposed a new IP suite where you had to use NAT for everything, and all hosts needed to have their addresses hand-picked (or served via a DHCP server that used an ugly hack to communicate with hosts that don't have an IP yet), you'd find that to be the "hard" one.


The one objectively difficult thing about IPv6 is remembering an address. Four base-10 numbers between 0 and 255 fit in working memory, while eight groups of four hex digits don't. Even a shortened address can be difficult to recall.

Your working memory must be better than mine.

I can remember two numbers from 0 to 255, but four? Not in a million years.

Isn't there an idea that our working memory holds seven pieces of information? Which would be two of those numbers and change, if you consider each digit as a piece of information.


We live in an era of ubiquitous DNS and easy hosts-file management; post-it notes still exist, and everyone has a note-taking app or three on their pocket computer too. If, at some point, needing to remember an IPv6 address is your biggest concern with IPv6, that's a decent place to live, and there are plenty of workarounds all around you.

Yeah, it makes me think of finding an automatic car difficult because you're used to manual. I have noticed people are often reluctant to give up on something they had to learn, though.

It doesn't help that the use of an IPv4 is a status symbol that indicates that you are willing to spend money on your infrastructure (and are thus not a spammer). If you attempt to run an email server on IPv6, you will see what I mean.

That's not what I am observing while running a number of email servers which have both IPv4 and IPv6. Spam arrives only over IPv4, and about 80% of legitimate email arrives over IPv6.

Yes, but those facts don't reach the email filtering business. There, IPv4 addresses have some reputation, good or bad; IPv6 "just doesn't exist" and has a universally bad reputation. Thus your IPv6 emails will be filtered and dropped more often, even though you are correct that IPv6 is actually an indicator of non-spam. I've got quite a few transport rules to work around those kinds of braindead filters...

> IPv6 "just doesn't exist" and has a universally bad reputation.

I've been sending my mail via IPv6 for ages without issues… did you mean reputation or reputation?


> I've been sending my mail via IPv6 for ages without issues…

In most cases it does work and there are no problems. But medium-sized businesses in particular, who run their own email through some crappy mail-filtering middlebox, will file anything that comes from or through an IPv6 endpoint as spam.

> did you mean reputation or reputation?

I don't understand your question. I meant IP address reputation, which is what many mail filtering systems use to estimate spam probability or outright reject email. See for example https://www.spamhaus.org/ip-reputation/


That does not match my experience either. In fact, I don't see why any mail server would bother getting and configuring an IPv6 address and listening on it, just to drop all received messages.

Just don't listen on v6: that would achieve your goal, would prevent reputable dual-stack senders (like Google!) from sending messages over v6 that you'll drop, and would simplify your configuration.


> I don't understand your question. I meant IP address reputation, […]

There's reputation of the IP address(es) among spam filters, and there's reputation among human operators configuring mail servers. I was trying to riff on ambiguity with ambiguity, that wasn't particularly helpful, sorry.


I found an open email relay on IPv6 once, because nobody scans IPv6 and therefore nobody ever sent spam through it. (Apparently this was caused by a reverse proxy misconfiguration that made the server think IPv6 connections were from localhost.)

Also, the address space of IPv6 is huge; it just takes so much time to scan everything. It's not security, but it's a deterrent.

And yet scanning IPv6 is exactly what some people are doing.

IPv6 nodes aren't individual random addresses in a 128-bit address space. They are going to be grouped in subnets, so it makes sense to explore /64 ranges where you know there's already at least 1 address active. There's a pretty decent chance at least some addresses are going to be sequential - either due to manual configuration or DHCPv6 - so you can make a decent start by scanning those. For non-client devices, SLAAC usually generates a fixed address determined by the NIC's MAC address, which in turn has a large fixed component identifying the NIC's vendor. This leaves you with a 24-bit space to scan in order to find other devices using NICs made by that vendor - not exactly an unfair assumption in larger deployments. Much faster scanning can of course be done if you can use something like DNS records as source for potential targets, and it's game over once an attacker has compromised the first device and can do link-local discovery.
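For what it's worth, the SLAAC EUI-64 derivation mentioned above is mechanical enough to sketch in a few lines (a rough illustration with a made-up MAC address, not anyone's actual scanning tool):

```python
# Sketch: derive the SLAAC EUI-64 interface identifier from a MAC address.
# The first three octets (the vendor OUI) are fixed per NIC vendor, which
# is why the per-vendor search space within a /64 is only 24 bits.
def eui64_interface_id(mac: str) -> str:
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    return ":".join(f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2))

# Hypothetical MAC: the OUI (00:1a:2b) pins down 40 of the 64 interface bits.
print(eui64_interface_id("00:1a:2b:3c:4d:5e"))    # → 21a:2bff:fe3c:4d5e
```

So if you know the vendor of the NICs in a deployment, the "random-looking" host part of a SLAAC address collapses to the last 24 MAC bits.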

It's not going to be extremely fast or efficient, but IPv6 scanning isn't exactly impossible either. It's already happening in practice[0], and it's only going to get worse.

[0]: https://www.akamai.com/blog/security-research/vulnerability-...


Well, and the easiest way is some sort of web bug/tracker/log or traffic analysis.

See what address gets used, and blam.


In many situations it's a necessity. IPv4-only devices can't connect to IPv6-only servers, so if you have IPv4 clients, your server has to support IPv4.

Put a front in place that binds an IPv4 address, with a load balancer or a WireGuard tunnel.

Yes. In my day job I work mainly on containers running entirely in data centers we own, they are fully ipv6, it’s very nice to work with.

Then there is this one customer. They are colocated in other ISP-owned datacenters, and some of their servers may be fully IPv6, some may be IPv4-only, and some may be a mixture of both. Supporting them is a never-ending nightmare.


To add:

The really sad thing about IPv6 is that it's a relic of a bygone era. The internet was envisioned as a thing where every device on every network could, in principle, initiate a connection to any other device on any other network. But then we started running out of addresses, ISP's wouldn't give you more than one for a household, so we started doing NAT, and that kind of sealed the internet's fate.

Nowadays, we wouldn't want a world where everything was connectable, even if we had enough addresses. Everyone's network in essence has a firewall. If you run IPv4 and using NAT, you're going to be dropping unsolicited incoming packets no matter what (let alone that you wouldn't know where to route them even if you wanted to let them through) and with IPv6 you'd be insane to allow them.

Devices have "grown up" in a world where we expect the router we're connecting to to shield us from incoming unsolicited traffic. I certainly don't have the firewall enabled on my linux desktop. Windows I _think_ has it enabled by default, but I often turn it off because I want LAN connections to work. MacOS is probably similar. Suffice to say, if the IPv6 dream happened overnight and everyone's devices were instantly connectable, all hell would break loose. We need our routers to disallow traffic. (Edit: This wasn't all that clear, so let me restate: In both the IPv4 and IPv6 world, you'd be insane to allow incoming unsolicited traffic, which is why basically everyone blocks it in IPv6 as well as IPv4. I'm not trying to say IPv6 can't do this... quite the contrary: IPv6 routers absolutely do, and should, block unsolicited incoming traffic. But my point is that this is what prevents the original vision of IPv6 from becoming a reality: we can't design software around the idea of direct peer-to-peer communication without stuff like UPnP. Oh, and UPnP works with NAT and IPv4 anyway, so there's the rub.)

So, even in a pure-IPv6 world, that instantly prevents any startup with an idea to allow true peer-to-peer communication using end-to-end routability. What are you going to do, train your users to log into their router and enable traffic? Approximately 0% of your customers know how to do that, even in a counterfactual universe where IPv6 happened and we all have routable IP's on our devices. Maybe in such a universe, people would be trained to use local firewalls on their device, and say "ok" to popups asking if you want to let software through. But I'd wager that a lot of people would prefer the simplicity of just having their gateway drop the traffic for them.

No, the "all devices are routable" idea came from a naive world where there wasn't a financial incentive for malicious behavior at every turn. Where there aren't millions of hackers waiting for an open port to an exploitable service they can use to encrypt all your data and ransom it for bitcoin. The internet is a dark forest now. We don't _want_ end-to-end connectivity.


Every consumer router I’ve ever seen drops unsolicited IPv6 packets by default. But that doesn’t mean that P2P routing is suddenly impossible.

IPv6 makes P2P routing substantially easier, even in a world of default firewalls that drop unsolicited packets. You can still apply standard NAT busting techniques like STUN with IPv6 devices behind firewalls, and you get much better results because IPv6 removes the need to track and predict the port mapping a standard NAT does. Two P2P systems can both send unsolicited packets to each others IPv6 addresses, with a specific port, and know that port number isn’t going to be remapped, so their outbound packets are guaranteed to have the right address and port data in them to get their respective firewalls to treat inbound traffic on those addresses and ports as solicited.

This is particularly useful when dealing with CGNATs, where your residential device ends up behind two NAT layers, both of them messing with your outbound traffic, which creates an absolute nightmare when trying to NAT bust. IPv6 means that you're no longer dealing with three or more stateful NATs/firewalls between two peers, and instead have to deal with at most two (in the general case) stateful firewalls, whose behaviour is inherently simpler and more predictable than a NAT's ever is.
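A minimal sketch of that simultaneous-send idea (addresses and ports are placeholders here, and real firewalls vary in how long they keep the "solicited" state alive):

```python
# Both peers bind a known port and send an unsolicited UDP packet to the
# other's IPv6 address. Each outbound packet makes the local stateful
# firewall treat replies from that peer as solicited, so once both sides
# have sent, traffic flows in both directions. No NAT port-mapping
# prediction is needed because nothing rewrites the port.
import socket

def punch(local_port: int, peer_addr: str, peer_port: int) -> socket.socket:
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.bind(("::", local_port))
    s.sendto(b"punch", (peer_addr, peer_port))  # creates our own firewall state
    return s
```

In practice each side learns the other's address and port out of band (a rendezvous server, as with STUN) and then both fire at roughly the same time.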


The argument that NAT busting (firewall busting?) becomes easier with IPv6 is certainly true, but "easier" isn't necessarily enough to warrant an entirely new protocol suite. If that's the only benefit of IPv6, it doesn't sound worth it to me. UPnP/STUN is still possible with NAT/CGNAT, and you're going to need something like that even if there were no NAT, so to me it sounds like a bit of a wash, at least to the end-user.

(Of course, the other huge benefit of IPv6 is "more addresses", so we need it just for that. But my point is that "global routability" isn't really the dream people think it is. In practice, the only differences between modern GUA-but-deny-by-default IPv6 setups and NAT'ed IPv4 setups are the simplicity of the former for the network administrators.)


> UPnP/STUN is still possible with NAT/CGNAT

You should tell that to my ISP, they’ve managed to deploy a CGNAT that’s proven to be completely STUN proof. The only way I can achieve any kind of P2P comms is using IPv6. IPv4 is useless for anything except strictly outbound connections.

> and you're going to need something like that even if there were no NAT, so to me it sounds like a bit of a wash, at least to the end-user.

Not in my case. As above, I simply can't bust my ISP's CGNAT, so IPv6 is invaluable to me. It makes a huge difference to me, the end-user.


> If that's the only benefit of IPv6, it doesn't sound worth it to me.

It's not the only benefit, as anyone who's tried to build a large network or merge two networks with overlapping address space will tell you.


IPv6 doesn't actually save you here, because it presumes a cultural norm that people will use globally unique addresses for all their networks. While that's now possible, I imagine many network engineers will use IPv6's private address space to avoid ever having to re-address their network on ISP changes.

> I imagine many network engineers will use IPv6's private address space to avoid ever having to re-address their network on ISP changes.

Except with 10/8 (or 172.16/12) everyone is using the same address space. How many networks have a 10.0.0.0/24? What are the odds of a conflict there?

But if you have a ULA fdxx:xxxx:xxxx::/48 address space, what are the odds that all those x bits will be the same for any two sites? That's 40 bits of 'entropy'. Much, much lower (notwithstanding folks doing DEADBEEF, BADCAFE, etc.).
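The odds can be put to numbers with the birthday approximation (a back-of-the-envelope sketch; the 2^40 figure is the 40 pseudo-random bits RFC 4193 puts in a ULA prefix):

```python
# ULA prefixes (fdxx:xxxx:xxxx::/48) carry 40 pseudo-random bits, so two
# specific sites share a prefix with probability 2**-40. Among n sites,
# the chance of any collision follows the birthday approximation.
SPACE = 2 ** 40

def any_collision_prob(n_sites: int) -> float:
    pairs = n_sites * (n_sites - 1) / 2
    return 1 - (1 - 1 / SPACE) ** pairs       # ~= pairs / SPACE when tiny

print(f"{1 / SPACE:.2e}")                     # two specific sites: ~9.09e-13
print(f"{any_collision_prob(1000):.2e}")      # any of 1000 sites: ~4.54e-07
```

Compare that with the near certainty of two random networks both using 10.0.0.0/24.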


Not only that. Having private IPv6 addresses in the LAN is the easiest way to work with two ISPs and have fail-over. The other way is RFC8475, but I don't know of any implementation except my custom script on OpenWrt: https://forum.openwrt.org/t/ipv6-wan-fail-over-without-ipv6-...

I imagine the threshold of org size where they stop taking their prefixes from their ISP and just go get their own from the RIR is .... Not big.

Sure, some network engineers will try and design v6 networks as though they're v4, but everyone else will just go and get a squillion addresses from the RIR that's unique to their org and then just use/announce that.


What about private home networks (for smart home devices, for example)?

Home networks don't usually have the problem of having to suddenly merge together with another, similarly-addressed network.

Nothing stops you getting a subnet allocated from your local RIR and announcing that on the internet, if you care about that sort of thing for your home network though. You just need a decent ISP.


> Nothing stops you […] You just need a decent ISP.

I’ll just leave that there.

No, I can’t just choose to have a decent ISP. I live in the real world where there’s only one choice where I live, same as a rather large majority of people in the world.


'You should switch your working local networking stack so as to help the scenario where the company is bought and you may well not be working there anymore and it's someone else's problem. Or the scenario where the company buys a different company and very likely needs to investigate and remodel the other company's network anyway.'

> UPnP/STUN is still possible with NAT/CGNAT

Sometimes. I've experienced a few networks where even with STUN I'm still not able to get a workable session in IPv4.


Fixing that network you had an issue with to have working NAT-PMP or whatever is going to be infinitely easier than IPv6ing the entire world.

I can't just ask my cell provider to change their CGNAT setup. There's nothing I can do on my end.

1) To the extent to which you can't ask them to change their CGNAT setup, you also can't "just" ask them to upgrade to IPv6.

2) In fact you at least can ask them to change their CGNAT setup, as they at least could, if they chose, unilaterally fix their CGNAT setup, while--even if you also felt it possible to ask them to upgrade to IPv6--upgrading to IPv6 as the solution requires the rest of the world to upgrade as well.

The real question is then: which solution to your problem is easier and cheaper to obtain? And the answer is clearly not "upgrade the world to IPv6". "I couldn't convince my ISP to fix their CGNAT setup, so I'd rather convince them--and everyone else--to upgrade to IPv6" makes no sense as a coping mechanism.


The model of having every router be a firewall has created a fair bit of false security, with vulnerable systems being left behind the firewall, simply waiting until a user inside the network does something unwise. It seems a common story that whole hospitals or municipalities get shut down because one host got infected and then all the unpatched systems in the same network got hit.

There is also a bit of conflated use of terms. A firewall on Windows is just a fancy name for permission management. A program wants to open a port, and the user gets a prompt asking whether to allow it (assuming the user has such privileges). It is indistinguishable from the similar permission systems found on phones that ask if an app is allowed to access contacts. The only distinguishing "feature" is that a program might not be aware that the permission was denied, thus opening a port that is not actually open. Windows programmers may know more of the specifics of this.


So your idea is to open up access so all the IT orgs that can't keep their systems secure ...

... which is ... virtually all of them ...

get p0wned on a massive scale? You know, now with AI powered assaults for extra special vulnerability levels...

And then a magic wand happens, and the IT orgs that couldn't even install patches will be able to back-discover all the compromised hard drive firmwares, rootkits, and nth-level security holes after the fact?

I get a NAT wall isn't perfect security. But pretending it is NO security is disingenuous.


Every company I've spoken to in the last few years has been moving toward a zero-trust network, where you assume that every single device is on a hostile network. There's nothing inherently safe and secure about the corp network, so treat it as if all those cubicles were connecting in from a Starbucks.

IMO this is the only way forward.


Not the parent, but obviously no. Maybe if in the '90s it had been the standard to not assume the network was secure, we would be in a better spot now (maybe a spot where company intranets use authentication instead of VPNs).

There is a solution to this: RFC6887.

You have a firewall at the edge of the network. It blocks incoming connections by default, but supports Port Control Protocol. The cacophony of unpatched legacy IoT devices stays firewalled, because your ten-year-old network printer was never expected to have global connectivity and doesn't request it. End-to-end apps that actually want it do make the requests, and then the firewall opens the port for them without unsophisticated users having to manually configure anything.

The protocol is an evolution of NAT-PMP (RFC6886) for creating IPv4 NAT mappings, but RFC6887 supports IPv6 and "mappings" that just open incoming ports without NAT.
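As a sketch of how lightweight this is on the wire, here's roughly what a PCP MAP request looks like per my reading of RFC 6887 (treat the field layout as an assumption to verify against the RFC before relying on it; the addresses are documentation placeholders):

```python
# Rough sketch of a PCP (RFC 6887) MAP request: the client asks the edge
# firewall to accept inbound traffic on a port. On IPv6 there's no NAT, so
# the "mapping" is just an allow rule. Sent over UDP to the gateway, port 5351.
import os
import socket
import struct

def pcp_map_request(client_ip: str, internal_port: int,
                    protocol: int = 6, lifetime: int = 3600) -> bytes:
    header = struct.pack(
        "!BBHI16s",
        2,                                         # PCP version 2
        1,                                         # R=0 (request), opcode 1 = MAP
        0,                                         # reserved
        lifetime,                                  # requested lifetime, seconds
        socket.inet_pton(socket.AF_INET6, client_ip),
    )
    map_payload = struct.pack(
        "!12sB3sHH16s",
        os.urandom(12),                            # 96-bit mapping nonce
        protocol,                                  # 6 = TCP
        b"\x00" * 3,                               # reserved
        internal_port,                             # port we listen on
        internal_port,                             # suggested external port
        socket.inet_pton(socket.AF_INET6, "::"),   # no preferred external address
    )
    return header + map_payload                    # 24 + 36 = 60 bytes

req = pcp_map_request("2001:db8::1", 8080)
assert len(req) == 60
```

The gateway answers with the lifetime it actually granted, so well-behaved apps renew before it expires and the rule disappears on its own otherwise.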


Sure, but is it actually a feature you want, though? We're never going to run out of legacy unpatched devices, and today's shiny new devices are going to be tomorrow's tech debt. If you support that kind of hole punching, sooner or later you will end up with vulnerable devices exposing themselves to the internet.

There's no way in hell enterprise admins are going to give that kind of control to random devices, and home users aren't going to have the skills to cull vulnerable devices from their networks. So who's going to use it?

In my opinion the current kind of hole punching is a far better option: instead of creating a mapping allowing essentially unrestricted access from everyone, have the app use an out-of-band protocol to allow a single incoming connection from exactly one vetted source. You get all of the benefits of a direct peer-to-peer connection with none of the risks of exposing yourself to the internet. The interplay between firewalls, NAT, and UDP is well researched[0].

And that pretty much solves the problem, to be honest. These days you only really need incoming traffic to support things like peer-to-peer video calls. Hosting game servers on your local machine is a thing of the past, everything has moved to cloud services. What's left is a handful of nerds running servers for obscure hobby projects at home, but they are more than capable of setting up a static firewall rule.

[0]: https://tailscale.com/blog/how-nat-traversal-works


> We're never going to run out of legacy unpatched devices, and today's shiny new devices are going to be tomorrow's tech debt.

It's not about new or old. The devices that don't have any reason to be globally reachable never request it. New ones would do likewise.

Devices that are expected to connect to random peers on the internet are going to do that anyway. It's not any more secure for them to do it via hole punching rather than port mapping; causing it to be nominally outgoing doesn't change that it's a connection to a peer on the internet initiated by request of the remote device.

> There's no way in hell enterprise admins are going to give that kind of control to random devices, and home users aren't going to have the skills to cull vulnerable devices from their networks. So who's going to use it?

Enterprise admins can easily use it because they have control over the gateway and how it answers requests, so they can enable it for approved applications (in high security networks) or all but blocked applications (in normal networks), and log and investigate unexpected requests. They should strongly prefer this to the alternative where possibly-vulnerable applications make random outgoing HTTPS connections that can't easily be differentiated from one another.

Whether they will or not is a different question (there are a lot of cargo cult admins), but if they don't they can expect their security posture to get worse instead of better as the apps that make outgoing HTTPS connections to avoid rigid firewalls become the vulnerable legacy apps that need to be restricted.

Home users are already using NAT-PMP or UPnP and this has only advantages over those older solutions.

> instead of creating a mapping allowing essentially unrestricted access from everyone, have the app use an out-of-bounds protocol to allow a single incoming connection from exactly one vetted source. You get all of the benefits of a direct peer-to-peer connection with none of the risks of exposing yourself to the internet.

There are significant problems with this.

The first is that it's a privacy fail. The central server is at a minimum in a position to capture all the metadata that shows who everyone is communicating with. It's made much worse if the payload data being relayed isn't end-to-end encrypted as it ought to be.

But if it is E2EE, or the server is only facilitating NAT traversal, the server isn't really vetting anything. The attacker sends a connection request or encrypted payload to the target through the relay and the target is compromised. Still all the risks of exposing yourself to the internet, only now harder to spot because it looks like an outgoing connection to a random server.

Worse, the server becomes an additional attack vector. The server gets compromised, or the company goes out of business and the expired domain gets registered by an attacker, and then their vulnerable legacy products are presenting themselves for compromise by making outgoing connections directly to the attacker.

Doing it that way also requires you to have a central server, and therefore a funding source. This is an impediment for open source apps and community projects and encourages apps to become for-profit services instead. The central server then puts the entity maintaining it in control of the network effect and becomes a choke point for enshitification.

Meanwhile the NAT traversal methods are ugly hacks with significant trade offs. To keep a NAT mapping active requires keep-alive packets. For UDP the timeout on many gateways is as short as 30 seconds, which prevents radio sleep and eats battery life on mobile devices. Requiring a negotiated connection is a significant latency hit. For real peer to peer applications where nodes are communicating with large numbers of other nodes, keeping that many connections active can exceed the maximum number of NAT table entries on cheap routers, whereas an open port requires no state table entries.

> What's left is a handful of nerds running servers for obscure hobby projects at home, but they are more than capable of setting up a static firewall rule.

The point is to allow interesting projects to benefit more than a handful of nerds and allow them to become popular without expecting ordinary users to manually configure a firewall.


> It's not about new or old. The devices that don't have any reason to be globally reachable never request it. New ones would do likewise.

Doesn't this assume all devices are well behaved, trusted and correctly implemented? You don't think the crap Samsung Smart TV will consider itself to have a reason to be globally reachable?


Most of them actually wouldn't because they have no reason to do it and creating the mapping for no reason is work. Making the thing where you're not exposed to the internet the thing that lazy developers get by default is a win.

Some of them could do it for no reason, but they could also have the devices make outgoing connections to the company's servers and then blindly trust any data the servers send back, even if the servers are compromised or the domain expires and falls into the hands of someone else, or the company's own servers are simple relays that forward arbitrary internet traffic back to their devices.

If a device can make outgoing connections then it can emulate incoming connections. Blocking incoming connections to devices that explicitly request them is therefore not a security improvement unless you're also prohibiting them from making any outgoing connections, because the result is only that they do it in a less efficient way with more complexity and attack surface and less visibility to the user/administrator.


Interesting, I haven't read much about that. I've always assumed somebody must have written an RFC to automate how home firewalls should be configurable automatically by software that "wants to" allow incoming connections. Good to see that it exists.

But it doesn't seem to be in widespread use, does it? Like, would a random internet gateway from podunk ISP support this? I kinda doubt it, right? Pretty sure the default Comcast modem/router setup my mom uses doesn't support this.

But I guess my point was about the contrapositive universe where IPv6 was actually used everywhere, and in that universe I suppose RFC6887 might have been commonplace.


Every ISP router I've had with even basic IPv6 support has supported PMP. Every home router I've handled with IPv6 support has supported it. So IME it is in pretty widespread support. Maybe there's some devices out there that don't, but it's at least as wide as UPnP support which was pretty wide.

Things like this can be useful without 100% penetration. Most end-to-end apps support a direct connection as long as either of the endpoints have an open port. Endpoints without one can also fall back to relays, but since port mapping is frequently used in gaming, that has higher latency and then users start demanding gateways that support the thing that lowers lag.

Then it starts to make it into popular internet gateways. For example, the Comcast consumer gateways generally do support UPnP, which is a miserable protocol with an unnecessary amount of attack surface that should be transitioned to Port Control Protocol as soon as possible, but it demonstrates you can get them to implement things of this nature.


> What are you going to do, train your users to log into their router and enable traffic?

You do what has already been done for decades. You ship a client premises router that does deny by default inbound and allow all outbound and things behave pretty much the same as they do today with these exact same rules in IPv4.

Network firewalls still exist in a publicly routable network. If I want my game console to allow incoming traffic for game matchmaking, I can then do that. Or have systems that auto configure that. But then I don't have to have multiple devices fighting for a limited port range, each device has more IPs and ports than they could know what to do with.


Please re-read my comment, as it seems like you skimmed it and think I’m making a different point than I did.

To use your example:

> If I want my game console to allow incoming traffic for game matchmaking, I can then do that

My point is that because consumer firewalls will block traffic by default even in IPv6, the company that makes the game would not have designed it to require special configuration on your firewall. That is not something a typical gamer knows how to do.

Instead, companies use UPnP for matchmaking, and UPnP works in both NAT and GUA environments, so what exactly does GUA give you?


If you have two Xboxes or Nintendo Switches in a standard UPnP network you'll still have NAT issues. They'll fight over port assignments. If I've got two things wanting to listen on :5000 I'm boned; I've only got one :5000 to forward from. I could remap that port in NAT, but then the downstream device needs to know to tell other clients its actual public port.

You'll have even more problems if you're on CGNAT networks. You're not going to be able to get any of that traffic.

None of this is a problem if each device has its own IP address and its own range of ports to deal with. Every device can have its own :5000, it can know its public IP address without having to ask something outside, and with how big assignments usually are it can have dozens of things all listening on a public :5000 at the same time.


If you want to hypothesize a world where we’re all on IPv6 and thus game matchmaking software can take advantage of the unique addresses for every console, you should also admit that the same software could also just use a unique port for each console today, in IPv4+NAT.

In fact, in IPv6, privacy addresses mean that the address the gaming service observes from my device shouldn't be assumed to work for other peers, because my device may only be using that address for communication with the gaming service itself. Instead, authors of this software ought to understand that the console itself needs to tell the service "this is the address peers can use to communicate with me", and thus you may as well just include the port in that call, and then you don't need to assume port 5000 will work (because if I have two Xboxes, they could decide on a different port if the other console already has one in use).

It’s just that I’m disillusioned with the idea of GUAs for every device actually solving anything. It solves maybe 10% of the difficulty of writing good p2p software. All the other problems are still there: firewall config, dynamic IPs, mobile IPs, security best practices, shitty middleware boxes not doing the right thing, etc etc etc.


My biggest argument would be CGNAT in the end really doesn't care what UPnP you do at the edge of your network; you're not getting that traffic no matter what you do. IPv6 means there's no reason to have CGNAT at all; there's more than enough addresses to go around.

Does IPv6 solve it all? No. Does it solve some of it better than IPv4? Yep. Does it just completely eliminate the need for CGNAT? Yep. It is not a silver bullet to solve all problems, but it does solve some, and for that I'd much rather use it. Because I'd rather just be able to host whatever at home and not need to remember what port is what or rely on proxies looking at other info in the request.

> shitty middleware boxes not doing the right thing

You reduce the need for shitty middleware boxes. I don't need a reverse proxy. I don't need to have a STUN/TURN server. I don't need to randomize ports or worry about running out of ports.


> the same software could also just use a unique port for each console today, in IPv4+NAT.

Why solve a problem once properly in the network stack when you can add an ad-hoc workaround to each individual protocol instead?

> the console itself needs to tell the service “this is the address peers can use to communicate with me”, and thus you may as well just include the port in that call, and then you don’t need to assume port 5000 will work

And if you're behind CGNAT so your home router doesn't know what the address and port are, what then?

Not to mention the case of a multiplayer game between two people who live in the same house and a third person who doesn't. That's one that's trivial with IPv6 but difficult with every IPv4-based system I've seen.

> It solves like 10% of the difficulty of writing good p2p software. All the other problems are still there

It all helps. I mean, people mostly manage to play games with each other today, they just get disconnects or random lag every so often. Cutting down on that, even if it was only 10%, would make a lot of lives better.


For badly designed games maybe? There's no real reason why they need the same port forwarded, it could be any random port.

I'm dealing with the devices I've got and the software stacks they have. In the end they fight for port assignments on IPv4 and expect to just have massive port ranges assigned to them.

> Within the port range, enter the starting port and the ending port to forward. For the Nintendo Switch console, this is port 1024 through 65535.

https://en-americas-support.nintendo.com/app/answers/detail/...

Nintendo tells me to forward all high numbered ports to my console. Because obviously we're only going to have a single one in the house.


Fixing the broken software that somehow requires a specific port and getting NAT-PMP deployed is a hell of a lot easier than redoing all of the software and all of the networks to implement IPv6.

In an IPv6 world, your home router still firewalls just like with IPv4 (the NAT is just an extra, accidental firewall-ish thing), and you can still skip the firewall on your laptop.

Having globally allocated address space doesn’t actually imply openness of connectability.


I am the last person to be defending NAT, but the benefit of NAT is that it sets up a really good default: incoming connections are not routable by default, and it's very hard to accidentally change this. There are many good ways to have "not routable by default" with IPv6, but you have to do that at the software level, while NAT forces it at a protocol level.

> incoming connections are not routable by default, and it's very hard to accidentally change this

Not just difficult, but impossible, even in principle, because more than one device shares the same IP, so at most one host could be exposed. Not the same as with IPv6, where screwing up the defaults leaves your entire network vulnerable.


You can forward all ports to a device, pretty common feature.

Right but that only leaves that device vulnerable. The other devices are not.

In practice, that makes the entire network vulnerable.

Yeah, server-side NAT + use of a reverse proxy makes hosting many websites from one IPv4 a possibility, albeit a relatively difficult one. You are only really in trouble if two consumers want raw TCP/UDP on one port.

> incoming connections are not routable by default

Tons of firewalls ship with this as a default logic, it doesn't require NAT in the slightest.


Parent poster is making the (good and underrated) point that NAT makes this logic failsafe: turn an IPv6 firewall off and you’ve got all incoming connections allowed; turn IPv4 NAT off and you’ve got no connectivity at all.

So gate it behind a "here be dragons" option or hide it in the GUI entirely for the basic home version.

Turning off the firewall could just as easily be an unnoticed configuration error that causes it to die on startup.

Or make the "turned off" state block all traffic, just like closing a water valve, or a road gate. I never understood why network firewalls did not default to this.

How are you gonna get the IPv6 addresses of your targets? You can’t scan a /64, and with ephemeral "privacy" addresses used for outbound connections, an adversary won’t be able to guess the address anyway. At least that’s my understanding. So I guess my question is: is this a real threat?
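For scale, some back-of-envelope arithmetic (the probe rate is a made-up assumption):

```python
# Brute-forcing a single /64 at an assumed (generous) rate of a
# million probes per second from one scanner:
addresses = 2**64
rate = 1_000_000  # probes/second, hypothetical
years = addresses / rate / (3600 * 24 * 365)
print(f"~{years:,.0f} years")  # roughly 585,000 years
```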


This reminds me of a company where the admin used public routable IPs for the internal subnet. The gap is always the human in the loop.

There's nothing wrong with this, and it's common at universities that have large IPv4 allocations.

The subnet he used internally was officially allocated to Oracle, and our printers regularly sent traffic off to Silicon Valley by mistake.

This was the norm for most of the 90's and earlier. NAT was supposed to be the exception, but it became the rule. Now we have an entire generation that doesn't understand that the Internet was always supposed to support end-to-end connectivity.

Plus too many of that generation see NAT as a "security feature" not an "IP bug" and can't seem to (re-)imagine traditional firewall/router design from the time before NAT. "end-to-end connectivity" is not the same thing as "open and unsecured connectivity", we have the security tools to deal with that and they are not and have never been named "NAT".

To say NAT has nothing to do with "security" is a lie. The most common NAT (PAT) configurations are basically acting as stateful firewalls with a default deny policy for all unsolicited inbound traffic to a private, non-routeable network.

They aren't doing that though. The most common NAT (PAT) configurations don't deny unsolicited inbound traffic.

They are of course commonly deployed together with a firewall that does deny that traffic, but claiming that NAT blocks connections because it's usually deployed together with a different technology that handles all of the blocking would also be lying.


> The most common NAT (PAT) configurations don't deny unsolicited inbound traffic

That doesn’t make sense.

If I have a single routable IPv4 address and 100 machines behind it with RFC1918 addresses, how can any possible router “allow by default” say, port 22? Which machine would it route it to? Would it pick the first one? Randomly select one?

Of course NAT has to drop incoming unsolicited packets. Unless you tell it which machine to route them to, it couldn’t possibly know how to “allow” them in the first place.


IP packets have a "destination IP" header field that specifies where the packet goes. The router reads that to figure out where to send the packet.

The only thing NAT does is rewrite the dst or src headers of packets. If there's no rule or state entry that applies to a packet, it doesn't drop the packet. It just leaves the original headers on it.
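A toy model of that logic, just to make the point concrete (addresses are RFC 5737/1918 documentation examples, not any real config):

```python
# PAT keeps a state table mapping (public addr, port) -> (private addr,
# port). Translation is a lookup-and-rewrite; there is no "drop" verb here.
nat_state = {("203.0.113.5", 40001): ("192.168.1.10", 5000)}

def translate_inbound(dst):
    # If a state entry matches, rewrite the destination; otherwise the
    # packet keeps its original headers. Whether it then gets dropped
    # is the routing table's / firewall's decision, not NAT's.
    return nat_state.get(dst, dst)

print(translate_inbound(("203.0.113.5", 40001)))  # rewritten to the LAN host
print(translate_inbound(("203.0.113.5", 22)))     # untouched: no state entry
```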


How would a packet with a destination IP of my internal RFC1918 address have landed on my WAN interface in the first place? It would require my ISP to have a routing rule that sends it to my router. Is that the threat model you’re thinking about?

He's assuming your non-routeable RFC1918 addresses are actually routed from the Internet, which won't be happening in the normal consumer use case.

Even if there's no blocking, using internal RFC-1918 addresses still adds a layer of security since they are not reachable globally.

> basically acting as stateful firewalls

Stateful firewalls are the security tool. NAT being a mediocre-to-passable stateful firewall "out of the box", before you add a real firewall, is the accident (and sometimes the bug). Something doing security by accident (or as a bug) isn't a security tool, just like security through obscurity isn't. You can have stateful firewalls without NAT. Everyone saying that you "need" NAT to have a stateful firewall doesn't understand firewalls, or why "firewall" is and has always been a different word from "NAT". NAT has something to do with security, but only because it's generally paired with a good firewall, not because it's a mediocre firewall mostly by accident.


It's still basically another MAC address or unique identifier for the system, regardless of which local network it happens to be on. Great for being tracked by FAANG, I guess, but I'd rather my devices use a generic local IP and a randomized MAC. There's no reason the refrigerator needs to be uniquely identifiable on the internet.

No, v6 addresses are assigned by (or chosen based on) the current network, not permanently associated with a given system. They aren't MAC addresses or unique identifiers.

I wouldn’t want to rely on a random cafe’s router to do this for me. I would still end up running a firewall more carefully on my end devices, which on my iPhone I’m not even sure I can do. So probably a personal VPN would be a must.

Sure... But wouldn't you want to treat this random network as hostile anyway? The router might already have port forwarding to the IP you grab from DHCP, not to mention other clients on the network. I'm also unsure how a VPN would help against inbound traffic regardless?

> I wouldn’t want to rely on a random cafe’s router to do this for me. I would still end up running a firewall more carefully on my end devices.

Are you not doing that already? If you trust whoever else happens to be on the same wifi in the cafe you're a braver man than I.


Does a VPN prevent inbound traffic on other IPs? If I put my laptop on a VPN, I can still SSH to it on its RFC 1918 address.

It depends on the VPN and its policies. Some deny all local traffic when active, routing everything through the VPN, and only leave an IPv4 /32 route for the default gateway. Some are more permissive.

A VPN can’t prevent inbound traffic but if the VPN alters the routing table it can prevent the return leg from working. This probably isn’t enough to prevent compromise.

> Having globally allocated address space doesn’t actually imply openness of connectability

Of course not, that's not my point. My point is that because your home router still firewalls with both IPv6 and IPv4, any software which relies on being able to "just" connect to a peer over the internet is doomed. Our networks don't work that way any more (they probably did in the early 90s, though).

My point is that even if we had global routability, we still wouldn't have open connectability, because open connectability is a stupid idea. Which means any software ideas people might have that rely on connectability, are already a non-starter. So why do we need open routability in the first place? (Honest question. This is the crux of the issue. Yes, open routability means you can have a host listen on the open internet, but fewer than 1% of people know how to configure their home firewalls to do this, so it's effectively not possible to rely on this being something your users can do.)


> My point is that because your home router still firewalls with both IPv6 and IPv4, any software which relies on being able to "just" connect to a peer over the internet is doomed.

With IPv6 the only thing you need is PCP (or equivalent).

With IPv4 you need PCP/whatever plus a whole bunch of STUN/TURN/ICE infrastructure.

Just hole punching is a lot easier to support than more-than-just hole punching.


If you have PCP (NAT-PMP or whatever), why would you need STUN or TURN? (You still might like ICE, but ICE also makes sense to have on IPV6.)

Yeah, I've had to make this point several times. Getting rid of the NAT doesn't rid you of the need for NAT hole-punching and whatnot, since any sort of firewall will need a hole-punching scheme to allow incoming connections.

I'd say the biggest practical objection (not just "NAT is ugly" or "DHCP is ugly" or "NAT is evil since it delayed IPv6") is CGNAT, which really does put a lot of restrictions on end-users that they can't circumvent. The more active hosts stuffed behind a single NAT, the more they have to compete for connections.

> but fewer than 1% of people know how to configure their home firewalls to do this, so it's effectively not possible to rely on this being something your users can do.

And a chunk of that 1% are on WANs that they aren't authorized to configure even if they wanted to.


Devices behind a firewall being routable with IPv6 doesn't mean you're allowed to connect to them by default. And if you want to, you have to explicitly allow that on the firewall. Just like you would open a port and redirect it with IPv4 before. Only with IPv6 you don't have NAT and a redirect. Some people need it and some don't. Again, just like with opening ports.

Please reread my comment; it seems like you're just pattern-matching to arguments you've heard others make in the past and assuming I'm saying the same thing. I'm not. See the sibling comments for elaboration.

How many users know how to "open a port"? UPnP exists because people don't want to deal with these things.

What's your point?

  |              | IPv4                                | IPv6                                |
  |--------------+-------------------------------------+-------------------------------------|
  | With uPnP    | Unsolicited connection goes through | Unsolicited connection goes through |
  | without uPnP | Unsolicited connection blocked      | Unsolicited connection blocked      |
It's exactly the same in IPv4 and IPv6 just like FeistySkink was suggesting.

>The really sad thing about IPv6 is that it's a relic of a bygone era. The internet was envisioned as a thing where every device on every network could, in principle, initiate a connection to any other device on any other network. But then we started running out of addresses, ISP's wouldn't give you more than one for a household, so we started doing NAT, and that kind of sealed the internet's fate.

I thought the problem was lack of upload bandwidth for almost all households without fiber to the home. Surely, if people had symmetric broadband internet at home, then there would have been commercial solutions to allow devices to connect to each other, thereby decreasing the need for giant cloud providers.


This seems like it's conflating problems?

"All devices are routeable" is a good idea, because it means when we want devices to be routeable there's a simple and obvious way that should be done.

Where we've ended up with NAT is worse, though: we still need a lot of devices to be routable, and so we enable all sorts of workarounds and non-obvious ways to make that happen, giving both an unreasonable illusion of security and making network configuration so unintuitive that we're increasing the risk of failures.

Something like UPnP for example - shouldn't exist (and in fact I have it turned off these days because UDP hole-punching works just fine, but all of this is insane).


NAT is not an illusion of security, it's very practical security.

Just look at the distribution of successful attacks in 2000 and in 2025.


Yes, because NAT was the only notable change in home computing between 2000 and 2025. Totally not all the other changes that happened, 100% NAT. Yep.

> "The really sad thing about IPv6 is that it's a relic of a bygone era."

Yes, and not only for the NAT reason you give; see "The World in Which IPv6 Was a Good Design" by Apenwarr, from 2017: https://apenwarr.ca/log/20170810


Your comment comes from a place of ignorance.

> We don't _want_ end-to-end connectivity.

YOU don't want that.

The wish not to have all devices connected to the internet does not defeat the need for a protocol that allows it. NAT is merely a workaround that introduced a lot of technical debt because we don't have enough IPv4 addresses. And having IPv6 does not mean that you must connect everything to the internet.

IPv6 is good and absolutely necessary. NAT is really expensive and creates a lot of problems in large networks. Having the possibility to just hand out routable addresses solves a lot of problems.


I'm starting to notice a pattern in these threads in HN every time IPv6 comes up. It's kinda like the vim/emacs holy war back in the day. I also see it in threads about Rust vs C++.

There's a kind of holy war, where people pick sides, and if you're "team IPv6", you look for any post that kinda vaguely smells like it defends IPv4, call them ignorant, and respond with the typical slew of reasons why IPv6 is better, NAT is a hack, you can still firewall with IPv6, etc. If you're "team IPv4" you look for the IPv6 posts and talk about how NAT is useful, IPv6 should have been less complicated, etc etc.

It's really tiresome.

Personally, I'm not on either "team". These things I believe:

- We need more addresses

- IPv6 is as good a solution as any, as no solution was going to be backwards compatible.

- We should get on with it and migrate to IPv6 as soon as possible. What's the hold up? I'm sick of running dual stack.

- But don't throw away NAT, please. I want my internal addresses to be stable and if possible, static and memorizable (An fd00::1 ULA [0] is even easier to remember than 192.168.1.1! And it won't change when my ISP suddenly hands me a new prefix.)

- But CGNAT is The Devil. Don't conflate my desire for using NAT at my house for some desire to be behind CGNAT. Again, we should get on IPv6 ASAP to banish CGNAT back to hell.

- Oh and don't pretend that IPv6 would have led to a more decentralized internet. A client/server model is inevitable for so many reasons, and true peer-to-peer where customers just communicate directly with one another is not a realistic goal, even with IPv6. Even with PMP/PCP. Even with DDNS. You just can't design software around the idea that "everyone's router is perfectly configured."

[0] No, I don't actually use fd00::/64 as my ULA prefix, I actually put my cellphone number in there :-P. The point being it's as memorizable as you want it to be.
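For instance, a sketch of stuffing digits into the 40 bits after the fd00::/8 prefix (the number below is a placeholder, not anyone's real one):

```python
import ipaddress

# ULA layout: fdXX:XXXX:XXXX::/48, where the 40 X bits are yours to pick.
# RFC 4193 says they should be random, but nothing enforces that, which
# is how you end up with memorable prefixes like this.
digits = "5551234567"  # hypothetical phone number
prefix = ipaddress.ip_network(f"fd{digits[0:2]}:{digits[2:6]}:{digits[6:10]}::/48")
print(prefix)             # fd55:5123:4567::/48
print(prefix.is_private)  # True: inside fc00::/7
```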


>We need more addresses

True.

>IPv6 is as good a solution as any...

There aren't any other practical solutions. Debatable if it was the best solution but it is what it is.

>We should get on with it and migrate to IPv6 as soon as possible

Most LANs will run IPv4 forever. IPv6 offers nothing to LANs larger than home labs (where SLAAC is not enough) but smaller than huge companies (where there are more than enough addresses even in IPv4; 240/4 is for every practical purpose private). The community's hostility to NAT/NPT means LAN admins will get it de facto by running another stack internally.

>I want my internal addresses to be stable and if possible, static and memorizable.

Right. But ULAs have ridiculous routing rules. It would take at least a decade for fixes to be deployed everywhere.

>Don't conflate my desire for using NAT at my house for some desire to be behind CGNAT.

Right. But most users don't care.

> A client/server model is inevitable for so many reasons.

Right.


> Right. But ULAs have ridiculous routing rules

Could you elaborate? I’m able to run ULA-only just fine, with one line in my pf.conf to route 1:1 to my GUA prefix. The NAT is stateless.

Maybe you’re referring to the fact that typical OS’s will prefer to use IPv4 over a ULA if both are available? I’ve noticed that, and indeed it’s unfortunate.


>typical OS’s will prefer to use IPv4 over a ULA if both are available?

Yep. There's an RFC for this. I guess it would take a decade to be everywhere.

The Global > Local rule is also unfortunate in some configurations*, but that's more of a result of the end-to-end idea and software not realizing a device can have more than one address...

* One example from this thread: https://news.ycombinator.com/item?id=43070290


Is the alternative reality one where all devices can connect to all other devices and come with built-in mitigations for the problems that firewalls are normally used for?

Maybe. In the alternate reality, we probably would have the same threats on the internet as we do today, and the same boneheaded software that misconfigures things, so my guess is that we’d still have consumer firewalls at the gateway that just default-deny everything. Maybe there would be better standards about auto-configuring these firewalls though. (Or maybe the standards we already have would have just been better supported?)

But for certain, today if we had IPv6 everywhere, it still wouldn’t let you design something like a VoIP phone (or a pure peer-to-peer version of FaceTime) that just listened for connections and let other people “call” you by connecting to your device. That software would only be usable by people who know how to configure their router, and that doesn’t make for very good market penetration. You still need something like UPnP, so you’re basically right back to where we are today. At least the connection tracking would be stateless, I guess?


> You still need something like UPnP

I have two SIP phones both wanting to register port 5061 to forward to their address. How does this work with IPv4/UPnP?


I’m not talking about SIP, I’m talking about a hypothetical piece of software that an IPv6 proponent might claim is made possible by unique GUAs for each device. My point is that the GUA alone isn’t enough to make this work; you still need:

- UPnP or something like it

- A place to register your address, since addresses change all the time even in IPv6

And if you need these two things anyway, you can do this with IPv4, with the added change that you also include the port when registering your address, which makes the “multiple phones in a network” thing work.


> I’m talking about a hypothetical piece of software that an IPv6 proponent might claim is made possible by unique GUAs for each device.

Like, say, a SIP client? Wanting to listen on the standard SIP port?

Or say two web servers that both want to listen on :443 and you don't want to have to reverse proxy them from another box.

> UPnP or something like it

> https://datatracker.ietf.org/doc/html/rfc6887

Port Control Protocol seems to be pretty well supported on most of the devices I own even in IPv6.

And in the end I'm pretty boned if I'm on CGNAT usually. I can pretty much never get my Switch to do matchmaking with peers on IPv4 when I'm on a CGNAT network. If we were all on IPv6, it wouldn't be a problem.


> Like, say, a SIP client?

No, not like, say, a SIP client.

You seem pretty intent on not reading my whole comments or something. Mind responding to what I’m saying and not just making up your own arguments?

I’m saying “ipv6 alone won’t solve X”, and you’re saying “but what about Y?”


I'm saying "but what about Y" and you're acting like Y is never a valid issue that anyone ever seems to have.

So once again, how do two devices both share port 5061 on a UPnP NAT with a single public IP address? Or even worse, if they're CGNAT'd? It's a simple answer with IPv6...


Again with the not reading. Let me try to sum up my position so we can end this whole “nuh uh!” nonsense.

IPv6 was designed in a world where we thought we’d have a truly peer-to-peer internet, where anyone could just talk to someone else’s device. This obviously isn’t what happened. An IPv6 proponent may say “this is because NAT breaks the necessary assumptions”, but that’s extremely oversimplified and wrong.

The reason that, when I use my iPhone to FaceTime my mom’s iPhone, it doesn’t just use SIP to contact her phone, isn’t because of IPv4 and NAT. It’s because the very idea is nearly impossible to implement at scale, even if everyone had a unique address.

I’m aware you can’t have multiple devices behind a NAT address listen on the same port. Thank you for pointing that out, you’re very smart. But it really doesn’t address my point at all, does it?

My point being that the reason we don’t use SIP and related technologies for billions of end user devices today isn’t because of NAT. It’s because of the myriad of other problems that would need to be solved for it to work reliably, and because of NAT. Eliminating NAT wouldn’t really meaningfully change the landscape of how software works. FaceTime calls would still go through Apple servers. People would still use centralized services run in real data centers, etc.


> I’m aware you can’t have multiple devices behind a NAT address listen on the same port. Thank you for pointing that out, you’re very smart.

Ah, finally, you do acknowledge that there are issues that aren't actually solved with IPv4 and NAT. Thanks.

> But it really doesn’t address my point at all, does it?

Let's go back to the first thing I was responding to.

> it still wouldn’t let you design something like a VoIP phone (or a pure peer-to-peer version of FaceTime) that just listened for connections and let other people “call” you by connecting to your device.

With a public IP address, PMP enabled on my router, and DNS registration, I have this today. My SIP phone here is sitting by, waiting for incoming calls. I don't need Apple's servers for this. Anyone with a SIP client who knows the name can get to it (or who trawls IPv6 space). This becomes a headache with IPv4 with multiple devices all wanting that 5061 port. Sure, one could just tell people the port and have lots of random ports assigned, but that's just one more bit of information to get lost, one more mapping to maintain, etc. Imagine if Amazon ran their ecommerce site off a random high-numbered port instead of :443; think they'd get as much traffic?


> With having a public IP address, PMP enabled on my router, and DNS registration I have this today.

That’s cool for you, congratulations.

I’m talking about the broader internet at large, the billions of users on it, and the software that is written for those billions of users. This software does not behave the way your cool phone does, because people’s IPs change, and they’re behind firewalls they’re unable to configure or don’t understand how to. We don’t write software this way, for a lot of good reasons.

Now, I keep talking about this, and then you bring up “but I have this use case, what about that?”, demanding an answer as if I give a shit that you want to run a SIP phone at your house and are willing to configure your firewall, etc.

My whole point (!!!) is that what you’re doing is not what the billions of people using the internet are doing. When I bought my mom an iPhone, I didn’t have to tell her “ok so, FaceTime only works at your house, you have to reconfigure this DDNS service when your IP changes, you have to configure your router this way, and don’t ever leave the WiFi network or it won’t work, but thank god for ipv6, because it means dad can do all this too for his phone!” No. Because FaceTime doesn’t work that way, nor could it ever, because doing true peer to peer calling was never a thing that would have ever worked at scale. It’s not being held back by NAT. It’s being held back because it was never gonna work in the first place.*


> and you’re willing to configure your firewall

I didn't have to configure the firewall, that happened automatically with PMP. That thing you didn't even know existed until a few hours ago.

But somehow you know all that is and is not possible. All that could have been if things were different.


Sick burn bro

I mean, if you’re just going to sling ad hominems around and invent your own argument so you have a chance at winning it, sure. I’m not aware of the specific RFCs du jour that are used for auto-configuring routers. But after spending some time looking it up, it seems relatively niche (not as widespread as UPnP), and it appears the last 3 routers I’ve owned don’t support it. (I run an OpenBSD box with pf currently, but my UniFi gateway before that didn’t support it, and the shitbox gateway Comcast gave me before that certainly didn’t support it. I could run a service on my OpenBSD box to support it, but it wouldn’t make a difference because I’m perfectly capable of editing my pf.conf as it is.)

But if you’re gonna dig up other posts of mine to try and get a dig in at me, maybe at least read the rest of the post? You certainly haven’t done a good job of that so far.

Because I indicated that of course a protocol like this ought to exist, but what percentage of users of the broader internet are actually running a router that supports this? If you wanted to come out with a video calling app that only worked if your users had PMP working, what market share would you be losing out on? That’s the topic of discussion here after all (not that that’ll stop you from inventing your own discussion as you’ve continued to do.)


> I’m not aware of the specific RFC’s du jour

Funny way to say you don't know what you're talking about. If you don't know what technologies are actually out there how can you say what is and is not possible?

> but what percentage of users of the broader internet are actually running a router that supports this?

Most of them, excepting those running a home-rolled pf setup who can't be bothered to read about "the latest" (over a decade old) RFCs.

And yeah, last I ran a UniFi gateway it supported PMP. That was several years ago. If it's a recent model device with halfway decent IPv6 support it's practically a sure thing it's got PMP support. Maybe disabled, but it supports it. Same with UPnP, might be disabled, but probably supported.


I guess you really enjoy feeling superior so I’m just going to give you this one. You win. You’re very smart and you know more about router software than I do.

(I’ll leave you to ponder whether this changes my point at all, but I don’t think it really matters. You seem to be pretty fixated on getting your digs in, so maybe we’ll just end the discussion here. Good day.)


> I guess you really enjoy feeling superior

> Sick burn bro

> My post was basically the biggest pile of sarcasm I could conjure and you still took it seriously, congratulations!

> That’s cool for you, congratulations

> not that that’ll stop you from inventing your own discussion

> I don’t think I’ve met someone that truly thinks technological progress stopped in the 1990s and that URL’s and DNS are all we actually need.

And yet you accuse me of ad hominems for pointing out your own acknowledgement that you haven't kept up with the last decade+ of networking tech, and you say I have a need to feel superior. Pot, meet kettle, buddy.

You've spent so much effort telling me what's possible or not, berating me multiple times, while acknowledging you haven't kept up with decade+ old tech.


The two devices shouldn't both need to be on port 5061: we have numerous solutions to that problem at this point, from specifying the port as part of the connection to auto-detecting it via an SRV record in DNS. Yes, this future requires some small changes to existing software so it stops assuming hardcoded ports... but there is no way those changes would have been (or frankly even still are) harder than the insane alternative of reimplementing and redesigning the entire world to support IPv6, which requires its own changes to every protocol (including SIP) and breaks a strict and extremely large superset of the very references that merely needed a port number.
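For what it's worth, the SRV approach looks like this (the record contents are made up, and it's parsed by hand here instead of with a real resolver):

```python
# An SRV record carries the port alongside the host, so a second phone
# can advertise 5070 while the first keeps 5061, and clients never have
# to assume a hardcoded port.
record = "_sips._tcp.example.com. 300 IN SRV 10 60 5070 phone1.example.com."
*_, priority, weight, port, target = record.split()
print(f"connect to {target}:{port}")  # phone1.example.com.:5070
```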

Why would they need the same port?

Because they don't want to have to rely on an external service to find the right port. Just like how your browser doesn't need to be told to connect to :443 for HTTPS. If you're connecting encrypted SIP, the port is assumed to be 5061. Just like HTTP is assumed to be 80, SSH is 22, etc.

In what world do you not need to rely on an external service to find information like this? IP addresses change, people move around, etc etc.

The topic of discussion right now is peer-to-peer software, which is the quintessential thing that is always lauded by ipv6 proponents as the killer app of ipv6, because each device has its own globally-routable IP.

But “what address do I send these packets to?” is like, 10% of the issue with designing p2p software. Users' IPs are always changing. They move around. They go behind firewalls that will block their traffic. They go behind firewalls they don’t have permission to reconfigure. This is the case in IPv4, and it will always continue to be the case even if we were 100% IPv6.

IPv4 can work today for p2p use cases if you don’t make the assumption that the port is static. But that’s like 1 of N assumptions you have to check if you’re writing peer to peer software. The other N of them are all still issues, even in IPv6.


> But “what address do I send these packets to?” is like, 10% of the issue with designing p2p software

So wouldn't it be nice to go ahead and solve that 10% of the problem instead of just limping along with it?


I hope you're not implying that IPv6 solves the problem of knowing what address to connect to.

If you can avoid an explicit port, that can be nice. But anywhere you're using an IP, v6 without a port is longer than v4 with a port. An implicit port is only useful when you're using DNS. (And if you're designing a service, you can choose to use DNS records that contain the port.)


I'm all for things to start using SRV records as well!

But the external service needs to know your ip, so why not port as well?

We have this thing called NATMap: https://github.com/heiher/natmap

It gives you a public dynamic IP + port combination if your network is NAT1 all the way to the internet. Both IP & port being dynamic kinda complicates things, they ended up inventing a new DNS record type, "IP4P" where IPv4 address and port are encoded as an IPv6 address, and modified WireGuard/OpenVPN clients are required.

We are supposed to solve this using SRV records, but I don't think many consumer-facing apps can do this.
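For reference, an SRV record carries the port alongside the host, so clients need not assume 5061; a hypothetical zone snippet (names and addresses are examples, not from the thread):

```
; clients query _sips._tcp.<domain> and learn priority, weight, port, target
_sips._tcp.example.com. 3600 IN SRV 10 5 5061 sipserver.example.com.
sipserver.example.com.  3600 IN A   192.0.2.10
```

A client that understands SRV would connect to sipserver.example.com:5061, but the port could be anything the operator chose.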


Did you have to look up something to know this site is hosted on :443 or did you just make an assumption? Some kind of standardized port you connected on? Wouldn't it be nice if every device could just assume the standardized ports were available for them if they wanted them?

VOIP systems don't really work like random webservers though.

There's little reason why they couldn't though. Why shouldn't I just call Apple by doing sip://support.apple.com ? Why can't people just call me at sip://desk-phone.vel0city.whatever (they actually can, if they know the right name) or sip://cell.hikikomori.apple.net ?

Instead, we're locked into proprietary platforms or relying on old phone systems.


Ok so try calling my phone at sip://ninkendo.example. Oh, I’m actually not at my house. I’m traveling and my device is moving from one cell tower to the next, getting a new IP address every few minutes. Then I get to my friends house and his router is gonna block unestablished connections so you better call me soon.

No, there’s a reason software like FaceTime, and calling in general, are mediated by centralized services and implemented as client-server. Because relying on your users to have stable IP’s, and to always be behind a properly configured firewall, would be an utterly insane way to design something.

Hence “ipv6 is a relic of a bygone era”, because in the era we live in now, IP addresses change and users move between networks constantly, and NAT is very much the least of our problems.


FWIW IPv6 can handle this scenario just fine, by letting you take your home IP with you when you move to other networks. I don't think "v6 is a relic of a bygone era" is fair to say when it has a way to handle this exact problem.

How does ipv6 let me take my IP with me, exactly? Citation needed. I mean, I have WireGuard installed on my phone and I get a tunnel to my house wherever I go, but that’s not really the same thing, is it? You’re saying there’s a way for me, wherever I go, my house, 5G, my friend's house, to just have traffic routed to my phone for a consistent IP address wherever I go? Not a VPN? Through something only IPv6 provides? Could you explain what this is? Because my ISP at home won’t even give me a consistent IP, not even a stable prefix.

It kind of exists in IPv4, but is a lot messier. Also, the lack of IPv4 addresses makes something like this a lot harder to do in IPv4.

https://datatracker.ietf.org/doc/html/rfc6275


> my device is moving from one cell tower to the next, getting a new IP address every few minutes

You're pretty much never getting a new IP at every cell tower.

> Then I get to my friends house and his router is gonna block unestablished connections so you better call me soon.

PMP can solve that.

> Because relying on your users to have stable IP’s

Not a stable IP, but some kind of stable identity. Like a DNS name. That can be provided by someone like Apple, or Google, or whoever. It doesn't need to be ultra-centralized by only a single thing controlling it.


[flagged]


> I don’t think I’ve met someone that truly thinks technological progress stopped in the 1990s and that URL’s and DNS are all we actually need

I never said this. You're putting some rather strong words in my mouth. Just giving an example of something that could have been if it wasn't for NAT and an acceptance that you need a third party platform to allow you talk. Feel free to change out the pieces to whatever.

Personally I'd like to be able to just dial a FaceTime-like client from any device instead of just what Apple blesses with accounts Apple allows. And have that stack just be the norm. I understand NAT isn't the only headache that prevents this, but it is one of the several that does.

I don't think it's crazy to think easy dynamic DNS services could have become common if advertised right. People don't know how phone numbers work but in the end they know how to use them. The tech for routers to auto configure firewall rules based on client requests exists, and potentially if everyone-has-a-public-ip was common we'd have seen less reliance on brittle edge firewalls and NAT to provide so much of our security.

Killing NAT doesn't solve all the problems, but it does solve some of them. And I'd prefer solving the ones that help give users more freedom instead of accepting things like CGNAT.


This is a misconception holding back the adoption of IPv6.

Routers have a stateful firewall meaning by default they only allow incoming packets that belong to a connection that was initiated from the inside. By default you also have this kind of firewall on every operating system. NAT adds 0 additional security.
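The stateful behavior described above is a single firewall rule, independent of NAT; a minimal nftables sketch (illustrative, not a complete ruleset):

```
table inet filter {
  chain forward {
    type filter hook forward priority 0; policy drop;
    # allow replies to connections initiated from the inside; drop the rest
    ct state established,related accept
  }
}
```

The `inet` family applies the same rule to both IPv4 and IPv6, which underscores the point: the protection comes from connection tracking, not from address translation.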


NAT was invented to connect entire businesses to the internet that would otherwise have had to manually reconfigure every single device for the internet. Keeping IPv4 going amidst address exhaustion was a lucky side effect of it.

I don't want all my devices to have their own unique address. The internet dream that created ipv6 is dead. It's malware, spyware, surveillance capitalism now, and the more obscurity through layers of addresses, the better.

Oh god, no, no, no. It's so frustrating to hear this over and over again. NAT is not a firewall. A firewall is a firewall and any reasonably competent network admin uses one (and likewise any reasonably competent ISP supplies equipment with one to users who don't know any better). A little knowledge is a dangerous thing...

Can you pick a specific thing in my post to argue against? I never claimed NAT and firewalls are the same thing. I claimed that whether you’re using IPv4 with NAT, or IPv6 with a default-deny firewall, you can’t have software just “assume” that it can listen to incoming connections and have things just work. You need to go to your router and configure something. Since approximately 0% of people in the world at large know how to do this, it means software can’t work this way in general, and IPv6 doesn’t change this.

Ok, but things "just working" isn't the only advantage of v6. Are you aware some people don't even get v4 address at all? Have you ever been unfortunate enough to have your LAN v4 range overlap with a network you VPN to (like a corporate network)? What if you want to run two web servers?

A lot of people seem to think I’m anti-ipv6… I’m not, actually. I run it at home, I evangelize for it, I think it’s the future and the sooner we move to it, the better. It solves real problems, the shortage of addresses being the most obvious.

What I’m against is the oversimplification that if we only had ipv6 from the start, that the internet would have turned out very different and we’d be directly connecting to each other through direct unsolicited connections or something.

No, there are a lot of reasons direct unsolicited peer to peer communication are not a good expectation to have on the modern internet. NAT makes it tough, yes, but so does security (you’re gonna want to firewall your network at the gateway anyway, even in v6), changing addresses, mobility, etc.

For instance, even in my network at home, I don’t think using my real GUA prefix on each device is a great idea. Because the prefix changes, I have to use ULA for any static configs (dns, etc), which means the GUA address is this extra address I don’t use, and it only complicates things. So I’m moving toward ULA-only with NAT66, which I think is a more sane choice. I get the benefits of lots of addresses, which is great, but instead of my firewall simply allowing traffic to my public stuff, I just NAT a public IP to each public thing I want.


Maybe I'm blind, but I haven't seen anywhere else in the thread where "we'd be directly connecting unsolicited" is mentioned apart from where you've prompted it.

I don't think any reasonable engineer expects that that end would ever be the case, even if it was one of the stated original "dreams" of IPv6.


> I don't think any reasonable engineer expects that that end would ever be the case, even if it was one of the stated original "dreams" of IPv6.

Then imagine my frustration when that's the exact argument people are giving: https://news.ycombinator.com/item?id=43072844

Like, I come up with these scenarios as a strawman to illustrate that direct client-to-client connections aren't going to work, and people come and say "actually it totally could work! We have PCP, and DDNS! Cell towers have mobile IP's too so you're only going to change IP addresses 3 times on your drive to your friend's house! And his router will also support PCP so it could have totally worked this way!"

I literally came up with the example to sound as crazy as possible, and people still say "Yup, that's exactly how things could have worked if we had IPv6. Now look at how stupid ninkendo is for not knowing about PCP, the moron~"

Like imagine if steve jobs came on stage in 2011 to introduce FaceTime, and said "You can make video calls to other people on iPhones, it's great! But you have to know their IP address. Or subscribe to a DDNS service. Make sure to use really low TTL's on your DNS records in case you roam to a different network. Oh and you have to have a router which supports PCP. Every router you connect to must support it." Even in a world where IPv6 was everywhere, that would be insane. (Well, other commenters seem to think that's exactly how phones ought to work? Maybe I'm the crazy one?)


How would you design a NAT-based LAN that shares nothing in common with a firewall?

1 to 1 NAT isn't doing any firewalling. Source NAT isn't doing any firewalling. Static NAT isn't doing any firewalling. All of these will generally just send traffic along if there's no other firewall rule associated.

When most people say "NAT", in the home user / consumer sense, they really mean "PAT": ports mapping to multiple private IPs using a single public IP.

"PAT" is not doing any firewalling either.

(It does rely on state tracking, which it shares in common with stateful firewalls, but you could run a LAN with either a stateless firewall or no firewall if you really wanted to meet the "nothing in common with a firewall" requirement.)


> Note: no, we could not have done it any other way. Any upgrade strategy to IPv4 that extended the address size would have necessitated having two internets

No, this isn't right. Instead of having IPv6 we ended up extending IPv4 by using port numbers as extra address bits and using NAT. This transition was painless (for the most part), and didn't require building a second internet.

In a better and saner world we could have got roughly the same thing - except conceptually simpler and standard - instead of IPv6.


I think I’d mostly agree with you if only one layer of NAT is in play, but in reality we have CGNAT and it makes it supremely difficult to host services without a very cooperative ISP (laughable.) And there are more internet subscribers in the world than there are IP’s for them, so CGNAT is required for at least some unfortunate part of the population.

In reality we just really do need more IP’s. 32-bit addresses with 16 bits of ports isn’t enough.


> makes it supremely difficult to host services without a very cooperative ISP

Sadly, that's a feature, not a bug.

Two layers of NAT sucks, but I guess it could have been avoided if a designed (instead of ad-hoc) solution to the problem was devised.

> And there are more internet subscribers in the world than there are IP’s for them

Is this really true, did somebody actually run the business math? "Internet subscriber" roughly corresponds to "household", not to "person".


When you say IPv6 isn't hard, do you mean implementing a v6-only network stack isn't hard, or understanding IPv6 isn't hard? Depending on which layer one lives on, I feel like the lack of network effect makes supporting v6 difficult. GitHub still isn't on v6 and some load balancers will prefer the v4 address over v6, etc...

Understanding it isn’t hard.

Obviously you can use (just) IPv6, e.g. GitHub.


Good news: it's 20 years, not 100, at the most :-)

https://labs.apnic.net/index.php/2024/10/19/the-ipv6-transit...


You can't just assume linear growth. Your link acknowledges this, and the "prediction" you refer to has the right caveat: "we can look at the time series shown in Figure 1 and ask the question: 'If the growth trend of IPv6 adoption continues at its current rate, how long will it take for every device to be IPv6 capable?'". It's using such a naïve model to illustrate that we're still nowhere close to completing the transition, not to say that we'll be done in 20 years at most as you suggest.

Adoption will slow down and my guess is that we'll be stuck with a long tail forever, and only time will tell if that happens at 50% adoption or 90% adoption.


> Adoption will slow down and my guess is that we'll be stuck with a long tail forever, and only time will tell if that happens at 50% adoption or 90% adoption.

But that long tail will probably look like figure 4 of that link, only reversed: IPv4 running mostly as tunnels over a core IPv6 network. Some large cellphone networks are already at that stage, using things like 464XLAT to run IPv4 with NAT over an IPv6 network.


I'm not talking about a long tail of systems and networks which still have working IPv4 support. I'm talking about a long tail of systems and networks which still don't have working IPv6 support.

I would agree with you were it not for the fact that I'm personally investing my time into a FLOSS project to make sure the growth does not slow down by making “tomorrow NOT like today”, but gets a kick in the butt once we reach the 50% deployment inflection point instead. Stay tuned ;-)

Whatever you're cooking up, I wish you luck! My life would be significantly easier if I could just completely ignore one network protocol, be that v4 or v6. So far I have mostly just ignored v6 and everything I do uses v4, but I have no strong preference (except that v4 addresses fit in my brain while v6 addresses do not, which is honestly a not-insignificant reason why v6 is hard)

Here's to hoping, but I agree one way or another one stack needs to go :-)

BTW: Contact info is in my profile if anyone wants the inside scoop. The project just isn't fully ready for public launch yet, but we're well into the cooking.


> and the "dream" of IPv6 will become a reality

Depending on what that dream is, it may already be a reality in the upper layers (WebRTC, WebSockets, QUIC/HTTP3, etc). This may be why not many really care whether IPv4 ever goes away.


ISPs aren't helping either. Ziply Fiber only provides IPv6 with their 10 Gig and up plans, starting at $300/month.

A prefix could have been allocated in IPv6 containing the entire IPv4 space, and then all ISPs could have simply translated between the two at the edge, which would have made it possible for all normal consumer/company devices to carry only an IPv6 stack.

There is such a prefix, though, but the problem is the end user devices themselves (or a few applications) are not always modern enough to have decent operation with an IPv6 stack. Less so these days, though. See https://en.wikipedia.org/wiki/IPv6_transition_mechanism, ::ffff:0:0:0/96.

That said, a lot of posts here don't seem to reckon with the fact that a slim majority of www.google.com connections in the United States are via IPv6, and a super-majority from India, Germany, and France. Comcast, T-Mobile, Verizon, as far as I have experienced, these all default to IPv6. While dropping IPv4 support is both a worthy, distant goal and sometimes used in goal-post moving rhetoric, it's not like nobody uses IPv6...rather, mobile broadband networks have depended on it for over a decade (see T-Mobile's deployment of 464XLAT)


"Simply" is doing a lot of heavy lifting there.

There are multiple v6 prefixes already allocated for containing the v4 space. It would help if all ISPs ran NAT64 routers on the standard NAT64 prefix, but e.g. how would that function with software that only works with v4 addresses?


It wouldn't; that software would update or get left behind, too bad.

Then you're back at having "two internets" again. (Well, I don't agree that it's two Internets, but you'd be stuck with the same situation we have already because people would continue to use v4 in order for that software to continue to work.)

One major annoyance of IPv6 is that it lacks support for IPv4's whole 127.0.0.1/8 range.

What are the use cases for it beyond the regular .1 address that you don't get from ::1?

It's very useful for cooperatively pretending that a single machine is actually a network of machines, without all the madness of network namespaces.

Basically every server program ever supports the notion of "which IP address am I supposed to bind to?" Normally the answer is either "localhost" or "whatever my LAN or WAN address is, for a particular interface". And specifying this (both for servers and for clients) is much less effort than replacing all the standard-ish ports with ad-hoc ones.


I love this feature so much when jumping back and forth between two projects that, when devving, each expect exclusive control of Postgres. All I have to do is set the bind address of one to .1 and the other to .2 and all is good.
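A minimal sketch of that pattern (Linux routes all of 127.0.0.0/8 to loopback, so the second address needs no configuration; the port number here is arbitrary):

```python
import socket

def listen_on(addr: str, port: int = 15432) -> socket.socket:
    """Bind a listening TCP socket to a specific loopback address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((addr, port))
    s.listen()
    return s

# Two dev servers sharing one port, isolated by loopback address.
a = listen_on("127.0.0.1")
b = listen_on("127.0.0.2")  # works out of the box on Linux; not on stock macOS
print(a.getsockname(), b.getsockname())
```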

You can literally do the same thing with IPv6.

Just assign multiple IPv6 addresses to your loopback interface and everything will work as expected. You can use either a private (ULA) IPv6 range or even the link-local range.

There’s no need to waste thousands of addresses on this use case.


Which is configuration that requires arcane invocations and permissions I might not have.

As opposed to IPv4, where everything Just Works™.


I wish that were true. macOS for example only assigns 127.0.0.1/32. You have to assign extra IPs manually, the same way you would on IPv6.

That is a Mac problem, not an IPv4 problem. The Mac does a lot of pretty stupid things.

As someone who used Windows and Linux my whole career and now am forced to use Mac for a new job; I completely agree, I was right, Mac is a very weird thing overall.

Same thing on FreeBSD. It's just not something you can rely on and the spec doesn't say it needs to be assigned.

Yeah, and I always do.

Because parent is right, having multiple loopback addresses is just so nice for a lot of things.


…until it doesn't, when something looks at the configured addresses. The "other" addresses in 127.0/8 will easily have weird behavior in nontrivial setups.

> waste thousands

Yeah would hate to run out


I was counting on that just a few days ago and failed spectacularly when Windows SMB client refused to connect to 127.0.0.2

> we're still not even at 50% IPv6 usage

Partially due to companies like Metronet whose focus is on growing rather than doing what it should be doing.


> What is hard, is dual stack networking. Dual stack networking absolutely sucks, [...]

Clearly you have gotten some unfortunate hands on experience with those edge cases. In my uses the dual-stacking of IPv4 & IPv6 has been a big benefit in that I have to worry a lot less about locking myself out of systems, as I can always reconnect and correct configuration mistakes on the second stack.

Comparing the IPv4+IPv6 dual-stack story to the one from the 90s of IPv4 with IPX/SPX (Novell Netware) and/or NetBIOS (Microsoft Windows), the current state is a lot more smooth and robust.

I've so far only run into 3 issues over the years (2013 till now): a local train operator running IPv6 on routers that didn't properly support non-default MTU sizes (fragmented packets getting dropped), and Microsoft, where GitHub is still IPv4-only and Teams recently developed an IPv6-only failure on a single API endpoint.

Should we ever turn off IPv4 during my working life-time, I hope we have at that point introduced a successor to IPv6, so I can keep using a dual- or triple-stack solution, using different network protocols for robustness.


I’m gonna get downvoted for this, but IPv6 has lost and they have to come up with something else with enough benefit for the user to achieve widespread adoption to beat IPv4.

IPv6 certainly hasn’t “won” in the sense that it’s nowhere near where it should have been by now, but I would definitely not say we need to make something else that also isn’t ipv4 and thus also isn’t forward compatible.

IPv6 is still the only game in town as a successor technology to IPv4, and it makes no sense to invent some other new thing.

The real question at hand, is do we keep trying to make IPv6 happen, or do we give up and just deal with IPv4 until the end of time. (It’s not an obvious answer. L7 load balancing and IP anycast makes it so you can basically run an entire tech company on a single IPv4 address. CGNAT is becoming more and more commonplace. The pressure on IPv4 availability is alleviating all the time. Maybe 4 billion addresses will ultimately be enough for humanity, who knows.)

But I mean it when I say that if we continue as is and just sorta gradually, grudgingly add IPv6 on a best-effort basis, it’ll likely be 100 years or so before we can actually turn off IPv4.


The dual stack strategy was criticized 20+ years ago on these grounds, it seemed pretty obvious to me, and as you point out critics were right.

You can run a pure IPv6 network that can connect out to both, though. You don't need to run dual stack.

We don't live in the world in which IPv6 was a good design. Please avoid acting like IPv6 makes for a good design, because it doesn't.

See apenwarr's by now nearly a decade old blog post "The world in which IPv6 was a good design": https://apenwarr.ca/log/20170810, previous discussions of it here: https://hn.algolia.com/?query=The%20world%20in%20which%20IPv..., as well as the follow up blog post here: https://apenwarr.ca/log/20200708, previous discussions here: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

And the issues with IP (and by extension TCP; ignoring the fundamental results from the Delta-T research at Lawrence Livermore keeps biting us all in the ass), whether IPv4 or IPv6, go even deeper, far deeper, than what that blog post already tells us, so here, have this—flawed in some minor aspects, which makes CCIEs bury their heads in the sand of denial about the deeper point of it—polemic for dessert: https://web.archive.org/web/20210415054027if_/http://rina.ts...


> Please avoid acting like IPv6 makes for a good design, because it doesn't.

Where did I give that impression? I tried my hardest in that post to not make a judgement call one way or the other as to whether it was a good design, only that dual stack fucking sucks.

My followup post in fact, totally agrees with you? https://news.ycombinator.com/item?id=43070286


It's almost exactly just IPv4 with longer addresses. If IPv6 isn't a good design then IPv4 is even worse because of the address shortage.

>It's almost exactly just IPv4 with longer addresses

No it's not. SLAAC and NA make it a totally different beast.

If ipv6 only had dhcp6-pd, it would have been "just like ipv4 with longer addresses".


> No it's not. SLAAC and NA make it a totally different beast.

How many ways to be assigned an ipv6 address are there anyway? Two or three too many?

Why should the ISP know what devices I have behind their router?

Considering the amount of enterprise-ish thought that went into ipv6, they thought precious little of privacy, for example.


Their router needs to know what devices are behind it so it can route to them. But if by "their router" you meant "your router"... your ISP doesn't need to know at all. They send all traffic for your prefix to your router, and your router figures it out from there. Your ISP has no idea what devices are involved.

The existence of privacy addresses suggests that some thought was put into this.


There haven't been "your routers" for about 10 years. The router is given to you by the ISP, and you don't have the root password for it; you can only reboot it via an app on your phone.

The prefix is also not delegated to that router; the router does NDP proxying, and the ISP only routes the IPs which have the corresponding NAs recorded in its database.


Well, they also junked useless un-scalable things like broadcast and ARP.

Meanwhile adding science fiction things like mobile IP.

That's a solution to a problem you don't personally have, and it exists in IPv4 too.

Your cellphone company uses it - or would like to.

It's like SCTP: just because you don't use it doesn't mean there isn't a big group of people who do.


Yeah, that one's a bit silly.

That's the adaptation layer between Ethernet and IP.

Broadcast was renamed to "all nodes multicast". ARP was renamed to Neighbor Discovery.

Slight improvements: ND isn't broadcast, but multicast based on several bits of the IP address. This allows NICs to filter most of the irrelevant ones based on multicast MAC address. And subnet broadcast addresses were removed. There's only local broadcast to your own subnet and not to someone else's subnet, since IPv4 routers found that to be a bad idea and mostly started blocking it anyway.
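The multicast trick mentioned above is the solicited-node address: ff02::1:ff00:0/104 plus the target's low 24 bits. A short illustration (the input address is a documentation example):

```python
import ipaddress

SOLICITED_NODE_BASE = ipaddress.IPv6Address("ff02::1:ff00:0")

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    # Neighbor Solicitations go to the base prefix OR'd with the
    # target's low 24 bits, so NICs can filter them in hardware.
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return ipaddress.IPv6Address(int(SOLICITED_NODE_BASE) | low24)

print(solicited_node("2001:db8::4321:1"))  # ff02::1:ff21:1
```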


that's not what this article says. I dug through it, and the main point seems to be 'It would have been beautiful. Except for one problem: it never happened.'

there is nothing really wrong with the design of ipv6 relative to ipv4


This is a little self-contradictory: saying that it's "not that different", then going on to mention "the strategy of having two internets".

Having two internets suggests that it's totally different, in the sense that there is no address space overlap between IPv4 and IPv6, an inherent design flaw of the IPv6 protocol. Just to provide the archetypal links:

This was documented about 25 years ago by DJB:

https://cr.yp.to/djbdns/ipv6mess.html

And has been repeatedly discussed on HN:

https://news.ycombinator.com/item?id=10854570

As you mention, IPv6 has existed for the majority of the commercial internet's history; now, 25 years later, it's still not the default transport protocol.

It _was_ possible to create an address space extension that was IPv4 backwards compatible, this option was just not chosen, and now we're still dealing with the dual stack mess.


> Having two internets suggests that it's totally different, in the sense that there is no address space overlap between IPv4 and IPv6, an inherent design flaw of the IPv6 protocol.

It is not a design flaw of IPv6, it is a limitation of the laws of physics: how can you fit >32 bits of data (IPv6) into 32-bit data structures (IPv4's struct in_addr)?

Even if the IPng protocol could recognize IPv4 addresses and send out data, how would a host which only understands 32-bit addresses send a reply back to the IPng host? If a packet with a >32b address comes in, how would a 32b-only host be able to handle it?

From Bernstein:

> In other words: The current IPv6 specifications don't allow public IPv6 addresses to send packets to public IPv4 addresses. They also don't allow public IPv4 addresses to send packets to public IPv6 addresses.

How would a non-updated IPv4 system know about the IPng protocol? If a non-updated router had a IPng packet arrive on an interface, how would it know how to handle the packet and routing?

All the folks criticizing IPv6 don't ever describe how this mythical system would work: how do you get a 32-bit-only system to understand more-than-32-bits of addressing? Translation boxes? Congratulations, you've just invented 6rd BR or NAT64, making it no different than IPv6.

The primary purpose of IPng was a bigger address space, but how would that work without updating every single IPv4 device? You know, like had to be done for the eventual IPng, SIPP (aka IPv6).

* https://datatracker.ietf.org/doc/html/rfc1752


>how do you get 32-bit-only system to understand more-32-bits of addressing

Maybe you don't. I imagine it would work like this (where XXXX::<IPv4> is the transition block mapping the current Internet v4 into IPv6):

- 32 bit host could only talk to other 32-bit hosts, and other XXXX::<IPv4 Adress> hosts. We have this now, just without the XXXX::<IPv4> part.

- Hosts with v6 addresses outside of XXXX::<IPv4> could not talk to IPv4 addresses without having a 6to4 proxy. We also have this now.

- Hosts with v4 addresses could switch to IPv6, KEEPING their current v4 address by setting their IPv6 address to XXXX::<IPv4>. They can now talk to all the hosts they used to be able to, AND they can start talking to IPv6 hosts, without having to have two IPs, dual stack config, etc.

So we end up with a significant benefit of allowing people with IPv4 addresses and IPv6 connectivity to do an IPv6-only setup.

In my case, I simply don't see us transitioning to IPv6. Our main service has an IPv4-only address and we receive 0 complaints about it. We've literally never had anyone say they couldn't connect to our services because of it. Our users are geographically located in the central US, and everybody has IPv4s. Maybe they have an IPv6 as well, but if we went v6-only I can basically guarantee that we'd have users screaming at us. We'd probably have lawsuits over it. But going v4-only, not a peep.


> - 32 bit host could only talk to other 32-bit hosts, and other XXXX::<IPv4 Adress> hosts. We have this now, just without the XXXX::<IPv4> part.

Oh, you mean like the IPv4-mapped address (::ffff:0:0/96) as defined in RFC 4291 § 2.5.5.2, or NAT64 (64:ff9b::/96) as per RFC 6052. See also 6to4 in RFC 3056 dating back to 2001.

Your idea is not new.


Not my idea, wasn't representing it as my idea; djb proposed it long ago (as other commenters in this thread mentioned), and as you pointed out as IPv4 mapped addresses. I was just describing how it might work, since it's a little confusing. In particular when I first started thinking about it I kept getting stuck on "Ok, so how do non-v4 mapped IPs talk to v4 addresses?"

> Not my idea, wasn't representing it as my idea; djb proposed it long ago (as other commenters in this thread mentioned), and as you pointed out as IPv4 mapped addresses. I was just describing how it might work, since it's a little confusing.

Every time IPv6 comes up, some number of people talk about a protocol that 'just' added more bits to IPv4 and called it a day. But when you go over the details of how that would work, exactly, you end up in the same place as IPv6 (and its various 'transition mechanisms').

But there is no 'just' adding more addresses: you need to update (at least) end-hosts, update DNS, have relays/proxies if the middle of your network does not support IPng, have tunnelling to/from those special translation systems. It's basically the same thing as IPv6.

Certainly you can argue about keeping ARP versus going with NDP. Or not having SLAAC—but then you need infrastructure like DHCP (which is optional with IPv6).


>how can you fit >32 bits of data (IPv6) into 32-bit data structures (IPv4: struct in_addr)?

By using nat46 on ISP's routers.


>> how can you fit >32 bits of data (IPv6) into 32-bit data structures (IPv4: struct in_addr)?

> By using nat46 on ISP's routers.

I do not understand.

In Ye Olde Times a system would call gethostbyname(), and this would send out a DNS packet asking for the IP address. Of course the DNS record returned would be an A record, which is where your first problem is: A records are fixed at 32 bits (RFC 1035 § 3.4.1). So your first task is to create a new record type and update every DNS server and client to support the longer addresses.

Of course the longer address has to fit into the data structures of the OS you called gethostbyname() from, but gethostbyname() is tied to particular data structures. So now you have to update all the client DNS code to support the new, longer addresses.

Further, gethostbyname() had no way to specify an address type (like if you wanted the IPv4 address(es) or the IPng one(s)):

* https://man.freebsd.org/cgi/man.cgi?query=gethostbyname&manp...

The returned data structures only allowed for one address type, so you either got IPv4 or IPng. So if you wanted to be able to specify the record/address type to query, a new API had to be created.
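The API that was eventually created for this is getaddrinfo(); a minimal sketch via Python's socket wrapper, using a numeric host so it needs no network:

```python
import socket

# getaddrinfo() replaced gethostbyname() precisely because it is
# address-family agnostic: the caller can hint a family (or leave it
# unspecified) and gets back results for whichever families match.
# AI_NUMERICHOST keeps this sketch offline.
entries = socket.getaddrinfo("::ffff:192.0.2.1", None,
                             type=socket.SOCK_STREAM,
                             flags=socket.AI_NUMERICHOST)
for family, _type, _proto, _canon, sockaddr in entries:
    print(family.name, sockaddr[0])
```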

Luckily the next step, setting up a socket() and connect()ing is less of a burden, because that API already supported different protocols:

* https://man.freebsd.org/cgi/man.cgi?query=socket&manpath=Fre...

But if you have a host that supports IPng while its upstream router does not, how does a packet with a longer source address and a shorter destination address get through?

Do you send IPv4 packets with embedded IPng packets within them to translation relays, perhaps? How is this any different from IPv6 transition mechanisms like Teredo?


Your ISP's DNS converts the AAAA record into an A record with the IPv4 of its own (router-only) dual-stack machine; that machine does cgnat46 and visits the IPv6-only server.

If software has hardcoded IPv6 addresses, this is much harder to solve, but I don't think it's really an issue, because hardcoded IPv6 addresses are very rare.
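That resolver-side mapping could be sketched as a toy allocator (the class name and pool choice here are made up for illustration):

```python
import ipaddress

# A sketch of the DNS46 idea described above: the ISP resolver answers
# an AAAA-only name with an A record drawn from a private IPv4 pool,
# and the NAT46 box later translates traffic sent to that pool address
# back to the real IPv6 destination.
class Dns46Mapper:
    def __init__(self, pool_cidr="100.64.0.0/10"):
        self._pool = iter(ipaddress.ip_network(pool_cidr).hosts())
        self._v6_to_v4 = {}

    def map(self, v6_addr):
        """Return the pool IPv4 standing in for this IPv6 destination."""
        v6 = ipaddress.IPv6Address(v6_addr)
        if v6 not in self._v6_to_v4:
            self._v6_to_v4[v6] = next(self._pool)  # allocate on first use
        return self._v6_to_v4[v6]

mapper = Dns46Mapper()
a = mapper.map("2001:db8::1")
b = mapper.map("2001:db8::1")
print(a, a == b)  # the same destination reuses the same pool address
```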


Assuming you mean nat46, your description is a bit off. The NAT box doesn't use a single IPv4 address; it needs a big pile of IPv4 addresses to allocate from dynamically whenever a new AAAA record comes through.

But that means it has to intercept all DNS requests and doesn't work with anything it didn't intercept, which seems far from ideal.


>it needs a big pile of ipv4 addresses to dynamically allocate whenever a new AAAA record comes through.

But those don't have to be globally routable.

>But that means it has to intercept all DNS requests and doesn't work with anything it didn't intercept

This wasn't really a problem until DoH/DoT became a thing.

But even nowadays, DoH and DoT are provided by whom? By Google and Cloudflare, who are TLS-MITM'ing half of the internet anyway.

Moreover, most ISPs intercept DNS to display those "please enter your phone number to verify your identity" screens anyway.

>far from ideal.

No more imperfect than NAT. Idealism is what is killing ipv6.


If I'm going to run single stack locally I'm going to use IPv6 and a stateless converter for IPv4.

Which means that servers have zero incentive to convert to ipv6.

All this cgnat46 setup is not in the name of end users, because they will always adapt somehow, as reels have a huge force of attraction.

It is to make sure that service providers can deploy IPv6-only _servers_ without worrying that 60 percent of their target audience will be unreachable.

The issue with the ipv6 internet is not the end users, they are paying, so the ISPs will always adapt to their needs, or the users themselves will buy new equipment in the worst case, it's the service providers who stop being profitable by going ipv6-only.


Your idea also gives servers no incentive to convert, and if they do convert they become less reliable for anyone on IPv4.

That process seems extremely error-prone on anything but the most basic protocols... And given how many people here, for example, don't use their ISP's DNS... It just seems like a massively complex and very resource-intensive process.

>That process seems extremely error prone on anything but most basic protocols...

Worse is better.

Moreover, we really mostly need to make it work on http, that's enough to say "you can watch YouTube", but also deficient enough to stimulate ipv6 transition.

>given how many people here for example don't run their ISPs DNS..

Those people usually know how to deal with ipv6.

>Just seems massively complex and very resource intensive process.

No more complex than hanging in limbo for 30 years.


> Your ISP's DNS converts the AAAA record into an A record with the IPv4 of its own (router-only) dual-stack machine; that machine does cgnat46 and visits the IPv6-only server.

You mean like:

* https://en.wikipedia.org/wiki/IPv6_transition_mechanism#464X...

* https://en.wikipedia.org/wiki/4over6


No, it's the other way round.

If an isp already has full ipv6 deployment, accommodating for ipv4 is easy.

The biggest issue is, surprisingly, not ISPs, but service providers.


> It _was_ possible to create an address space extension that was IPv4 backwards compatible

How? People keep claiming this, but I've yet to see a coherent design (either back then or today). Do you think they broke backwards-compatibility _on purpose_? What's the motivation here?


By promoting cgnat46

> no address space overlap between IPv4 and IPv6, an inherent design flaw of the IPv6

This is not an inherent design flaw; it's brought up often in IPv6 threads, commonly referred to as the "just add more octets, bro" argument. This comment[0] sums it up well, but I'll leave it here for convenience:

> Fact is you'd run into exactly the same problems as with IPv6. Sure, network-enabled software might be easier to rewrite to support 40-bit IPv4+, but any hardware-accelerated products (routers, switches, network cards, etc.) would still need replacement (just as with IPv6), and you'd still need everyone to be assigned unique IPv4+ addresses in order to communicate with each other (just as with IPv6).

[0]: https://news.ycombinator.com/item?id=37120422


> Having two internets suggests that it's totally different, in the sense that there is no address space overlap between IPv4 and IPv6

That's not at all what I'm suggesting, I'm saying "two internets" because there are two internets: IPv4 and IPv6. You need two internet stacks on every host to deal with this, hence two internets. If everything in the IPv6 protocol suite was literally identical to IPv4, but just with some extra bytes in every address, there would still be two internets, because they are not mutually compatible.

Saying "two internets" is not a judgement call on whether IPv6 changed too much or is too different. It's just the literal truth. There are two internets, because one can't communicate with the other.


Well, that's just not true:

    $ ip -4 addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever

    $ ping -6 ipv4.google.com
    PING ipv4.google.com(sof04s06-in-f14.1e100.net (64:ff9b::142.251.140.78)) 56 data bytes
    64 bytes from 64:ff9b::8efb:8c4e: icmp_seq=1 ttl=55 time=27.9 ms

This is a v6-only machine (no v4 address other than lo) talking to a v4-only hostname. If it were truly not possible to communicate, this wouldn't work.
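The address in that ping output can even be decoded mechanically, since the low 32 bits of the 64:ff9b::/96 prefix hold the embedded IPv4 address (per RFC 6052); a sketch:

```python
import ipaddress

# The NAT64 address from the ping output above: the embedded IPv4
# address lives in the low 32 bits of the 64:ff9b::/96 prefix.
nat64 = ipaddress.IPv6Address("64:ff9b::8efb:8c4e")
embedded = ipaddress.IPv4Address(int(nat64) & 0xFFFFFFFF)
print(embedded)  # 142.251.140.78
```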

> If everything in the IPv6 protocol suite was literally identical to IPv4, but just with some extra bytes in every address, there would still be two internets, because they are not mutually compatible.

This part is true though. v6 is backwards compatible in rather a lot of ways, but v4 isn't forwards compatible, and it's important to note that this comes entirely from the larger address size -- no amount of making v6 more identical to v4 than it already is would make v4 any more compatible with it, unless you undermined the entire point and made the addresses 32 bits.
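A small illustration with Python's stdlib ipaddress module: only the IPv4-mapped range has any 32-bit representation at all, which is exactly why v4 can't be made forwards compatible:

```python
import ipaddress

# Only the 2**32 addresses inside ::ffff:0:0/96 can be squeezed into a
# 32-bit IPv4 field; a general IPv6 address simply doesn't fit.
mapped = ipaddress.IPv6Address("::ffff:203.0.113.9")
general = ipaddress.IPv6Address("2001:db8::1")
print(mapped.ipv4_mapped)   # 203.0.113.9
print(general.ipv4_mapped)  # None -- no 32-bit form exists
```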


Now try that on an ipv4-only host trying to talk to an IPv6-only address. You can’t, hence, 2 internets.

(Yes, I understand there is no other way for a larger address space to exist that doesn’t have this problem. We are in violent agreement here. But that doesn’t mean there aren’t 2 internets. Let’s call a spade a spade.)


I can do it with the moral equivalent of a port forward.

This situation was inevitable, but it's not like they just gave up and said "welp, guess they're separate then". We've developed basically every single method of communicating between v4 and v6 that are possible to develop. There's two address spaces, but they're linked into one Internet.

(Or really, with NAT in the picture, it's millions of overlapping address spaces, but we still consider that to be one Internet.)


> there is no address space overlap between IPv4 and IPv6

Yes there is: https://www.rfc-editor.org/rfc/rfc4291.html#section-2.5.5.2


It's not that different, it's separate.

>It _was_ possible to create an address space extension that was IPv4 backwards compatible, this option was just not chosen, and now we're still dealing with the dual stack mess.

This was never possible.


cgnat46 is totally possible


